Oxford University Press's Blog
January 13, 2021
Cubs galore
The time has come to find out where cub came from. I dealt with cub in my etymological dictionary, but a quick look at the word may not hurt, the more so as cub, which surfaced in English texts only in the early sixteenth century, turned out to be an aggressive creature: it ousted whelp, and later the verb to cub came into existence. Whatever the origin of cub (and the answer will be only partly illuminating), the constant suppression of old words by upstarts is a process worth noticing, regardless of the theme being discussed today. Not that whelp is such a dignified name (it rather probably means “yelper”: see the post for 16 December 2020), but still it is a Germanic noun of unquestionable antiquity, with relatives all over the place, as becomes whelps. Yet words like cub are irresistibly attractive: short, rootless, and mildly sound-imitating or sound-symbolic. This is how slang, this child of emotions and rude humor, originates all over the world and why its etymology is so hard to trace.
Cub emerged in the form cubbe, but adding superfluous letters was so common in the past (many scribes believed that the longer a word is, the more attention it will gain) that the existence of the second syllable is dubious. A few wild suggestions about the origin of cub may be disregarded, but two conjectures travel from one source to another and deserve attention. Irish cuib “whelp” is often mentioned as the source of cub. As a general rule, when a borrowing is posited, it is useful to know the origin of the source, but cuib is hopelessly obscure, so that tracing it and cub to the same ancient etymon holds out little promise, especially because English cub is so late. Equally unappealing is the idea of borrowing.

In the history of English etymology, two periods can be discerned. In the rather remote past, Celtomania prevailed: hundreds of English words having Irish or Welsh lookalikes were declared to be of Celtic origin. Some time later, the pendulum swung in the opposite direction, for that is what the pendulum always does. Now etymologists took it for granted that the Celts could only borrow from the English. Later, a reasonable approach prevailed. Researchers stopped jumping to conclusions, nationalistic fervor (in this area of scholarship) gave way to careful study, and the existence of the two-way English-Celtic street replaced fantasies and demagoguery. At present, an influential school exists that ascribes some of the most important features of English phonetics and grammar to Celtic, but it seems to have had no influence on English etymology. In any case, cub and cuib are unlike: the vowels do not match, and in the Irish word, final b is pronounced as v. It will be safer to assume that they are not connected.
A considerably more promising analog is Icelandic kobbi “seal,” which is sometimes believed to be a pet name of the older kópr (the same meaning), but the convergence between kópr and kobbi may be late, a product of folk etymology. A close neighbor of Icelandic kobbi is Icelandic kubbi “a block of wood,” and the English parallel cob ~ cub springs to mind at once. Unfortunately, cob is an awkward word for comparison, because it refers to so many things: a male swan, several fishes, a short-legged, stout variety of horse, a gull, a spider (known from cobweb), all kinds of lumpy objects, and “head” (compare corncob), among others. The seal’s head is a conspicuous part of its body, and kubbi may indeed be understood as “something round.” We seem to be returning to calf, with its posited original meaning “swelling; round object” (see the post for 6 January 2021). Cop “head” is so obscure that it is better to stay away from its history while discussing cob. The oldest history of German Kopf “head” remains undiscovered despite several centuries of attempts to find its source. Also, “head” and “round object” are not synonyms, a circumstance noted in the original version of the OED.

Despite all the difficulties we face, certain things are known. The strings k-b and k-p occur in numerous animal names, with the vowels randomly alternating according to the format described more than once in this blog. Here are a few examples of obsolete or dialectal words: Dutch kabbe ~ kebbe “little pig,” English kebbe “ewe that has lost her lamb or whose lamb is stillborn,” German kibbe ~ kippe “ewe,” Swedish kibb ~ kubbe “calf,” and dozens of other similar formations in Germanic. The problem is that in stringing together such words, one does not know where to stop. Some of them refer to sticks and blocks of wood (English chip, from kipp) or conversely to things fat or round (Old Icelandic kjabbi “fat person”). And each of those coinages may and sometimes does also designate a young creature or a small animal. Few of those formations were recorded in the oldest languages, and many are regional. They sound like primitive coinages, as do many other monosyllables whose etymology has never been discovered to everybody’s satisfaction: dig, dog, cut, put, and so forth.
In dealing with cub (and cob) and its likes, one faces an unexpected situation: the origin of the entire group looks rather transparent, but every word is a riddle. In any case, cub and cob (to the extent that the latter refers to round objects) seem to belong together. If this conclusion is correct, then cub, Old Germanic kal-b, and English cil-d “child” are synonyms, except that calf and child are ancient, while cub is relatively late. Old age does not make kalb– more important to a language historian. It only confirms the well-known fact that the impulses behind word formation never vary.

It may perhaps be useful for the authors of etymological dictionaries to adopt two formats: some entries will have a traditional shape (one entry for one word), while others will be devoted to small groups. Something like this is occasionally done, but rarely. Both cub and cob have similar referents and are upstarts. To a certain extent, the same etymology covers both. In this series, which is devoted to animal names, we may be satisfied with the conclusion that, for whatever reason, cub and calf made people think of things round and swollen; hence the names of those baby animals. The recorded words might and probably did migrate from community to community, merge, and be applied to different creatures. The complex k-b fits piglets, lambs, and whelps, among others, equally well and is so indeterminate that it easily acquired a more general sense. Hence so many animals—from seals to giraffes—have cubs. Once they grow up, they acquire other identities. But before that, all of them are their mothers’ kids.
Now you are probably wondering where kid came from. A good question, but not for a postscript to cub.

Questions on a Trump impeachment and invoking the Twenty-fifth Amendment
The past few weeks have been a tumultuous time in US politics, and a historic second impeachment for President Trump could be on the cards at the end of a presidency that has often been hard to predict. Drawing on Impeachment: What Everyone Needs to Know®, we look at some of the key questions surrounding such an action to remove him from office:
If President Trump is incompetent, may he be impeached for that?
There is a general expectation that most issues pertaining to a president’s performance in office are to be dealt with through the electoral process (if the president runs for re-election) and the other checks recognized as applying to presidential conduct, such as popularity, the press, the judgment of history, and congressional oversight. Impeachment is a last resort for handling misconduct that cannot be dealt with by other means and that involves misconduct sufficiently serious to constitute “treason, bribery, or other high crimes and misdemeanors.”
Instead of being subject to a statutory mechanism like the Judicial Discipline and Disability Act, presidents are subject to the Twenty-fifth Amendment, which was ratified in 1967. It provides a mechanism for handling a president’s becoming subject to some disability that prevents him from doing his job, such as a major stroke or serious mental illness. This mechanism seems better suited than impeachment for dealing with incompetence resulting from some mental or physical disability.
How does the Twenty-fifth Amendment work?
The Twenty-fifth Amendment has four sections. The first section codifies the precedent set by John Tyler, which clarified who became president when a president died in office. Tyler claimed that the president’s death automatically elevated him from the vice presidency to the presidency. The Twenty-fifth Amendment’s first section now makes that practice a constitutional directive.
Section 2 of the Twenty-fifth Amendment provides a procedure for replacing a vice president who resigns, dies, or is incapable of further performing the duties of his office. If any of those things happens, the president is empowered to nominate a replacement, who has to be approved by a majority of each chamber of Congress.
Section 3 of the Twenty-fifth Amendment provides a procedure for temporarily empowering the vice president to take over the responsibilities and duties of the presidency. It provides that when a president transmits a written declaration to the president pro tempore of the Senate and the speaker of the House that he is unable to perform his duties, the vice president assumes those duties until the president sends another written communication to the same officials declaring that he is capable of resuming his duties.
The fourth section of the Twenty-fifth Amendment provides a procedure to be followed if the president becomes disabled but is unable to produce the written communications required in Section 3. This procedure allows the vice president, together “with a majority of either the principal officers of the executive departments or of such body as Congress may by law provide,” to declare the president “unable to discharge the powers and duties of his office” through a written declaration submitted to the speaker of the House and the president pro tempore of the Senate.
Section 4 is the only section of the Twenty-fifth Amendment that has never been invoked. Sections 1 and 2 were invoked a total of three times during the Watergate era, and Section 3 has been invoked three times to make the vice president acting president temporarily, in each case for medical reasons.
Could President Trump have a problem with the Twenty-fifth Amendment?
The short answer is that it depends on the facts, but as we know from the plain language of Section 4, it comes into play if the vice president and a majority of the cabinet (or some other authority that the Congress has designated by statute) determine that the president has become disabled because of some mental illness or other problem.
This analysis cannot be a substitute for the kind of fact-finding that would have to be undertaken if this portion of the amendment were ever invoked. We know, from the congressional debates on the Twenty-fifth Amendment, that these provisions were intended to address mental or physical incapacitation, as well as situations where a president might be out of reliable communication or kidnapped. We know as well that the purpose of this section is not to provide a means for a “no-confidence” vote but to provide clarity, and therefore some safeguards, in circumstances when presidential incapacity requires putting the president’s second in command in charge of the government, at least temporarily. The requirements themselves suggest a high threshold for its implementation, since they depend on the president’s own allies and appointees coming together to a significant degree for the sake of the country.
If Congress has to determine a Section 4 dispute between the vice president and the president, the Constitution makes it highly likely that the president will win (as he should, given the likelihood that he is the one who has been elected to the office). The requirements (1) for the vice president, serving as acting president, and a majority of the cabinet to send, within four days of the president’s challenge to their initial declaration, a second declaration that the president is incapacitated and (2) for two-thirds of each chamber of Congress to express, within twenty-one days, their agreement with the second declaration of the president’s incapacity (as a prerequisite for the vice president’s continuing to serve as acting president) are powerful checks on the vice president and cabinet stealing the office from the president. The amendment’s high thresholds create a default rule that the president remains in office unless they can be met.
Whether that two-thirds support actually exists would of course depend on the facts and public perception at the time, as well as the congressional and public perceptions of the vice president and the cabinet. If, for example, the vice president and the majority of the cabinet were widely considered to be acting out of the best motives and perceived to have been loyal and credible, the public and members of Congress, particularly the president’s partisan allies, might be more receptive to the determination of the need to replace the president temporarily. The presumption underlying the structure is that if the two-thirds threshold were met, there would have to be compelling evidence that the president is incapacitated and thus unable to perform his duties.
Featured image by Alejandro Barba.

January 12, 2021
Good news for honey bees from 150-year-old museum specimens
The past several decades have been hard on Apis mellifera, the Western honey bee. Originally native to Europe, Africa, and the Middle East, Western honey bees have spread worldwide thanks to the nutritional and medicinal value of their honey, pollen, beeswax, and other hive products. Even more recently, the rise of the mobile hive and increased demand for pollination services have resulted in an army of bees being unleashed on crops each year, most notably almonds, which require several million bee visits per acre.
At the same time, the last 50 years have seen dramatic declines in honey bee populations due to pesticide use, climate change, and habitat destruction. Most notably, the spread of the parasitic mite Varroa destructor from Asia to Western Europe and North America in the 1970s decimated honey bee colonies, making it nearly impossible for them to survive without human intervention and resulting in the loss of the vast majority of wild and feral honey bee colonies. Given this decline, scientists have speculated that loss of genetic diversity among honey bees may be contributing to further losses in bee populations. A new study provides evidence that disputes this theory, however, suggesting that loss of genetic diversity may not be among the long list of threats to bee survival.
The study, led by Melanie Parejo, a postdoctoral researcher at the University of the Basque Country in Spain, involved the genomic sequencing of 22 bee specimens—some nearly 150 years old—from the Natural History Museum in Bern, Switzerland. The study represents the first whole-genome analysis of museum bee specimens, an accomplishment made possible by recent advances in sequencing that overcome the challenges of working with historic DNA, which is often highly fragmented. By comparing the genome sequences of the historic bee samples to those of modern bees collected across Switzerland, the authors sought to uncover how changes in agricultural practices over the last 50 years had influenced the evolution of the Western honey bee.
Due to recent declines in wild bee populations and increased breeding efforts, the researchers expected to see a reduction in the genetic diversity of the modern bees compared to that of the historic specimens. However, Parejo and her colleagues actually observed higher genetic diversity in the modern honey bees. “This finding was particularly surprising to us,” notes Parejo. “It was quite the opposite of what we expected and of the general narrative regarding honey bee diversity in the scientific literature, which points toward loss of genetic diversity as one of the many threats facing honey bees today.”

To explain their unexpected findings, the authors suggest that the honey bee’s unique mating system or long-distance mating flights may help populations maintain intrinsically high levels of variation. Moreover, the movement of hives and introduction of bees from different regions may be promoting increased levels of diversity in modern hives. Whatever the mechanism, this is good news for honey bees, as high levels of diversity have been shown to be crucial for colony fitness. Indeed, intra-colony genetic diversity is associated with lower pathogen loads and a better chance of survival, perhaps owing to an enhanced ability to adapt to local environmental conditions.
The researchers also identified signatures of natural selection between historic and modern Western honey bee populations. In modern bees, they found evidence for selection in immune-related genes, which may reflect the recent emergence or increasing prevalence of parasites and pathogens like Varroa or the bacteria that cause European foulbrood disease. Other genes under selection encode nervous system proteins that are the targets of several widely used pesticides. According to Parejo, these results “suggest that bees have had to adapt quickly to new challenges, particularly the increased use of chemicals in modern agriculture and beekeeping and the arrival of new diseases and parasites. These adaptations have left traces in the genomes of honey bees, allowing us to observe a small step in evolution.”
Overall, the results of the study should be reassuring to bee lovers as they suggest that Western honey bees maintain sufficient adaptive potential to face future human-induced and environmental changes. The authors caution, however, that specific, locally adapted genetic variants may be more important for colony survival than high levels of genetic diversity overall, and that such variants may still be lost due to population declines. In fact, there is a recent trend toward focusing conservation efforts on “functional” diversity, rather than total genetic diversity. Toward this end, genomic analysis of museum specimens may be of further use, enabling the identification of beneficial genetic variants that should be targeted for conservation.
Featured image by Filipe Resmini

January 11, 2021
Droplets, aerosols and COVID-19: updating the disease transmission paradigm
The severity of the COVID-19 pandemic and the subsequent torrent of research have brought a simmering debate about how respiratory infectious diseases are transmitted to a boil, in full view of the public. The words airborne, aerosol, and droplets are now part of the daily news—but why?
Over the last decade there have been calls within the scientific community to change the vocabulary of disease transmission routes for respiratory infectious diseases because the definitions of airborne and droplet transmission routes, which came into use in the mid-1950s, are not supported by emergent evidence. People with respiratory infectious disease release pathogens from the respiratory tract, typically in droplets of respiratory fluids. Airborne transmission occurs when small droplets containing pathogens are inhaled by a susceptible person who is far away from the infectious person. Droplet transmission occurs when large droplets land on the facial mucous membranes of a susceptible person who is close to the infectious source, e.g. a cough in the face. These definitions of airborne and droplet transmission fail because we now have very good evidence that the distinctions by droplet size and distance from the infectious source are artificial. Respiratory fluids are emitted in droplets that vary widely in size, small droplets can be inhaled by people close to the source, and the distance over which small and large droplets are transported is highly variable, dependent upon environmental conditions. To address the gap between airborne and droplet transmission, some scientists have proposed a third route: short-range airborne transmission. Others, myself included, have advocated replacing the two routes with a new concept: aerosol transmission.
Simultaneously, there has been a debate about which transmission route or routes are dominant for certain infectious diseases. For most viral respiratory pathogens, the convention is that droplet and contact transmission are more important than airborne transmission. Evidence, particularly that which has emerged from research on influenza and SARS, has led to recognition that some viral respiratory pathogens may be transmitted through the airborne or aerosol route. That is, virus inhalation can result in infection. Regarding COVID-19, the evidence of airborne or aerosol transmission has become overwhelmingly persuasive to many in the scientific community, leading to vocal criticism of public health organizations like the World Health Organization and the United States Centers for Disease Control and Prevention which argued, at least initially, against this possibility.
Identifying the transmission route of an infectious disease, including COVID-19, is not an esoteric scientific question because it is the transmission route that drives the infection prevention and control strategies. Consistent with initial determinations that COVID-19 is transmitted through the droplet and contact routes, the primary public health interventions were physical barriers, physical distancing, face shields, and hand hygiene. Later, cloth masks were added. These strategies have little impact on the movement of small droplets that can be inhaled, though new research suggests that cloth masks can prevent the emission and inhalation of small droplets. The performance of cloth masks (and surgical or medical masks), however, is meaningfully inferior to that of respirators, including N95 or FFP2 filtering facepiece respirators.
Acknowledging the contribution of airborne or aerosol transmission to COVID-19 requires that we take different precautions to protect workers, including the use of respirators to prevent inhalation of respiratory droplets (large and small), and ventilation or filtration devices that capture respiratory droplets at the source or while traveling through the air. While updating the disease transmission paradigm during a global pandemic may seem overwhelming, we owe it to the workers who sustain the functions of daily living and are at risk of COVID-19—and a myriad of endemic and future pandemic diseases—to use the best, current scientific evidence to guide prevention strategies.
Feature image: CDC/ Debora Cartagena 2013.

January 9, 2021
Playing to lose: transhumanism, autonomy, and liberal democracy [long read]
The debate over human “enhancement,” or the biotechnological heightening of human abilities, is prominent in bioethics. The most controversial stance is transhumanism, whose advocates urge us to develop biotechnologies enabling the “radical” elevation of select capacities, above all, rationality.
Transhumanists insist that their vision of the radical bioenhancement of human capacities is light-years removed from prior eugenics, which was state managed. Decisions about how far and even whether to enhance oneself and one’s children-to-be would stem strictly from personal discretion. Since autonomy is retained—indeed, powerful biotechnologies would offer individuals marvelous new avenues for its expression—transhumanists’ vision fits squarely within liberal democracy. Or so we are told.
This reassuring, empowering picture is undercut by transhumanists’ own arguments, which offer incompatible pictures of personal autonomy in relation to decisions about the use of bioenhancement technologies. Autonomy is, indeed, front and center when transhumanists’ immediate goal is debunking the charge of substantive ties to eugenic history. It recedes, however, when they focus on why one proceeding rationally should find their “posthuman” ideal compelling. Here, transhumanists depend on rationales from utilitarian ethics, within which autonomy cannot be valued in its own right, to support the strong desirability of bioenhancement and even its moral requirement.
Utilitarian ethics and its ties to politics
For utilitarians, only well-being, gauged in terms of states of affairs, is intrinsically worthwhile. Utilitarians aim to maximize well-being, calculated in terms of the overall balance of benefit and harm. Decisions are to be made impartially, their reference point not individuals or families, but, instead, generations. From a utilitarian perspective, the course deemed to maximize generational well-being is the rational and, thus, morally required path.
Ethical and political stances are always connected, but in utilitarianism, this tie is especially tight. A utilitarian umbrella for law and policy would run counter to liberal democracy, a cornerstone of which is personal autonomy. Familiar measures undertaken to promote public health, whose dominant warrant is utilitarian, are a visible exception to the free rein that individual discretion usually enjoys. Though they are exceptional in this sense, such measures in the United States—among them laws precluding smoking in public areas and requirements that parents immunize their children as a condition of school admission—are consistent with liberal democracy and foster individuals’ ability to be active members thereof. By and large, these measures and, thus, at least implicitly, their utilitarian justification, are accepted. Furthermore, in deference to autonomy, some of these requirements allow for exceptions (e.g., most states permit religious exemptions to the immunization requirement).
Contra the above scenario, within transhumanism, contextually variant emphases on autonomy and broader welfare are not distinct aspects of a unified perspective. Rather, transhumanist writings include two separate lines of argument whose implications for personal autonomy are deeply at odds.
What are these antithetical positions on autonomy?
Although transhumanists vaunt autonomy when insisting that their thought is remote from eugenic history, their dependence on utilitarian rationales when directly supporting bioenhancement is usually tacit or denied. An outlier is Ingmar Persson and Julian Savulescu’s contention that, to avoid human extinction, bioenhancement of two moral attitudes, altruism and “a sense of justice,” is morally required. They assimilate moral bioenhancement to fluoridation and education, familiar measures aimed at public health and welfare, pointedly leveraging those measures’ utilitarian justification. In addition, Persson and Savulescu’s fluid, open segue from a moral requirement to a legal obligation is vintage utilitarianism. What’s more, to ensure that no one was psychologically primed to wreak disaster on humanity, the use of moral bioenhancement would have to be exceptionless.
Moral bioenhancement emerged as an area of focus within transhumanist writings only in 2008, and, thus far, its advocacy has been tied closely to Persson and Savulescu. The utilitarian reasoning that they employ, however, is also evident, albeit tacitly, in two focal areas of transhumanist concern: cognitive bioenhancement and procreative decision-making.
Cognition is the flagship capacity that transhumanists would augment: this ability, as instantiated by the most gifted scientists and technologists, would drive humanity’s self-transcendence; moreover, in a radically augmented form, it would be the fulcrum of posthuman existence. When urging cognitive bioenhancement upon us, transhumanists draw parallels with familiar public-health measures, including fluoridation, vaccines, and seatbelt laws. But they forge these connections to public health without owning the utilitarian justification that anchors all such measures. Beyond that, multiple transhumanists showcase what they deem societal boons of cognitive bioenhancement, including greater economic productivity.
Transhumanists’ union of cognitive bioenhancement with familiar public-health measures lends it the same ethical justification, and this utilitarian anchor brings in its train the prospect of sociopolitical requirements. That state enforcement is a desired or logical outcome of the moral imperative to enhance is evident when Nick Bostrom favors pressure on individuals, in realms such as health insurance, health care, and education, as an interim tactic to ready society for legal mandates. Contrary to what occurs with moral bioenhancement, however, when pressing for the cognitive variety, transhumanists themselves do not own—let alone connect—the utilitarian dots.
Transhumanists pin their hopes on cognitive ability to steer the production of humanity’s “godlike” successors. Implementation of their ideas in procreative decisions is, therefore, key. It is also the arena where transhumanists’ dichotomous handling of personal discretion comes most vividly to light: their welcoming stance to create distance from earlier eugenics and quashing of it when locked on supporting their own ideal.
When we are told that the availability of powerful biotechnologies would dramatically augment parental freedom of choice, procedural autonomy reigns. Here, far from ranking decisions due to the caliber of their content, transhumanists presume that what renders decisions legitimate is that they reflect the values and priorities of their makers.
The Principle of Procreative Beneficence (PB) is transhumanists’ best-known prescription involving reproductive decisions. According to PB, parents-to-be are morally obliged “to aim to have the child who, given her genetic endowment, can be expected to enjoy most well-being in her life” (275). Concisely put, parents should “create children with the best chance of the best life.” From this “ideal” utilitarian standpoint focused on capacities, dedication to their maximization is the sole rational course.
Counterintuitively—given their preoccupation with radical bioenhancement—transhumanists often unpack well-being in terms of harm-avoidance, not benefit-provision; thus, measures like cognitive bioenhancement are touted in terms of limitations that biotechnology would erase. Here, the scope of “harm” swells vastly beyond its usual sense, for, as John Harris asserts, a condition is deemed harmful “relative to possible alternatives” (92). From this vantage point, harm is wrought whenever a biotechnology that is available to heighten a feature is not deployed.
According to Harris, we should eliminate “condition[s] that someone has a strong rational preference not to be in” (91). Here, “someone” means “anyone at all.” This generic frame of reference meshes with decision-making in public health. As Melinda Hall observes, PB covers “the population,” with individuals enjoined to do what they can to avoid harming it (xi, 22). Transhumanists’ tendency to see the provision of benefit in terms of harm-reduction itself reflects the lens of public health, whose rightful scope at a given juncture relates closely to what the “harm principle” is thought to cover. Within liberal democracy, harm-avoidance is the preeminent warrant for public-health requirements that constrain personal autonomy.
How remote are we now from a scenario in which personal discretion is front and center? Where autonomy guides decision-making, all of the following are morally legitimate paths: opting against bioenhancement, for oneself and/or one’s children; welcoming all bioenhancements available at a given time; and embracing some, while rejecting others. In contrast, when reproductive decisions are filtered through PB, the procedural autonomy that transhumanists extol with eugenic history in view must be scrapped as welfare reducing, for it preserves decisional options that are objectively illegitimate, that is to say, irrational.
Though Julian Savulescu and Guy Kahane recognize that the Principle of Procreative Beneficence lends itself to an impersonal defense, they do not glean the utilitarian anchor of that defense. Allegedly, PB is fully compatible with liberal democracy, for the moral requirement it levies should not be legally enforced. The utilitarian frame of PB, however, undermines an attempt to cordon off moral from legal obligations. Since transhumanism is advocacy of radical bioenhancement, transhumanists’ argumentative strategy when supporting their posthuman vision is ultimately decisive.
Impact on transhumanists’ denial of ties to prior eugenics
The implications for transhumanists’ disavowal of substantive ties between their thought and prior eugenics are stark because their insistence that what they propose fits within liberal democracy is the bulwark of these denials. When transhumanists distance themselves from eugenic history, their denials tend to feature Nazi eugenics. This gives their repudiations an unwarranted argumentative edge, for the existence of substantive connections is clearly evident when the comparison point is Anglo-American eugenics, which antedates the Nazi variety. While transhumanists’ reliance on utilitarian rationales is usually tacit or denied, Anglo-American eugenicists, including Hermann Muller, Julian Huxley, Karl Pearson, and J. B. S. Haldane, openly gave primacy to overall well-being, whether arguing for the erasure of antisociality or the dramatic boosting of reason and prosocial attitudes. They stressed, as well, the entwining of ethical and political warrants for their favored biological measures. Enacting this interrelation, American eugenics included legislative strictures in the areas of immigration, sterilization, and marriage.
Ties of transhumanism to concerning trends today
Transhumanism is also tied revealingly to current, expansive trends involving “population health” and individual responsibility for conduct deemed to be health-related. Transhumanists’ aim—humanity’s self-transformation into “posthumanity”—is extreme. Nonetheless, their subordination of personal autonomy in areas—including procreative decision-making—where liberal democracy gives it broad scope reflects a more general relaxing of boundaries today between the arenas of “health” and “public health,” and an expansion of the latter’s scope beyond familiar initiatives.
For some decades in the United States, individual and population health have been the focus of discrete professional settings—clinical medicine and public health, respectively. Though questions of broader resource allocation have relevance, the guiding idea has been that in clinical medicine, values and priorities of individual patients should be given central weight. Today, as Madison Powers, Ruth Faden, and Yashar Saghai point out, “Traditional lines between clinical and other aspects of health promotion are blurring at a rapid pace” (6). Indeed, the very “paradigm” for considering “health” is shifting from clinical medicine to “public health”—an interpretive change that “risks running afoul of normative strictures concerning the bounds of legitimate state action in the area of health” (425). At the same time, due, for instance, to increased attention to social determinants of health, the scope of what is taken to affect population health has swelled. If this trajectory continues, emphasis may shift from the question of what “public health” legitimately encompasses to what it can reasonably exclude.
Because utilitarian rationales translate readily into sociopolitical requirements, this trajectory involving public health should concern us. Public health is “by definition an arm of the state.” Therefore, as Ruth Faden and Sirine Shebaya observe, lodging an activity or behavior under a public-health umbrella can be “an effective way of taking it out of the realm of legitimate discussion.… Government actions aimed at securing health may be less scrutinized than actions aimed at more controversial ends, leaving public health officials with too much power and too little democratic accountability.”
Linguistic appearances notwithstanding, a mounting emphasis on “personal responsibility” for health fits with the notion that we are shifting to a public-health paradigm, with its preventive core. Dorothy Porter ties this tendency, which has been building for quite a while, to the waning dominance of epidemics of infectious disease over the course of the twentieth century, as this prompted fresh scrutiny of how personal conduct influenced population health (314). If the sphere of individual accountability continues to grow, individuals might eventually “be obligated to submit before actually becoming ill to policies which enforce this responsibility, including those which interfere with ordinary liberties” (90). Here, individuals are equally accountable for acts and omissions. If this trajectory continues, such that we enact constructions of health, public health, and personal responsibility that liberal democracy cannot readily absorb, there is no guarantee that it will remain intact.
As if this were not enough, transhumanists embrace perfectionism, as did prior eugenicists, when suggesting that the maximal upgrading of capacities, based on available biotechnologies at a given time, is highly desirable and even a moral obligation. Here, transhumanists take to an extreme a currently surging perfectionism, such that, in tandem with an internalization of “irrational social ideals of the perfectible self,” parental investment in children’s autonomy is waning (413–14). This perfectionism, combined with the aforementioned trends involving health, public health, and individual responsibility, yields a dangerous mix that stands to jeopardize the “value pluralism” and personal autonomy that are cornerstones of liberal democracy.
Transhumanism is legitimately critiqued for proponents’ insistence that nothing short of humanity’s self-transcendence is a rational aim. Although the substance of this ideal is not a logical consequence of the aforementioned broader trends, the line of reasoning that transhumanists employ in its defense reflects the utilitarian springboard from which these developments gain their ethical purchase. In liberal democracy, fostering public health and welfare without jeopardizing the pillar of personal liberty requires ongoing navigation and reflection. If we took our marching orders from transhumanists, and were able to produce humanity’s “godlike” successors, the question of how to foster overall welfare without devitalizing autonomy would be moot. For posthumanity would have supplanted us, the very beings for whom this matter is of urgent concern.
Featured image by Amine M’Siouri

January 8, 2021
Impressionism’s sibling rivalry
Sixty world-famous impressionist paintings arrived at the Royal Academy of Arts in London from Copenhagen in March of this year, a whisker before lockdown was imposed. Instead of drawing box-office crowds, they sat in storage for four months. But then the Academy reopened its doors in August with the Covid-secure “Gauguin and the Impressionists.” That this exhibition sold out so quickly is testament not only to our hunger for unmediated culture after a period of captivity, but also to the enduring popularity of impressionism. What many in those socially-distanced crowds may not have realized is that impressionist painting has a less widely appreciated younger sibling, literary impressionism. While this literary category can encompass a range of canonical writers—Flaubert, Proust, James, Mansfield, and Woolf, for example—it has nothing like the fame of simple “impressionism,” which immediately evokes a cluster of famous images in which painters recorded their first impression of a scene, with hasty and broad brushstrokes. Why?
Five years after the first impressionist exhibition of 1874, which included works by Monet, Degas, and Renoir, an article called “Impressionism in the Novel” appeared in Paris. Here the critic Ferdinand Brunetière described literary impressionism as a successor to realism which sought to pictorialize realist narrative. The term caught on. Novelists such as Flaubert, Zola, and Maupassant were soon claimed for the new school. Across the Channel in Britain, over the next fifty years, famous novelists such as Hardy, James, Conrad, Ford, and Woolf would all claim that the novel too was an “impression,” or tried to capture impressions, in various artistic manifestoes. Just as impressionist painters recorded their first impressions of a scene, impressionist novelists represented their characters’ conscious experience as it unfolds, via a narrator who seems to be as ignorant as the reader as to future events. These are psychological novels in which characters try to make sense of their lives, often by teasing out the significance of a fleeting impression of an event, in the hours or years which follow it. Yet “literary impressionism” rarely figures as a term in the lecture lists and textbooks of universities’ English literature departments. Why not?
One reason is that the concept of literary impressionism argues for continuities between the nineteenth and the twentieth centuries, bridging realism and modernism. But this sits uneasily with literary modernists’ Oedipal struggle with their nineteenth-century parents, their efforts, in the words of Ezra Pound, to “make it new.” Virginia Woolf told us that “on or about December 1910 human character changed,” and since then “modernism” has become totemic for literary critics as the notion that the twentieth century brought in a decisive shift in human experience and in its literary representation.
A conceptual problem also accounts for the low profile of the term “literary impressionism.” The term seems to rely on a comparison of painting and literature, and—what is more—to imply the priority of painting over literature. But are impressionist novels really the offspring of impressionist paintings? How can painting give birth to literature? The potential pictorialism of literary impressionism has caused widespread anxiety because, in Jesse Matz’s words, “literature… means ideas, reflection… and so it has no place for the merely perceptual impression.” Can a novelist ever really “pictorialize” their narrative? Anyway, surely some of these novelists, James or perhaps Proust, are not particularly pictorial, are they?
Perhaps literary impressionism can be reclaimed as a term once we also reclaim for this period the ancient notion of “sister arts.” Rather than seeing literary impressionism as the genre-defying daughter of painterly impressionism, we can see it instead as a younger sister who has been informed by similar cultural and intellectual contexts. We can see this more clearly if we remove the “-ism” from these two sister arts. After all, the term “impressionism” came from critics, not painters or novelists. To think about “Claude Monet’s art of impressions” or “Henry James’s art of impressions” helps us to see each artist as distinct, but informed by the same representations of impressions, whether philosophical, psychological, or cultural.
Among the contexts which the impression unlocks for us, one dominates: empiricism, a philosophical movement which began in Britain but had global influence, including on painterly and literary impressionism. The early empiricists, John Locke and David Hume, argued that all our knowledge is founded in experience, and, more or less, that such experience is first encountered through impressions. Impressions provide the raw material of thought: all of our ideas derive from these impressions. Our minds then combine these ideas to create more complex ideas which themselves make further impressions on us. For empiricists, then, experience comprises two kinds of perception: external perception, perceiving the outside world, and internal perception, perceiving ideas within our minds.
Given this philosophical context, one way to understand literary impressionism is that it harnesses the power of words to understand how the images we see interact with the images in our minds when we think or remember. It stages for us what Henry James called the “drama of consciousness” by putting impressions in contact with ideas, external with internal perception. By contrast, impressionist painters focus on our external perception. Literary impressionism is perhaps then neither the warring sibling of painterly impressionism, nor its retiring child, but instead brings its own talents and interests to the family. Some of us are lucky enough at the moment to have the health and time to crave culture during the current crisis. While we are locked down, and impressionist paintings are locked away, we shouldn’t forget that many of the books on our shelves have their own impressions to offer us.
Feature image: Landscape of the Moon’s First Quarter, by Paul Nash

January 7, 2021
Was the dog-demon of Ephesus a werewolf?
Apollonius of Tyana was a Pythagorean sage and miracle-worker whose life was roughly conterminous with the first century AD. He is often, accordingly, referred to as “the pagan Jesus.” We owe almost all we know about him to a Life written by Philostratus shortly after AD 217.
In one of the biography’s more striking episodes (4.10), the great man eliminates a plague (a timely subject indeed for us!) that has fallen upon the people of Ephesus. The suffering citizens send an appeal for help to him in Smyrna, some 35 miles distant. Apollonius does not delay, but presents himself in the afflicted city instantaneously, either by teleporting himself or by projecting his soul from his body in visible form and sending it flying (a familiar Pythagorean feat). He undertakes to put an end to the disease at once and leads the townspeople into their theatre. There he points out an old, ragged beggar, toting a bag with a morsel of bread in it, and squinting. He assembles the citizens around the beggar and tells them to collect as many rocks as they can and stone him with them, as he is an enemy of the gods. Understandably, they are taken aback, and reluctant to kill a stranger, not least one in such an unfortunate condition. The beggar begs Apollonius for pity, but the sage is uncompromising and eventually prevails upon the citizens to start pelting him. At this point the beggar looks up and opens his eyes, revealing them to be full of fire and proving that he is a demon (daimōn). The Ephesians now proceed with the act of stoning, with all the greater confidence and alacrity. In the end, the beggar is completely buried beneath a great pile of rocks. After a little while, Apollonius asks them to remove the rocks and see what beast (thērion) they have killed. They find that the man they think they have stoned has disappeared, and in his place is the carcass of an enormous dog, resembling a Molossian hound in form, but a lion in size. It has been crushed by the stones and is spitting foam from its mouth, as if rabid. The act is in due course commemorated by the erection of a statue of Heracles the Averter in the place where the apparition (phasma) had been pelted to death.
We must bear in mind that the Life is a complex and ironic product of the so-called “Second Sophistic” movement. The story need not be taken to document folk beliefs in any simple or unmediated way, nor need it be fully coherent or consistent with any one set of beliefs. Nonetheless, there is much about this episode that makes appeal to the ancient imagery of the werewolf.
The terminology Philostratus applies to his antagonist is centrifugal, to say the least. Phasma (“apparition”, “manifestation”) might be applied to a god, a demon or, especially, a ghost. Normally it would denote something intangible, but our antagonist is clearly highly tangible, at least in his dog form; perhaps the term phasma is intended to relate more specifically to the physical dog’s cloaking of itself with the beggar’s form. “Demon” (daimōn) seems an appropriate enough term for an entity able to inflict plague—but then one would expect such things to be immortal, not killable; a subcategory of demon, the neky-daimōn (“dead-demon”, i.e. “ghost”) is already dead, but ought not to be killable a second time.
It may be, nonetheless, that the imagery of the ghost is evoked. In antiquity, werewolves had rather greater affinities with ghosts and the dead than they do in the modern imagination. In Petronius’ famous werewolf story (Satyricon 61-2; AD 66) a soldier transforms himself into a wolf in a cemetery; after witnessing the change, his companion, Niceros, the narrator, imagines himself to be beset by ghosts on all sides. The second-century AD physician Marcellus of Side used the metaphor of werewolfism to characterize a condition he identified. He termed the condition lykanthrōpia (“wolf-human-ism”, the source, of course, of our word “lycanthropy”), and its principal symptom was a compulsion to hang around tombs (Aëtius of Amida, Libri medicinales 6.11; iv AD). But let us note that, in more recent times, the vrikolakas of Balkan folklore is found in the forms of the werewolf and the vampire alike.
Marcellus and Aëtius help to clarify a point that will already have been troubling most readers hitherto: can a dog really count as a wolf, or be regarded as sufficiently similar for our purposes? Yes, it can: they give kynanthrōpia (“dog-human-ism”) as an alternative term for the condition in question. Furthermore, our dog’s foaming mouth also makes appeal to the wolf: the term Philostratus uses for “rabid” (lyttōntes) literally signifies “going wolf.” And indeed the Molossian, the large and fierce hunting dog to which Philostratus compares the form of his beast, was surely the most lupine of all the ancient breeds.
One detail in particular that seems to draw the demon close to the ancient paradigms of werewolfism is that of the beggar’s bag with its morsel of bread: a colourful detail, though one that initially seems to lead nowhere within the swiftly narrated tale. But in the case of Petronius’ werewolf story once more, the narrator, Niceros, concludes his tale with the affirmation that, having discovered his friend to be a werewolf (versipellis), he refused ever again to break bread with him. We must not be misled by the English idiom here: Niceros does not metaphorically refuse to converse with his friend, but rather literally refuses to share a piece of bread with him. The implication evidently is that the wolf-transformation can be triggered by the ingestion of a bit of bread. So Philostratus may be offering us the idea that the beggar had used his bread in his final moments to bring on his own transformation, in hopes of being able to escape or to get the better of the townspeople in his dog form.
Philostratus’ tale shares a range of common motifs with another problematic werewolf tale from the ancient world: Pausanias’ story of the Hero of Temesa (6.6; later ii AD). Like our antagonist, the Hero is presented as both a demon and a ghost; like our antagonist, he is clothed in rough fashion—in an all-important wolfskin, no less; like our antagonist he is stoned to death, albeit at the beginning rather than at the end of his story.
All in all, there surely is enough here to justify the dog-demon of Ephesus’ inclusion in, or strong association with, the canon of the werewolves of the ancient world.
The central motif of Philostratus’ tale is saluted (consciously or otherwise) in the appendix to Guy Endore’s 1933 novel The Werewolf of Paris, which has the name—amongst werewolf aficionados at any rate—of being the only werewolf novel of any literary merit. Here Bertrand, the werewolf, is buried, after his suicide, in humanoid form. When his coffin is accidentally opened some years later, it is found to contain only the remains of a dog.
Feature image by David Dibert
The post Was the dog-demon of Ephesus a werewolf? appeared first on OUPblog.

January 6, 2021
A mild case of etymological calf love
A Happier New Year! After a short break, the Oxford Etymologist begins with a post having the magic number 777. (Yes, since 1 March 2006, this blog series has appeared on the OUPblog 776 times.) A quick reminder: in December 2020, there were no “gleanings” (wait for the last Wednesday of this month). My topic was a farm inhabited by very small animals: one post bore the title “A Zoological Kindergarten” and the other “The Ubiquitous Whelp.”

In passing, I wrote that one day I might perhaps deal with calf. There was a reason I was not sure whether calf deserves a special essay. As far as I can judge, the origin of this word contains relatively few riddles, and in this blog, I prefer not to repeat what can be found in solid dictionaries and on reliable websites. But there is a hitch in relation to the frolicsome calf. When the letter C in the OED was being put together, the etymology of calf remained almost unknown, and James A. H. Murray only listed a few indubitable cognates. In January 2021, calf remains unedited in OED Online, while The Oxford Dictionary of English Etymology (1966) missed the later literature and repeated the original version. The same holds for the other calf (the calf of the leg), whose origin has also been discovered. Several later dictionaries do give part of the needed information but offer little discussion. That is why I decided to give calf a chance. Also, it may be reasonable to begin the year we rang in with such great hopes on a tranquil note: everything is clear, everybody is happy, and the etymology of all the seemingly impenetrable words has been found.

Calf is a Common Germanic word, and it sounded almost the same in all the older languages. The Old English form was cealf, with ea going back to a, so that the form has not changed since Day One (only l has been lost in the middle). Dutch kalf, Modern German Kalb, and Danish kalv, to cite a few examples, are still recognizable as related to the English noun. Even the Gothic Bible, recorded in the fourth century, had kalb-o. The meaning of the old word has not changed either, but today, calf refers not only to the young of the cow: giraffes, whales, elephants, and many other large mammals also give birth to calves. Perhaps the same usage prevailed a thousand years ago (see below).

The vowels in our word’s root varied by ablaut. Old High German kilbur ~ kilburra meant “ewe,” apparently, with reference to that animal’s ability to give birth. If so, calf may have been a cover term for several young mammals even two thousand years ago. An exact cognate of Germanic kalb- is Latin Galba, a vulgar nickname (later an accepted cognomen), familiar from Roman history. The Romans believed that the word was of Celtic origin, and there is no reason to doubt their belief. Galba meant “pouch, fat belly.” Since Germanic k corresponds to g outside Germanic, galba and calf are a perfect match. Latin globus also belongs here (the vowels vary by ablaut).
The ancient Indo-European root of our word appears to have been gal-, with -b being some sort of suffix, or “extension,” as such opaque suffixes are called in special works. If so, Gothic kil-þ-ei “womb, uterus” (þ = th, as in English thick) is related to cal-f. I mentioned kilþei in the post “An Etymological Kindergarten.” Its English cognate is child, another “young animal.” “Fat belly; womb” (and “the fruit of the womb”)—such are the words having the old root gel- ~ gal- (Germanic kel- ~ kal-). All such observations were made at the end of the nineteenth century and expanded in numerous works in the twentieth. Etymologists concluded that the root meant “swelling,” and this conclusion seems acceptable. I’ll skip one or two fanciful derivations of calf as being of no interest in the present context. Among the animal names usually listed as having this root, a few may not belong here, and perhaps the root gelb- ~ galb- had a variant beginning with gw. All such obstacles notwithstanding, calf, rather certainly, meant either “a round mass” or “the fruit of a round mass” (that is, “womb”).

We can now go on to calf of the leg. The word surfaced in English texts only in the fourteenth century and was believed to be a borrowing from Scandinavian (Old Norse kálfi; á designates a long vowel, but it is the product of later lengthening and won’t affect our reasoning). This conclusion looked plausible, because leg is certainly a loan from Scandinavian; it superseded the native name of the lower extremity, which was shank. But kálfi did not supersede anything. Similar-sounding words occur in the Modern Celtic languages (calpa and the like), and some researchers looked to Irish Gaelic as the source of English calf. However, Irish lp would not have become English lf. The borrowing must have gone in the opposite direction, the more so as the Celtic words have no ascertained origin.

Yet no such conjectures are needed. It may seem incredible but calf “the young of the cow” and calf “part of the leg” are two senses of the same word. Old Norse had kálf-r and kálf-i, and before concluding that the English noun was a borrowing, etymologists, naturally, asked how the Scandinavian words are related. They failed to find an answer and said that the origin of kálfi is unknown. But note the definition of calf, noun 2 in the OED: “the fleshy hinder part of the shank of the leg,” that is, again “a swelling, the bulging part of the body.” The connection seems strained only at first sight. Convincing analogs will dispel all doubts. Manx bolg designates “belly,” while balgane, literally “little belly,” means “calf of the leg.” George Hempl, a distinguished language historian, wrote an excellent article about such words as early as 1901 (Modern Language Notes XVI, 280-81).

An especially illuminating case is Russian ikra “roe, spawn of a fish” (stress on the second syllable) and (!) “calf of the leg.” The analogy is of course not between “shank” and “roe” but between two “swellings”: the organ that produces roe and the fleshy muscle of the leg. The Russian pair has exact analogs elsewhere in Slavic. In Dutch, too, kuit has the same two meanings as in Russian. Those interested in other similar cases will find them in several indigenous languages of Alaska: “roe” and “calf of the leg”; “roe” and “kidney” (International Journal of American Linguistics 51, 1985, p. 485).
The Slavic-Dutch-Alaskan case does not only provide us with a most interesting association. It clinches the entire deal. Calf did mean “a round mass,” as was suggested long ago, and it is no wonder that quite a few other animal names have the same root. To be sure, it would be good to know why a round mass or a swelling was called gal– by our distant Indo-European ancestors, for what do we gain when at the end of a long journey we triumphantly produce such a meager trophy as a monosyllabic unit endowed with a certain meaning? A sound-symbolic or sound-imitative complex? I am afraid we’ll never be able to go so far. This is where etymology stops, and vague, though not unprofitable, psychological musings begin.
Feature image by Natalia Kollegova
The post A mild case of etymological calf love appeared first on OUPblog.

January 5, 2021
The economic and environmental case for electric vehicles
Electricity generation draws on many energy sources, including fossil fuels such as natural gas and coal, nuclear energy, and a variety of renewable sources such as wind, solar, hydroelectric, and biomass. For the transportation sector, however, energy comes primarily from crude oil. In 2019, 91% of energy for the transportation sector came from crude oil, with the bulk of the remainder coming from compressed natural gas (CNG) and ethanol. As for the two emerging transportation technologies, the electric vehicle (EV) accounted for less than 1% and the hydrogen-powered fuel cell vehicle (FCV) for two orders of magnitude less than that.
Technically, EVs and FCVs are both electric vehicles. The EV is powered by electricity stored in a battery, while an FCV is powered by a fuel cell, in which the electricity is generated from hydrogen. FCVs have the advantage of refueling times equivalent to an internal combustion engine (ICE) powered by gasoline, but they suffer from a lack of infrastructure, high fuel prices, and the technical difficulty of storing hydrogen onboard in any form other than high-pressure gas. In addition, hydrogen must first be produced by reforming, gasification, or electrolysis. There are currently only 46 hydrogen fuel stations in the US, most of them in California, versus more than 150,000 gasoline and diesel stations. And while Department of Energy models project a hydrogen price of around $4-6 per gallon of gasoline equivalent (gge, where 1 kg of hydrogen is equivalent in energy to 1 gallon of gasoline), real-world prices have been around $15/gge.
Compared to FCVs, EVs provide a bridge between transportation and electricity, and there is already substantial infrastructure, in the form of the electric grid, for moving electricity around the country. There are also advantages for the EV relative to an ICE in terms of fuel price and CO2 emissions. For an ICE, the current CAFE (corporate average fuel economy) standard is about 25 mpg, meaning that the vehicles in a given manufacturer's fleet must average about 25 mpg. Using a typical pre-pandemic gasoline price of $3/gallon, the fuel cost is $0.12 per mile driven. To calculate the equivalent for EVs, we first need the cost of a gge of electricity. Using a US average electricity price of about $0.11/kWh (kilowatt-hour) and recognizing that there are 33.4 kWh/gge, the price of electricity as a fuel is $3.67/gge. A typical fuel economy for EVs is 120 mpgge, or 3.6 miles/kWh, so the fuel cost for an EV is about $0.03 per mile driven, one quarter of the cost for an ICE.
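As a back-of-the-envelope check, the per-mile figures above can be reproduced in a few lines of Python; the numbers are simply those quoted in this paragraph, and the variable names are ours.

```python
# Rough fuel-cost-per-mile comparison, using the figures quoted above.

GAS_PRICE = 3.00    # $/gallon, typical pre-pandemic gasoline price
ICE_MPG = 25        # CAFE average fuel economy

ELEC_PRICE = 0.11   # $/kWh, approximate US average electricity price
KWH_PER_GGE = 33.4  # kWh in one gallon of gasoline equivalent (gge)
EV_MPGGE = 120      # typical EV fuel economy (about 3.6 miles/kWh)

ice_cost_per_mile = GAS_PRICE / ICE_MPG        # = $0.12/mile
ev_cost_per_gge = ELEC_PRICE * KWH_PER_GGE     # = $3.67/gge
ev_cost_per_mile = ev_cost_per_gge / EV_MPGGE  # = $0.03/mile

print(f"ICE: ${ice_cost_per_mile:.2f}/mile  EV: ${ev_cost_per_mile:.2f}/mile")
```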
The EV also has an advantage in CO2 emissions, an important factor since transportation is the largest single source of CO2 emissions in the US, accounting for roughly a third of the total. Even if refineries captured and sequestered their CO2 (which they do not currently do), only about 10% of the total could be captured, because the other 90% is emitted by the vehicle itself. Some methods have been examined for capturing emissions at the vehicle tailpipe, but this is difficult, far from 100% effective, and would require transferring the CO2 from the vehicle to some processing facility. For an ICE, total life cycle CO2 emissions are about 1 lb per mile driven, decreasing to about 0.6 lb per mile driven for a hybrid. (A life cycle analysis counts the CO2 emitted at all stages of the process, including raw material processing, manufacturing, distribution, use, and final disposal.) For EVs, CO2 emissions depend on the local fuel mix used to generate the electricity: they can be as low as 0.2 lb per mile driven when the electricity comes from zero-carbon sources, or as high as a hybrid's when the mix leans heavily on fossil fuels. In the zero-carbon case, the 0.2 lb CO2/mile comes from vehicle and battery manufacturing.
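The dependence on the electricity mix can be sketched the same way. The manufacturing floor and the miles-per-kWh figure come from this post; the grid carbon intensities passed in below are purely illustrative assumptions, not figures from the text.

```python
# Rough EV life-cycle CO2 per mile: manufacturing floor plus grid emissions.

MANUFACTURING_LB_PER_MILE = 0.2  # vehicle and battery production (figure from the text)
MILES_PER_KWH = 3.6              # typical EV efficiency (figure from the text)

def ev_co2_per_mile(grid_lb_co2_per_kwh: float) -> float:
    """Life-cycle lb CO2 per mile for a given grid carbon intensity (lb CO2/kWh)."""
    return MANUFACTURING_LB_PER_MILE + grid_lb_co2_per_kwh / MILES_PER_KWH

print(ev_co2_per_mile(0.0))  # zero-carbon grid: 0.2 lb/mile
print(ev_co2_per_mile(1.5))  # assumed fossil-heavy grid: about 0.6 lb/mile, i.e. hybrid territory
```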
Another factor to consider is the eventuality that someday crude oil will no longer be readily available. Worldwide reserves currently stand at around 1.7 trillion barrels, a number that can grow with improved recovery techniques such as deep-water drilling and hydraulic fracturing, though these will also raise crude oil prices. At the typical pre-pandemic worldwide consumption rate of 98 million barrels per day, that amounts to about 47 years of crude oil remaining. The beauty of the EV is that it links transportation with electricity, and electricity can be generated from many energy sources besides fossil fuels.
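The 47-year estimate is just reserves divided by consumption, as the small sketch below spells out.

```python
# Years of crude oil remaining at current reserves and consumption.

RESERVES_BARRELS = 1.7e12            # about 1.7 trillion barrels of worldwide reserves
CONSUMPTION_BARRELS_PER_DAY = 98e6   # typical pre-pandemic worldwide consumption

years_remaining = RESERVES_BARRELS / CONSUMPTION_BARRELS_PER_DAY / 365
print(f"about {years_remaining:.1f} years")  # about 47 years, as above
```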
EVs also have some issues, most notably charging time, range, and price. With respect to charging time, a typical 110 V Level One outlet adds about 5-10 miles of range per hour of charging (RPH). A 240 V Level Two system has an RPH of about 15-25, and direct current (DC) fast chargers have an RPH of about 100. There are currently more than 10,000 DC fast-charging units in the US, a number expected to grow rapidly but still lagging far behind the approximately 150,000 gasoline and diesel stations. By comparison, a typical fueling stop for an ICE takes 5 minutes and adds some 500 miles of range.
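To make the RPH figures concrete, here is a quick look at how long each charging level needs to add 200 miles of range; the 200-mile target and the midpoints chosen for the quoted RPH ranges are our own illustrative assumptions.

```python
# Hours of charging needed to add 200 miles of range at each level.

RANGE_TO_ADD = 200  # miles; illustrative target

charger_rph = {                 # miles of range added per hour of charging
    "Level One (110 V)": 7.5,   # midpoint of the 5-10 RPH quoted above
    "Level Two (240 V)": 20,    # midpoint of the 15-25 RPH quoted above
    "DC fast charger": 100,
}

for name, rph in charger_rph.items():
    print(f"{name}: {RANGE_TO_ADD / rph:.1f} hours")
# A gasoline stop adds roughly 500 miles of range in about 5 minutes.
```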
Range is also an issue: EVs typically have a driving range of less than 300 miles, and more commonly about 100-200 miles, whereas many ICEs have ranges of 500 miles or more. EVs are also more expensive than ICEs, although the difference depends on the model; the Tesla Model X EV, for example, costs around $80,000 versus $32,000 for the Nissan Leaf EV. Earlier it was shown that an ICE costs about $0.12 per mile driven versus $0.03 for an EV, a $0.09 per mile advantage for the EV. According to the US Department of Transportation Federal Highway Administration, Americans drive about 13,500 miles per year, so on the basis of fuel cost alone the EV saves about $1,200 per year. At that rate, it would take close to 17 years of fuel savings to recover a $20,000 difference in sticker price.
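The payback arithmetic looks like this; the $20,000 premium is the hypothetical gap used in the paragraph above, not the price difference of any particular pair of models.

```python
# Simple fuel-cost payback calculation for an EV's higher sticker price.

SAVINGS_PER_MILE = 0.12 - 0.03  # $/mile fuel advantage of the EV over the ICE
MILES_PER_YEAR = 13_500         # average annual mileage (FHWA figure quoted above)
PRICE_PREMIUM = 20_000          # hypothetical extra sticker price of the EV

annual_savings = SAVINGS_PER_MILE * MILES_PER_YEAR    # about $1,215/year
years_to_break_even = PRICE_PREMIUM / annual_savings  # about 16.5 years

print(f"Saves ${annual_savings:,.0f}/year; breaks even in {years_to_break_even:.1f} years")
```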
In summary, if the US and the world are ever going to have a transportation market that is free of fossil fuels, the current choices are the hydrogen-powered FCV and the EV. While the FCV has the advantage of a refueling time equivalent to gasoline and a decent driving range, it will require a great deal of new infrastructure, a reduction in hydrogen fuel cost, and better ways to store hydrogen onboard than the current method of high-pressure gas. Since the EV can use the existing electric grid, only charging stations need to be added, whereas the FCV requires hydrogen production, distribution, and fueling stations. EVs, however, currently suffer from long recharging times, limited driving range, and higher prices. Current US market penetration stands at more than one million EVs and about 6,500 FCVs, compared to more than 278 million gasoline vehicles. As battery technology and range improve, prices fall, and more DC fast-charging stations become available, market penetration is expected to increase rapidly.
Featured image from Pexels
The post The economic and environmental case for electric vehicles appeared first on OUPblog.
