Oxford University Press's Blog
October 10, 2018
Of gutters and ecosystems: the 50th anniversary of the Wild and Scenic Rivers Act
“Rivers are the gutters down which flow the ruins of continents.” – Luna B. Leopold
Luna Leopold understood that rivers are far more than gutters. In a 1964 textbook, he wrote figuratively of the role of river channels in transporting sediment to lower elevations. In other writings, however, Leopold’s understanding of rivers was closer to that of his father, Aldo, author of A Sand County Almanac, who understood rivers as ecosystems.
Straight and uniform in shape, gutters and canals are designed to convey water downstream as efficiently as possible. Since the Industrial Revolution, people have done everything possible to make rivers more like gutters. We have straightened and dredged channels, cut off their bends, and hardened their banks. We have constricted rivers between levees, regulated their flow with dams, and removed large wood, large boulders, and beaver dams that might slow the passage of water. In the name of flood control, navigation, dilution of pollution, water storage, or humanity’s hubris in believing we can make Nature prettier, we have engineered rivers to be more like gutters.
We have had mixed success in these endeavors. On the one hand, this engineering provides the illusion of control. Belief that floods will not occur here allows people to build homes, cities, and infrastructure right up to the brink of river channels. Not surprisingly, costs of flood damage, adjusted for inflation, continue to rise in the United States. On the other hand, thousands of tons of material are barged along US rivers each year, and irrigated agriculture and large cities exist in our deserts, largely on the back of water stored behind dams.
Our rivers, however, are unruly gutters. They not only rise up over their banks during floods; they also move across their floodplains, taking out a road or rearranging our property as they go. Because we have intricately intertwined our property and our communities with rivers, we cannot seem to leave them alone. Attending a recent conference on river management in urban areas, I was dismayed to learn of the constraints imposed on river restoration by the utility lines that run under and along channels like a network of veins and arteries woven around a framework of bones. And then there is the small matter that we depend upon rivers for our water supply.
By the 1960s, the quality of that riverine water supply was clearly in trouble. When the Cuyahoga River caught fire in Cleveland, Ohio in 1969 (the thirteenth river fire since 1868), growing public concern finally led to political action. Probably the most famous outcome of that action is the 1972 version of the Clean Water Act, which is currently threatened by the ever-changing political landscape of the U.S. Another significant outcome was the 1968 Wild and Scenic Rivers Act.
A portion of the Poudre River in Colorado, channelized during the 1960s, exemplifying the loss of spatial complexity and river ecosystem function. Used with permission of the author.
The National Wild and Scenic Rivers Act states that “… certain selected rivers of the Nation which, with their immediate environments, possess outstandingly remarkable scenic, recreational, geologic, fish and wildlife, historic, cultural or other similar values, shall be preserved in free-flowing condition, and … shall be protected for the benefit and enjoyment of present and future generations.”
I read these lines and imagine the changing attitudes underlying them. I can infer the desire to preserve something of the pristine wilderness that Americans of European descent assumed their forebears encountered. I can recognize that rivers cannot fully function—cannot support fish and birds or supply drinking water or safe recreation—if they are polluted, channelized, and dammed. I can imagine the novel idea that rivers, as ecosystems, have an inherent right to exist.
The last idea is the most radical, yet countries including New Zealand and India have begun to treat rivers as legal persons, just as we treat corporations in the United States. This in turn raises a number of complicated questions. Who speaks for the river and by what right? How do we judge what constitutes a healthy or fully functional river? If a river is sick, how do we help the river become healthier? Such questions will not be simple to answer, but I believe that we must change the way we manage our rivers to explicitly recognize that a river is an ecosystem.
Managing a river as an ecosystem is difficult because a river ecosystem integrates everything in the watershed. The watershed is the landscape that drains to the river and, given atmospheric transport of material such as mercury or nitrate across entire hemispheres, managing a river as an ecosystem means managing or at least accounting for influences far beyond the channel boundaries.
An aerial view of a river in western Alaska exemplifying naturally occurring sources of diversity, including multiple parallel channels, gravel bars exposed during low flow, and floodplain forests of differing age and tree species. Used with permission of the author.
Above all else, a damaged gutter can be easily fixed, but experience indicates that a damaged ecosystem can remain altered for centuries. Managing an ecosystem requires humility. We have no right to be otherwise than humble when taking actions that affect the survival of other living beings, and we cannot be certain of the long-term effects of our actions.
Words matter: think about the connotations of jungle—dark, menacing—versus those of rainforest—valuable, vulnerable. On this anniversary of celebrating wild and scenic rivers, I hope that we find the wisdom to recognize our rivers as ecosystems and to proceed with humility in caring for these ecosystems.
Featured Image credit: CC0 via Pixabay.
The post Of gutters and ecosystems: the 50th anniversary of the Wild and Scenic Rivers Act appeared first on OUPblog.

October 9, 2018
Dystopia: an update
True aficionados of the earthly apocalypse cannot fail to have noted the deepening pessimism in discourses on what is often euphemistically referred to as “climate change”, but which should be designated “environmental catastrophe”. The Paris Agreement of 2015 conceded the need to limit global warming to 1.5 degrees Celsius, albeit without binding nations either to achieve this target or, in turn, to impose binding limits on the worst offenders, namely the fossil fuel industries. By 2016 reports were beginning to circulate suggesting that two degrees Celsius was a real possibility, and by 2017 this rose in some assessments to three degrees. Those familiar with a now-dated literature estimating the probable results of specific temperature rises will know that even two degrees of warming presents extreme difficulties for humanity. Any rise above this is likely to trigger massive population shifts away from the coasts and hotter tropical regions, a massive growth in desertification, species loss, the shrinkage of agricultural land, and so on. So the “global warming” narrative needs to be seen quite differently from even a few years ago, and alarm bells need to ring loudly and insistently across the globe.
The accession to power of Donald Trump has, on the one hand, of course, made such alarm less likely to be sounded. Trump’s appointee to head the Environmental Protection Agency, Scott Pruitt, has in turn appointed lobbyists from the fossil fuel industries to direct key aspects of the Agency’s programmes. Yet much of the rest of the world, aghast here as with much else at the audacity of such a damagingly retrogressive agenda, has reacted by reinforcing commitments to something like the Paris Agreement.

We know of course—it was obvious even in 2015—that this will not be enough. The glaciers and polar ice caps continue to melt at a rate which astonishes even climatologists. Deforestation and species loss advance equally rapidly. In only one area has there been significant evidence of hope respecting our capacity to reverse these patterns. Two million plastic bags are used worldwide every minute—a trillion a year. Their residue, as David Attenborough so elegantly demonstrated in Blue Planet II (2017), can now be found in the deepest ocean trenches and most remote and inaccessible parts of the earth. Yet a number of nations have now banned not only plastic bags, but disposable plastic straws, cups and other items. Manufacturers have been urged to invent sustainable substitutes for such single-use tools. We may yet escape the fate of being drowned in plastic waste. But the sight of urban workers, students, and others clutching their disposable single-use plastic cups remains a common one. The stigma that needs to be attached to such consumption has not yet taken wide hold.
This, in turn, indicates a still wider shift in popular attitudes towards consumption which will need to occur in coming decades. I have argued that one thing Marxism was never able to attain was a fully environmentalist outlook on the cycles of consumption and production. Like many of his bourgeois opponents, Marx anticipated an expansion of both—the difference being that the results of production were to be distributed much more justly amongst the producers themselves. But Marx curiously failed to anticipate what now, in light of what we know about Soviet patterns of consumption in particular, seems obvious: that a bourgeois revolution in taste and patterns of consumption might equally well follow a proletarian as a bourgeois political revolution. The working classes might, in other words, succumb to a fixation upon luxury and conspicuous consumption, and to the expectation that the satisfaction of one need would be succeeded incessantly by the demand for a newer, more intense, shinier, more expensive or rarer commodity.
It is exactly this mentality, however, which is our greatest enemy in the 21st century. Planned obsolescence—making things which break down sooner rather than later in order to generate more wealth—needs to be replaced by patterns and habits of sustainable consumption. Where possible, we need to shift to public and away from private luxury. Automobiles can in many instances not merely be electrified, but eliminated entirely simply by making all forms of public transport free. Anything else made from non-renewable sources can be made to have a life vastly longer than at present.
We still have many other problems to solve if we are to avoid the worst dystopian outcomes which still insistently appear more likely than not. Overpopulation, in particular, remains an insuperable problem. But if we are both to solve the environmental problem and to move towards a post-capitalist society (and we cannot do one without the other) then we must begin to construct a mentality appropriate to a better future.
Featured image credit: Facility in foggy environment by Jason Blackeye. Public Domain via Unsplash.
The post Dystopia: an update appeared first on OUPblog.

Serena redux: waiting to exhale
By now, much has been written about the Serena Williams-Naomi Osaka-Carlos Ramos fiasco at the 2018 US Open. During the women’s final, the umpire, Carlos Ramos, issued Williams a warning for suspected coaching from her player’s box, something strictly prohibited in tennis, and Williams strongly denied she was being coached. When she later smashed her racquet, Ramos levied a second code violation, stripping her of a point. During a critical juncture in the second set, Williams revisited the discussion and demanded an apology from Ramos for suggesting she had cheated. Ramos ultimately took a game away from Williams, making the score 3-5 and placing Williams at a deficit from which she could not recover. Taking a game away from a player in this manner has never happened in a Grand Slam final.
The debate has fallen along expected contours, with white feminists (see Billie Jean King) and many male tennis players (see Andy Roddick and Novak Djokovic) coming to Williams’s defense, while many others have lambasted Williams for “breaking the rules” (see the racist depiction of Serena in this cartoon; and Martina Navratilova’s op-ed in the NYT; etc.).
However, as a black woman and former NCAA Division 1 tennis player, I feel compelled to write about the debacle from a different point of view. What happened to Serena Williams, and by extension to Naomi Osaka, was not only about breaking a racquet and incurring a significant penalty that ultimately cost Williams the match (and possibly her standing at the pinnacle of tennis). Rather, it was about the long history of disrespect Williams has endured in a game she has remade and reinvigorated.
The forces that shape Williams’s treatment by the dominant powers of her sport may also explain the way she was treated one year ago, when she almost became another casualty of the United States’ black maternal mortality crisis (see ProPublica). According to the Centers for Disease Control and Prevention, black women are 3 to 4 times more likely to die of complications related to pregnancy and childbirth than their white counterparts. Even black women who are well educated are not immune. In 2016, an analysis of black maternal deaths in New York City found that black women with college degrees were more likely to suffer significant maternal complications than white women who did not graduate from high school.
Ultimately, Williams, whose very real vulnerability, pain, and excellence are breathtakingly beautiful to watch, is a metaphor for every black woman. She excels under duress; toils and fights and pulls herself up by the bootstraps; and is an inspiration for activism on behalf of others. But in the end, for all her stellar accomplishments, Carlos Ramos reminded her not to be such an uppity Negress.
That Williams felt she could call out the obvious and overwhelming sexism of the moment is telling not for what she said but for what she could not say. What happened to her and Naomi Osaka is as much about racism as it is about sexism. It is about the particularly virulent form of disgust reserved for black women.
While many pundits have deftly noted that a game penalty would never have been levied against a male player like Andy Roddick or Andre Agassi, they fail to acknowledge that what happened to Williams could not have happened to Maria Sharapova, the blonde Russian who has always been cast as Williams’s nemesis despite not having the tennis chops to serve as her rightful foil. No, what happened to Williams occurred in the much larger context of misogynoir, or anti-black sexism.
For example, Williams is the most drug-tested athlete on either the men’s or women’s professional tour. She has repeatedly been subjected to taunts about her body (see the Russian tennis federation official calling her and her sister “the Williams brothers,” and the head of the French tennis federation saying Serena’s outfit disrespected the game), to accusations of match throwing, and to outright racial epithets (see the Indian Wells saga).
All of Williams’s monumental accomplishments have been subjected to extra scrutiny. She has endured the outrageous claims that she cheated her way to the top. Like most black people, she has had to be twice as good. On the September evening of the women’s final, she sharply admonished Ramos for attacking her character. Still, the umpire’s suggestion that she cheated was not an isolated incident. Serena has lived like this since she was a teenager, and she has excelled in spite of it.
But watching her profound frustration play out on a Saturday evening at Arthur Ashe Stadium, I could not help but think about her fighting for her life after delivering her daughter just last year. Williams’s emergency C-section, pulmonary embolism, hematoma and multiple post-partum surgeries have been documented elsewhere.
As Williams tells it, she knew she was having a pulmonary embolism and told the healthcare staff she needed a blood thinner immediately. She did not receive the diagnostic test or the medication she requested and was later told it was because the nurses thought she was just loopy from pain medication. But in spite of the fact that Williams is a larger-than-life icon, she is still just a black woman. Her protestations about the treatment she needed did not matter because, just like other black women, she is not a credible witness to her own life; she is definitely not allowed to be angry, and she is certainly not given any forbearance. On the tennis court she is not allowed to break an inanimate tennis racquet, and in the hospital she is not allowed to “know” she was having a life-threatening health emergency.
In addition to being a former collegiate tennis player and a black woman, I am also a social scientist who studies the implications of race and gender discrimination for black women’s health. We know that the day-to-day slights black people endure are bad for our health and result in shorter life expectancy and poorer health outcomes (see my work; and the work of public health researchers David R. Williams; Amani Nuru-Jeter; Nancy Krieger; etc.).
And, we know that being middle class, or in Williams’s case an international superstar, does not protect black people from this fate. If being Serena Williams cannot protect you, then what chance do workaday black women have? In fact, we know that this type of stress is an enormous health burden causing physiological problems that may ultimately cost black women their lives.
What happened to Williams at the US Open was about much more than just that one evening. It was about a lifetime of cuts that pushed her to demand an apology for an arbitrary penalty that may cost her a place in the record books. Given everything Williams has done for tennis and sport in general, she deserves that apology. But more than anything she has earned a certain freedom that should come simply from being human. Ultimately, Williams, Naomi Osaka, and all of us deserve this levity. We are all still waiting to exhale.
Book cover by Carlos Javier Ortiz.
Featured Image: “Tennis” by Ben Hershey. CC0 via Unsplash.
The post Serena redux: waiting to exhale appeared first on OUPblog.

How well do you know Arthur Schopenhauer? [quiz]
In September, Arthur Schopenhauer (1788–1860) was featured as the Philosopher of the Month. Schopenhauer was largely ignored by the academic philosophical community during his lifetime, but gained recognition and fame posthumously. His philosophy can be seen as a synthesis of Plato and Kant, whom he greatly admired, along with the Upanishads and Buddhist literature. Schopenhauer is best known for his work The World as Will and Representation, first published in 1818.
You may have read his work, but how much do you really know about Schopenhauer? Test your knowledge with our quiz below.
Featured image credit: Frankfurt New Old Town, Frankfurt. CC0 Creative Commons via Pixabay.
The post How well do you know Arthur Schopenhauer? [quiz] appeared first on OUPblog.

October 8, 2018
Why was Jerusalem important to the first Muslims?
With the completion of the Dome of the Rock and the Aqsa mosque on the Temple Mount in the reign of ‘Abd al-Malik (685-705), Muslims demonstrated the importance of Jerusalem to the world. But why should Islam have had any interest in this city? Mecca is 1500 kilometers from Jerusalem and Muhammad’s career took place in central and northwest Arabia.
Jews and Christians inevitably connected Muslim building activities on the Temple Mount with the restoration of the ancient Jewish temple. The renowned seventh-century monk Anastasius of Sinai recounts how clearing work on the Temple Mount had given rise to rumors that the “Temple of God” was about to be rebuilt, an action that had eschatological significance for Christians in light of Jesus’ prediction that the Temple would be cast down with “not one stone left upon another” (Mark 13:2), a prophecy he follows with an account of the signs of the end of the world. Whereas the Muslims’ construction work evoked fear among many Christians, it elicited joy from some Jewish groups. The Muslims had defeated their persecutors, the Byzantines, and had allowed them to worship once more in the Holy City, so could it be that they were to be the liberators of the Jews?
Some found support for this idea in the Bible, in such verses as Isaiah 60:6: “The caravans of camels shall cover (protect and redeem) you,” and Isaiah 21:7, which speaks of a rider on a camel and a rider on a donkey. Readers could interpret this as a reference to the Arabs coming first as warriors and then as redeemers. The sight of the Muslims raising a place of prayer on the Temple Mount appears to have raised this speculation to fever pitch. The residue of these early expectations survives in a number of Jewish apocalyptic texts that attribute to some revered authority the prediction that “the Almighty will bring forth the kingdom of Ishmael (the Arabs) in order to deliver you (Jews) from this wicked one (Edom/Byzantium)” and that “the second Ishmaelite king will be a lover of Israel…who will build a place of worship on the Temple rock” (Secrets of Rabbi Simon ben Yohay).
Medieval Muslim historians offered two main explanations for ‘Abd al-Malik’s actions in Jerusalem. Either, they said, he sought to outdo the Muslims’ enemy, the Byzantines, by building something more magnificent than they had ever managed, in particular seeking to surpass the grandeur of the Church of the Holy Sepulchre, or he sought to divert the Muslim pilgrimage away from Mecca, which his political rival, ‘Abdallah ibn al-Zubayr, had captured in 683. But these theories do not explain the value of Jerusalem in the Muslim imagination – the city appears either as a site for ‘Abd al-Malik’s artistic display or as an alternative to Mecca, with no indication that it was chosen for its own intrinsic significance to Muslim belief.
One possible sign as to why Jerusalem was important to the first Muslims is found in the episode of the change in the direction of prayer (qibla) of the early Muslims in the second year of their move to Medina. Muslims famously face Mecca when at prayer. But it wasn’t always so.
The Qur’an alludes to this when it tells us that Muhammad’s detractors asked: “What made them turn away from the qibla that they used to face?” (2:142). The original qibla is not specified in the Qur’an, but biographers of Muhammad give us a second clue when they make the claim that “Jerusalem was the first qibla of the Muslims.” They do not discuss why Jerusalem served this function, but it is in any case an unambiguous declaration of the high status of the city in the eyes of Muhammad and his first followers.
The first qibla was called “the qibla of Abraham.” The Qur’an’s account of the career of Abraham details how God allocated to him a house of worship, “the first House of mankind,” where he and his people could pray and do pilgrimage (2:125-26, 3:96-97, 14:35-41, 22:26-27). Later Muslim commentators would say that all of this account refers to Mecca, but it seems unlikely that Muhammad thought of Mecca as the “first House of mankind.” Jerusalem was the older sanctuary, but Muhammad was now arguing that the time had come for Mecca, the sanctuary of his people, to be added to the list of monotheist cult sites, just as he himself was to be added to the list of divine messengers and the Qur’an to the list of sacred scriptures.
It seems very likely, then, that Jerusalem was important to the first Muslims because Muhammad felt that he was following in the footsteps of Abraham: just as Abraham had founded a place for his people to worship the one true God in Jerusalem, so too was Muhammad founding a place of worship in Mecca. Both cast down the idols of their fathers and elaborated the rites for prayer and pilgrimage at their respective sanctuaries.
Featured Image: “Architecture” by Mauricio Artieda. CC0 via Pexels.
The post Why was Jerusalem important to the first Muslims? appeared first on OUPblog.

The strange and unusual laws of Italy [interactive map]
The International Bar Association Annual Conference will be held in Rome from 7th October through 12th October. It is one of the largest annual events for international lawyers, renowned for its exceptional line-up of speakers from around the world, excellent networking opportunities, and global mission to promote and develop key issues in law. With a programme of over 200 sessions, all covering a broad range of topics concerning the legal world today, the conference provides opportunities for lawyers in all practice areas.
In celebration of this year’s host country, Italy, we’ve taken a look at some of the more unusual laws that can be found across the country. From beachside bans on building sandcastles, to mayoral declarations that townsfolk are forbidden from biting the dust, explore our interactive map of Italy and discover these legal oddities for yourself.
Featured image: Colosseum Exterior photo by Mathew Schwartz (@cadop). Public domain via Unsplash.
The post The strange and unusual laws of Italy [interactive map] appeared first on OUPblog.

October 7, 2018
Is there a comma after BUT?
If you type “comma” and “but” into Google, the search engine will give you some autosuggestions including: “comma after but at beginning of sentence” and “is there a comma before or after but.”
According to editors and grammarians, there is no comma after the word but at the beginning of a sentence. But it is something I see a lot in sentences like “But, there were too many of them to count” or “But, we were afraid the situation would get worse.”
When I see these commas in the work of writers, I invariably cross them out. If I find just one, I’ll squiggle it out and put a question mark (or sometimes a frowny face) in the margin, hoping it is a typo. If I see another instance of but followed by a comma, I’ll strike it out again and write “no comma after but.” If I see lots of instances of the initial but with a comma, I’ll suggest that the writer see me. They rarely do.
It’s a small problem in the grand scheme of things, but I can’t help but wonder why writers adopt this punctuation. There is really only one comma rule that mentions conjunctions: a comma goes before a coordinating conjunction that separates two independent clauses.
So why would a writer put a comma after sentence-initial but?
I’ve got a few hypotheses.
One possibility is that it is an error of analogy. Writers see examples of the adverb however followed by a comma at the beginning of a sentence and make a false analogy: however means the same thing as but; a comma is needed after however; therefore a comma is needed after but. However, adverbs and conjunctions are different grammatical categories, so the analogy does not yield the right punctuation.
Another possibility is that a writer is punctuating by ear, relying on the old idea that you put a comma where you take a breath. Since but signals a disjunction, a writer might imagine a pause and insert a comma on that basis. But punctuation is not determined solely by pauses heard in our mental ear. It is (mostly) keyed to grammatical and rhetorical categories like coordinating conjunction, independent or introductory clause, essential and inessential phrases, coordinate adjectives, and so on. If pausing is the basis for the comma after but, we are dealing with a false underlying assumption leading to an error.
A third possibility is that writers notice instances of paired, parenthetical commas the first of which happens to occur after but. They might generalize from that observation to the idea that a comma is always needed. It is easy to find examples of this pattern like “But, as my music teacher always reminded me, you must practice every day,” “But, as any driver will tell you, the commute seems endless,” or “But, always remember, you must never put your finger in a light socket.” If someone is focused too locally on the comma after but and ignores the fact that it is part of a pair of commas, they might make a false generalization.
I think it is important to puzzle about problems like this. Grammar is more than just correcting errors. If we can understand why writers make the wrong analogy or internalize the wrong underlying assumption or adopt the wrong generalization, then perhaps we can get punctuation to make more sense to future generations of writers.
I’m still not sure how to get to the bottom of this. Someday, I may create a little questionnaire that I will attach as a comment to papers to see what more I can learn. Something like this:
Why did you use a comma here?
(a) To indicate a breath or pause.
(b) Because commas are used with words like However, Well, Yes, or No at the beginning of a sentence.
(c) Because a comma always follows but.
(d) Some other reason _______________________________.
Someday I will figure out this puzzling comma.
Featured image credit: “Helvetica Paintings : , comma” by veganstraightedge. CC BY 2.0 via Flickr.
The post Is there a comma after BUT? appeared first on OUPblog.

October 6, 2018
John Kerry and the Logan Act
The Logan Act won’t go away. Most recently, prominent commentators criticized former Secretary of State John Kerry’s conversations with the leaders of Iran, arguing that such discussions violated the Logan Act.
As a matter of policy, Secretary Kerry’s meetings in Tehran were inappropriate. The Obama-Kerry approach to Iran looks worse every day. But this should be a topic of political debate, not of criminal law. In a world of instantaneous global communications, the Logan Act is an unworkable anachronism which should be repealed.
The Logan Act is named after Dr. George Logan, a physician who, during the administration of President John Adams, took it upon himself to talk in France with the revolutionary regime which had overthrown King Louis XVI. The Federalists were outraged by what they perceived as Dr. Logan’s freelance diplomacy. In response, they made it a crime under U.S. law for a U.S. citizen to “directly or indirectly” conduct “any correspondence or intercourse with any foreign government or any officer or agent thereof…to defeat the measures of the United States.”
The Logan Act has rarely been enforced. Its constitutionality has been questioned. But the Logan Act remains on the books and periodically is invoked in the context of foreign policy controversies like Secretary Kerry’s recent discussions with Iranian officials.
Whatever justification the Logan Act may once have had, it is unworkable today in a world of instantaneous worldwide communications. Today, everybody is communicating electronically with everyone else across the globe. When Secretary Kerry makes a statement on social media or to news outlets disparaging the foreign policy of the Trump Administration, that statement is known everywhere seconds later – including in the halls of government in Tehran and other capitals around the globe.
Similarly, when then President-elect Trump tweeted his opposition to the post-election posture of the Obama Administration towards Israel, that opposition became known instantly throughout the world including every government in the Middle East.
This is a profoundly different world from the world of John Adams and the Federalists.
It is unseemly for Secretary Kerry to manifest his opposition to current U.S. policy in personal meetings with officers of the government of Iran. However, there is no indication that in these meetings Secretary Kerry improperly disclosed confidential information or engaged in any similar activity.
Unseemliness should not be a criminal offense.
Since it is in practice not enforced, the Logan Act has become nothing more than a rhetorical bludgeon, casting in criminal terms matters which should be debated as questions of foreign policy, not criminal law. Secretary Kerry’s meetings in Tehran showed bad judgment, as did his handling of Iran while he was Secretary of State. However, we should discuss such matters without the ghost of Dr. Logan peering over our shoulders. In a world of instantaneous, global communications, Congress should repeal the Logan Act.
Featured Image: “Justice” by WilliamCho. CC0 via Pixabay.
The post John Kerry and the Logan Act appeared first on OUPblog.

Pros and cons of GMO crop farming [infographic]
In the agricultural industry, recombinant DNA technology allows DNA to be transferred from one organism to another, creating Genetically Modified Organisms (GMOs). Four crops constitute the vast majority of GM crop production: maize, canola, soybean, and cotton. GM crops have been grown commercially since 1995, and the global area sown to them has expanded more than 100-fold over the past two decades. While many champion this innovation for its ability to enhance food production across the world, others are raising concerns about the possible impacts on human and environmental health.
Using the Oxford Research Encyclopedia of Environmental Science, we’ll explore both sides of the argument.
Featured Image: “Countryside Harvest Agriculture Farm Nature Field” by TheDigitalArtist. CC0 via Pixabay.
The post Pros and cons of GMO crop farming [infographic] appeared first on OUPblog.

‘Service included?’: tipping in the 19th and early 20th century London restaurant
If the letters and commentary sections of national newspapers are anything to go by, the question of whether, and how much, to tip is a source of vexation for restaurant patrons in early 21st century London. There has also been more recent criticism of proprietors not passing on tips to their wait staff. These concerns were no less prevalent in the Victorian and Edwardian period, and produced an intense discussion that encompassed diners, proprietors, and waiters. Tipping in the late 19th and early 20th century restaurant is not just an esoteric curiosity. The London restaurant was a critical part of the urban fabric. We need to remember that there was an increasing need for people to eat out because of the growing distance between residence and work, and the dramatic rise in the population. The issue of service and remuneration also tells us something about how contemporaries conceived the workings of capitalism and social mobility.
Unlike today, it was generally not a convention in the 19th and early 20th century restaurant to add a service charge to a bill of fare, so tipping was often left to the discretion of the individual customer. On the question of whether to tip or not to tip, the esteemed restaurateur Henry Roberts believed that ‘perhaps [the diner] is nervous and wonders what the waiter will think of him; but whatever it is, the waiter gets his tip all the same.’ However, there is actually very little evidence of how regularly diners honoured a commitment to leaving a gratuity, and we have even less ability to substantiate how much they might have left. Contemporary commentators suggested that tips should be substantial, but such prescriptions might indicate the ideal, rather than the realized.

Even when tips were left by diners, there was no guarantee that the money ended up in the pocket of the waiter. Many restaurants operated the so-called tronc system under which all the tips received by the entire waiting staff were pooled and then divided each week according to an agreed scale of payment, with head waiters getting more than casual ones. In practice, much could go wrong, for instance when one of the waiters acting as treasurer for staff at a restaurant in the Strand in 1897 absconded with the week’s collection. In such cases, justice was sometimes meted out in the magistrates’ court or alleyway, or both. In still other cases, the restaurateur took the lion’s share of the pot, leaving his staff out of pocket. What might make this seem particularly egregious is that in some restaurants, waiters received no salary at all, or even paid their employers for the privilege of working. The apparently stark inequities of such arrangements, however, should not be allowed to obscure the fact that some waiters were able to work the system to their advantage. This was particularly true of waiters in high-end restaurants, where tips could potentially be not inconsiderable. One observer at the Monico in Shaftesbury Avenue in 1894 claimed that, while they received no salaries and paid between 3s and 4s a day for the privilege of serving, some waiters took home between £12 and £20 a day. Albert Thomas, recalling his career as a waiter at the beginning of the twentieth century, asserted that even in the more modest establishments in which he served, he could “on a good night” take home 10s, which paid for his supper, a drink, and maybe a visit to the music hall and a cigar.
The fact that the financial rewards of waiting were not insubstantial may explain the attraction of employment in the restaurant to two important new categories of social actor in late Victorian and Edwardian London: the foreign waiter, and the waitress. London was a nodal point in a broader international labour market, in which waiters who began their careers in Italy, Austria or Switzerland relocated to Paris, and then London, and sometimes onward to New York. In so doing, they were often able to build up not merely an important résumé of experience, but also useful contacts, which allowed many of them to progress from waiting at tables to management, and even ultimately proprietorship of restaurants. For example, Mario Gallati began as a modest commis in his native Italy before becoming a waiter in Paris and London, and then went on to found the illustrious Caprice restaurant. Similarly, waitresses became increasingly prominent in the closing decades of the 19th century, especially in the new chain restaurants, such as J. Lyons and Co., where their presence reflected a broader shift to heterosociability in public spaces that included both customers and workers.

The potential allure and glamour of the waitress, celebrated in popular song and fiction, did not preclude episodes of resentment at employer exploitation. Even the illustrious waitresses of J. Lyons and Co., with their legendary starched uniforms and white frilled caps, resorted to strike action in 1895 when their bosses tried to halve their commissions. (The employers were forced to retract the cuts in the face of public sympathy for the waitresses.) However, there is no doubt that women were attracted to waitressing not least because it compared favourably to alternative employment, particularly domestic service. The columns of ladies’ journals in the 1890s regularly disclosed the dudgeon of upper-middle-class women who were finding that the refreshment room and tea shop were appropriating young women who would normally work in their households. Whatever its more vexing connotations, the significance of the tip in the restaurant culture of London at this time reflects a metropolitan world that was characterized by heterogeneity, internationalism, and social mobility.
Featured image credit: ‘The Grand Salon’ in Frederick Leal, ‘Holborn Restaurant Illustrated’ (1894). Courtesy of the Bishopsgate Institute.
The post ‘Service included?’: tipping in the 19th and early 20th century London restaurant appeared first on OUPblog.

