Oxford University Press's Blog, page 344

July 20, 2017

Living in a material world

The singer Madonna had a worldwide hit record in the 1980s (‘Material Girl’) in which she described herself as ‘the material girl living in a material world’. This is a prescient phrase for the world of today, some 30 years after the release of this record. Although Madonna may have been referring to wealth and ‘cold hard cash’ in her song, the rapid development of goods for professional and consumer use really does put us at the mercy of all things material.


Today, there is frequent reference in the media to Kevlar, a polymer (i.e. a plastic) used in lightweight protective bullet-proof body armour. The impetus for developing Kevlar in the early 1960s was as a lightweight replacement for the steel cords used to reinforce automobile tyres. Its inventor, Stephanie Kwolek, was an American chemist named as inventor or co-inventor on 17 granted US patents, and an admirable role model for any young person interested in pursuing a career in STEM subjects. Kevlar is a polymer that can be spun from solution into continuous fibres that can then be woven into fabrics. Although Kevlar is a very different material to candy floss, there are analogies in their manufacture, because both are produced by spinning a liquid at high speed through a disc (spinneret) pierced with holes to create superfine strands.



Golden yellow aramid fiber (Kevlar) by Cjp24. CC BY-SA 3.0 via Wikimedia Commons.

Another polymer system attracting much attention nowadays involves polymer banknotes, for example the five pound sterling note issued by the Bank of England in September 2016. Although plastics are widely used in everyday life, the problem of recycling them is a pressing issue for protection of the environment. The desire to protect the environment and to reduce the requirement to manufacture plastics from petrochemical sources (i.e. oil) is leading to research on the production of biodegradable plastics and plastics from renewable sources. Polylactic acid is a biodegradable plastic that has medical applications, for example as sutures (i.e. stitches).


The Material Girl from the 1980s would be amazed at the vast range of consumer goods available today, particularly electronic devices such as mobile phones, laptop computers, and tablet computers. Scenes of people walking along the street immersed in their mobile phone screens, oblivious to the surrounding environment, making phone calls, downloading music, and monitoring social media would have been unknown in the 1980s. Large-screen televisions, some of which can be wall-hung, may be compared with the bulkier models of previous decades. Why is this possible? The miniaturisation of consumer goods has been driven by the availability of materials with novel properties. Thus liquid crystals, light-emitting diodes (LEDs), quantum dots, and organic light-emitting diodes (OLEDs) are critical for clearer and sharper displays. Exploitation of materials can take a very long time. The phenomenon of electroluminescence that underpins the physical processes involved in LEDs was first observed in the early 20th century by Henry Round, but commercial exploitation of LEDs – and wide use of the acronym – has only taken off in recent years.


The ability to pack more and more transistors onto silicon chips by photolithography, with the number of transistors approximately following Moore’s Law and doubling roughly every two years, contributes to the increasing computing power of electronic devices. Whether quantum computers will replace conventional digital computers in the 21st century remains to be seen, but if this happens then they may lead to easier methods for breaking the encryption codes that are currently used, for example, in internet banking. If such codes were broken, then internet banking in its present form would be less secure.
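As a rough, back-of-the-envelope illustration of what that doubling implies over the three decades since the Material Girl’s heyday, the short calculation below starts from an assumed late-1980s chip of about a million transistors; the figures are illustrative orders of magnitude, not quoted statistics.

```python
# Rough back-of-the-envelope illustration of Moore's Law: transistor counts
# doubling roughly every two years. The 1987 starting figure is an assumed
# order-of-magnitude value, not a quoted statistic.
transistors_1987 = 1_000_000          # ~10^6 transistors on a late-1980s CPU (assumed)
years = 2017 - 1987
doublings = years / 2                 # one doubling every ~2 years
transistors_2017 = transistors_1987 * 2 ** doublings

print(f"Doublings since 1987: {doublings:.0f}")
print(f"Implied transistor count in 2017: {transistors_2017:,.0f}")
```

Fifteen doublings multiply the count by about 33,000, broadly in line with the growth from millions of transistors per chip in the late 1980s to billions today.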


Advances in materials have also been instrumental in the development of medical diagnostics since the 1970s. For example, magnetic resonance imaging (MRI) is taken for granted nowadays as a routine technique. Interestingly, MRI has its origins in the technique of nuclear magnetic resonance (NMR) used in structure determination in chemistry, but for branding purposes the word nuclear is avoided when discussing MRI with the general public. The term ‘nuclear magnetic resonance’ was first used by the physicists E M Purcell and F Bloch in the 1940s. Paul Lauterbur showed, in the early 1970s, how to make two-dimensional images of the body with magnetic resonance by using gradients in the external magnetic field. Sir Peter Mansfield developed the technique further and derived mathematical methods for quickly deciphering the radio signals and turning them into three-dimensional images. Lauterbur and Mansfield were awarded the Nobel Prize for Physiology or Medicine in 2003. MRI depends on the generation of strong magnetic fields using superconducting metallic alloys immersed in liquid helium. There is now potential for using high-temperature ceramic superconductors, which have economic benefits since liquid nitrogen can replace liquid helium as a cheaper coolant. This example shows the importance of materials in the development of diagnostic techniques.


One area of pharmaceuticals that offers hope for the treatment of currently incurable diseases involves the use of biologic drugs, developed throughout the 1990s. These are proteins known as monoclonal antibodies. They have molecular weights in the tens of thousands, compared to several hundred for conventional pharmaceutical compounds, and are correspondingly larger molecules. An example of a monoclonal antibody is Herceptin for the treatment of breast cancer, which binds to a protein that is present in excess in this condition.


If the Material Girl of the 1980s surveyed the world today, she would undoubtedly find herself living in a much more advanced material world.


Featured image credit: LED screen wall by Photonic Syntropy. CC BY 2.0 via Flickr.


The post Living in a material world appeared first on OUPblog.



Bluegrass festivals: a summertime staple

For more than fifty years, bluegrass musicians and fans from around the world have gathered in shady bowers and open fields to trade songs in parking lot picking sessions; hear top local, regional, and national bluegrass bands as they present onstage performances; and buy instruments, books, recordings, and memorabilia from vendors. These bluegrass festivals serve as vital meeting spaces for members of the bluegrass community, and they play a key role in the music’s ongoing economic vitality. (Bluegrass Unlimited, the leading publication in the field, even devotes an entire issue each year to festival listings.) With this blog post, I’d like to introduce you to a few great bluegrass festivals in the United States with the hope that you might seek out a festival in your own community.


Bill Monroe, who is frequently cited as “the father of bluegrass,” started his Bean Blossom Bluegrass Music Festival in Brown County, Indiana in 1967. Although it is not the oldest bluegrass festival in the U.S. (that honor goes to a short-lived festival held in Fincastle, Virginia), it is perhaps one of the most significant, holding a special place in bluegrass mythology. Monroe and his various band members built the facilities and frequently roamed the campgrounds in search of fellowship with parking lot pickers, so traces of Monroe can be felt throughout the festival grounds. In this video clip, we can see Monroe—along with his long-time fiddler Kenny Baker—not only playing the music that he helped to develop but also encouraging the formation of community among his participants.



Ralph Stanley, a long-time competitor of Monroe’s, hosted the McClure, Virginia Hills of Home Bluegrass Festival on Memorial Day weekend from 1971 until his death in 2016; it was also held in 2017. Stanley’s music has been widely celebrated for capturing the sound of Primitive Baptist singing and other forms of traditional Anglo-Appalachian music in his brand of bluegrass. In this clip, we see Frank Newsome—an elder in the Old Regular Baptist Church and a 2011 National Endowment for the Arts National Heritage Fellow—singing “Gone Away with a Friend.”



In Colorado, the Telluride Bluegrass Festival has featured an expansive variety of bluegrass and bluegrass-related styles, focusing especially on those acts that push the genre’s boundaries. The Colorado-based group Hot Rize as well as progressive pickers such as mandolinist Sam Bush and guitarist Tony Rice have frequently graced the Telluride stage. In this video, we will see a 2008 Telluride performance by the Yonder Mountain String Band, featuring Sam Bush on fiddle.



A more recent addition to the festival lineup is Grey Fox, which is held in the Catskill Mountains of New York every June. Hosted by the bluegrass band Dry Branch Fire Squad, Grey Fox also maintains a fairly broad definition of bluegrass music, often programming some of the more progressive acts in the field today. Additionally, like many festivals, Grey Fox offers bluegrass instruction to help amateur pickers develop their skills and to help guarantee the genre’s continued growth. In this clip, we will see Abigail Washburn and the Sparrow Quartet (Bela Fleck, Ben Sollee, and Casey Driessen) performing “Song of the Traveling Daughter,” a song that reflects Washburn’s continued interest in Chinese culture, at the 2007 Grey Fox Bluegrass Festival.



Bluegrass festivals have played a significant role in the dissemination, preservation, and expansion of bluegrass music for more than a half-century. In addition to the handful of major festivals discussed here, smaller festivals—often drawing heavily from regional and local talent, as well as some national touring groups—can be found throughout North America, as well as in such bluegrass hotspots as the Czech Republic. They are great places for amateur pickers, bluegrass enthusiasts, and fans of local culture to spend a weekend, and they frequently boast a “family-friendly” environment, making them ideal events for people with children. So if you’re looking for a great way to take in good music this summer, check out a bluegrass festival near you!


Featured image credit: “Guitar” by Coyot. CC0 Public Domain via Pixabay.


The post Bluegrass festivals: a summertime staple appeared first on OUPblog.



July 19, 2017

Two numerals: “six” and “hundred,” part 2: “hundred”

Like the history of some other words denoting numbers, the history of hundred is full of sticks and stones. To begin with, we notice that hundred, like dozen, thousand, million, and billion, is a noun rather than a numeral and requires an article (compare six people versus a hundred people); it also has a regular plural (a numeral, to have the plural form, has to be turned into a noun, or substantivized, as in twos and threes, at sixes and sevens, on all fours, and the like). Finally, it resembles and indeed is a compound (hund-red). Eleven and twelve are also compounds (see the previous post), but, to use a technical term, disguised ones, that is, we can hardly or not at all discern their ancient elements. However, though hundred does fall into two parts, neither hund- nor –red means anything to a modern speaker.


Before going on, let us note that in the remotest past people hardly needed words designating exact high numbers. One sheep, two sheep…, perhaps ten sheep, and then a lot (lots) of sheep. It is amazing how many words for multitudes we have: herd, flock, pack, drove, shoal (school), and so forth. They usually refer to animals, and we can sometimes guess their origin. Thus, if we know the verb to brood “to sit on eggs,” we won’t be surprised that the birds hatched in one nest are called a brood. Nor does a gaggle of geese present an insoluble riddle.  Some such nouns can refer to both human beings and animals, for example, troop and bevy. Apparently, all of them were coined because their existence served a useful purpose. However, many of us must have been puzzled by the enormous number of such words. What is a multitude of badgers called? Is there such a word? Oh, yes: cete. Does anyone speak about a cete of badgers? You bet. And look up a skulk of foxes. The Internet page is full of references. But back to our muttons, or rather, moutons.


One sheep, two sheep… many sheep.

In the remotest past, hund– must have meant “ten” rather than “hundred”; however, the picture is confusing. In Gothic, a Germanic language recorded in the fourth century, the word hunda (a neuter plural noun) means “a hundred” (like Latin centum). Yet taihun-tehund (read the digraph ai as English short e), either “ten-ten” or “tenth-ten,” depending on how we divide this word (not inconceivably, taihunte-hund), also existed and also meant “a hundred.” In Old English, we find similar words, for instance, hund-seofontig “seventy,” and wonder how hund “ten” and –tig, another word for “ten,” coexisted in one language and in one numeral.  There can be only one answer. By the time of our recorded monuments (and Gothic predates the texts in Old English by more than three centuries), at least some of those compounds must have become so opaque (“disguised”) that the tautology was no longer heard. Let us keep in mind that Engl. ten goes back to Old Engl. tēn and further to some form like Gothic taihun. Since Germanic h corresponds to non-Germanic k, the pair taihun ~ Latin decem is perfect. With regard to ten, whose distant origin does not interest us at this moment, we have no problems.


Napoleon’s hundred days: the peak.

The natural question arises whether hund– and ten, the alleged synonyms, can be related, and why have two words for “ten”? As we can see, they share a single sound, namely, n. Is this enough? Here we should consider several factors. From post to post (I was almost tempted to say “from pillar to post!”), I invoke the assistance of ablaut (vowel alternation, as in rise-rose-risen, get-got, and so forth). Ablaut is no longer productive. Whether one’s past tense of the verb wet is wet (on the analogy of set-set-set) or wetted, no adult speaker of Modern English, inspired by get-got, will suggest wot as its preterit. Some strange (atavistic?) words arise from time to time, for which I have no explanation: brolly for umbrella, wodge for wedge (both chiefly British), and frosh for freshman. They are usually dismissed as expressive formations, but that is probably how the venerable Indo-European ablaut arose: compare pit-a-pat, tit for tat, and many others like them. Within the context of the present discussion, we may consider such words as Engl. know (from cnāwan) and ken (compare Gothic kannjan “to make known”). A look at cn-, which is another spelling of kn-, and kan– shows that kn– has no vowel between k and n, while kan– does. The alternating vowels are called grades of ablaut, and, when the vowel is absent, as in kn-, the relevant term is zero grade. It arose when stress fell on the next syllable. If we need an analogy from the modern language, note that some people pronounce come on! as c’m on! (stress on the adverb and the “zero grade” of come) and canoeing as k’noeing.


That is a brolly indeed.

Fortified with this information, we may look at hund- ~ cent and ten. In Germanic, the zero grade was usually filled by the vowel u. And this is exactly the vowel we find in hund-. Consequently, the initial stage of hund– was hnd- in an unstressed syllable. Germanic h corresponds to non-Germanic k, just as t corresponds to d (taihun ~ decem). Thus, Engl. what, from hwæt, is a cognate of Latin quod (= kwod). Hund– (from hnd-) is a good match for cent(um), except that it has the zero grade, while centum has a so-called full grade. It appears that Indo-European did have two words for “ten.” In Germanic, they were represented by some forms like hnd– and tehn-.


How could that happen? Here historical linguists bend over backwards and reconstruct the protoform dek’m-tón (k’ is a symbol for a special kind of k and, as in the previous post, need not bother us here), which split into what we find in our texts. Is this a probable scenario? Let us say that it is not improbable. By the same token, the word for “hundred” may have sounded approximately as dakandakanda, like Gothic taihuntehund (or taihuntaihund; both forms have been recorded).


We can also ask why the Goths needed hunda and taihun-taihund for the noun or numeral. A specific difficulty for Germanic speakers consisted in distinguishing between 100, that is, ten times ten, and 120, a so-called great, or long hundred.  They used the decimal and the duodecimal systems, and this fact gives readers of Old English and especially of Old Icelandic some trouble, because in the sagas, the Vikings’ main occupation was fighting, and we constantly read that there were a hundred people on board. The statement means 120.


Contrary to the circus stunts we have witnessed above, –red in hundred poses no difficulties. It meant “reckoning; account; number” and is related to Latin ratio (compare Engl. ratio and ration), so that hundred must have meant “a hundred things” and was indeed a noun. Hundred, denoting an administrative division of a shire or county, got its name from the circumstance that it was reckoned as a hundred hides of land in Old English (this hide has nothing to do with hide “skin”).


Until late in the nineteenth century, educated people pronounced hundred as hunderd, and two variants were distinguished: solemn (hundred) and colloquial (hunderd), a curious case of metathesis. Incidentally, the Dutch for “hundred” is honderd, but German and Icelandic are close to English: hundert and hundrað.


Featured image credit: “Viking Boat” by OpenClipart-Vectors, Public Domain via Pixabay.


The post Two numerals: “six” and “hundred,” part 2: “hundred” appeared first on OUPblog.



Does “buying local” help communities or conflict with basic economics?

As summer approaches, picturesque roadside stands, farmers’ markets, and fields devoted to Community Supported Agriculture (CSA) dot the horizon from the Golden Gate to the Garden State. Consumers go to their local farmers’ market to keep spending local and, hopefully, to create jobs in the community. They “buy local” to reduce environmental impacts. Perhaps they believe that locally-produced goods simply taste better and have health benefits that non-locally produced food lacks. Some believe interacting with neighbors builds trust within the community, while others believe buying local offers greater food security.


But “buying local” conflicts with basic economic principles. Two hundred years ago, David Ricardo presented the theory of comparative advantage. He demonstrated that relative costs, not absolute costs, determine the efficient allocation of resources. The theory of comparative advantage explains why, for example, even though LeBron James may be the best typist in the Cavaliers organization, it makes economic sense that he does not write all his own messages. His time is better spent honing his basketball skills. Since relative costs vary, there are mutually beneficial gains from trade. The “buy local” movement resists Ricardo’s argument and attempts to explain why “local” makes more sense.


But does the welfare of local citizens improve when they “buy local”? Let’s consider the cost and benefits to better understand when buying local makes economic sense and when it does not.


First, the preferences of the consumer matter for welfare analysis. Suppose every Saturday morning ‘Chris’ goes to the local farmers’ market in order to purchase locally grown vegetables. Even if the local produce costs more than non-locally produced vegetables and is less efficient to grow, Chris is willing to pay that higher cost.


Second, proponents claim that “buy local” yields environmental benefits. Buying local allegedly limits the costs of pollution and cuts down on transportation costs. However, it is generally better for the environment for one large semi-truck full of a product to travel across several states than for one small pick-up truck to go back and forth repeatedly in the same community. The cost per food mile is lower in the former case.


Third, buying local may build social capital that supports long-term economic growth. Repeated interactions between buyers and sellers at the farmers’ market build trust within a community. Higher levels of trust tend to promote better governance and economic development.



Lettuce row agriculture by Pexels. Public domain via Pixabay.

Fourth, some backers claim that buying local provides food security for the local community. In case of a crisis, the availability of locally produced food may mitigate any potential harms. However, this encourages farmers to produce inefficient surpluses that may only be needed if a crisis occurs. A more economically sound approach recognizes that the costs of these inefficiencies exceed the benefits.


The formal model generally concludes that the traditional case for comparative advantage remains largely unaffected by these concerns. In fact, in many instances, the buy local movement harms the local economy. One of the basic tenets of economics is that two regions can be made better off through trade. Buying local generates inefficiencies that reduce social welfare. Policies intended to support the “buy local” movement result in a region producing a good in which it does not have a comparative advantage. The costs of such policies increase because local production forgoes the benefits of specialization and the division of labor.


Consider the case of negative externalities generated by foods brought in from distant locales. Proponents argue that the pollution generated by transporting non-local goods to local markets justifies buying local. However, if the externalities require some kind of public response, a Pigovian tax makes more economic sense than encouraging “buy local.” The tax addresses the source of the externality. Buying local leaves the externality in place and does not address the inefficiency associated with deviating from comparative advantage.


“Buy local” may raise welfare in the event that the local community wishes to raise the profits of local producers. Consumers pay a higher price for local food. They do so not to promote efficiency but rather to encourage local production and consumption. In this situation, the buy local movement has a compelling case.


While “buy local” may raise the profits of local producers, however, it decreases the welfare of other regions. Similarly, if all regions “buy local,” then all regions are hurt because of the absence of trade. The benefits of specialization disappear. In other words, a region should produce what it is relatively good at producing and then trade with others. It simply does not make sense to buy oranges produced in Michigan or coffee grown in Nebraska. If all residents of Michigan bought their oranges only from neighbors, they would hurt Florida producers and Michigan consumers. If other regions have a comparative advantage in growing and producing certain foods, let those foods be grown where it makes the most economic sense.


Recognizing the role of each of these costs and benefits is critical when analyzing the welfare effects of “buy local.” A theoretical model offers a useful, although not definitive, framework for weighing them.


Featured image credit: vegatable, basket and food by Markus Spiske. Public domain via


July 18, 2017

Embedded librarianship: the future of libraries

With the rise of the internet and electronic research resources, it is not uncommon for a librarian to hear that libraries are no longer necessary. “You can find anything on the internet” is an often heard phrase. What most of those people do not realize is how integrated librarians (and information scientists) are in organizing and providing information to the public. Libraries must be able to offer resources across multiple formats, not solely through the internet and single books, but through models and creative and organizational programs as well.


Librarians used to be seen as the gatekeepers of information, allowing access to the library’s resources; now the librarian’s role has pivoted to that of tour guide. No sane academic would deny the depth of information resources that technology has provided to modern society, but the issue lies with information literacy and with whether or how people use the information they find. While earning a Master’s in Library/Information Science, a budding librarian is taught how to absorb and analyze information, how to organize said information, and how to present that information to the library’s audience.


For many health science librarians, their audience is a wide range of people, from physical therapy and nursing students to physicians and clinical researchers. And with evidence-based medicine or evidence-based practice (EBM or EBP) on the rise, many librarians realize that to help guide users to the best resources, they need to be proactive, not reactive, in their teaching of information literacy. This means being embedded in the subjects for which they are offering resources. Librarians continue to provide the tools so that health providers can make the best practice decisions, but they also teach users how to get the best use out of those tools.


Increasingly, librarians are becoming embedded in teaching classes for medical students and in showing clinicians where to find the best practice guidelines. Plenty of these health science resources are freely available to the public but are not widely known, because WebMD is listed before them on a Google search results page. (While Google is a helpful tool, its search algorithm is based on the popularity of the websites that have been chosen from Google searches in the past.) Many health science librarians are becoming part of the academic and clinical teams that help to provide better health outcomes through research and evidence-based practices.


Now that librarians are charged with the task of making themselves actively involved in the learning process, rather than being there only when someone has a project due at the end of the course, it is important for them to stay on top of current research in their embedded areas. While webinars and listservs have been common continuing education staples for a handful of years now, subject-intensive conferences are also gaining more momentum.


Multiple regions across the United States (the Great Lakes, the Northeast and Southeast, Texas, etc.) are now offering “Science Boot Camps for Librarians”. These conferences offer concentrated learning about science-specific subjects from experts in their respective fields, which librarians can then take back to teach their users and students. Being able to look at multiple disciplines within the span of a few days also helps to connect subjects that may not have interacted before. For example, Augusta University in Augusta, Georgia, hosts one of three Medical Illustration programs in the United States. In addition to requiring a knowledge of advanced medical anatomy, the program must also connect with the latest and greatest in illustrative technology. Nowadays, the focus is on 3D printing and being able to do more than ever before with negative space. So much precision goes into one illustrative representation.


As a librarian embedded in the College of Allied Health Sciences at Augusta University, I have been fortunate to work with multiple fields such as occupational therapy, nutrition and dietetics, public health, respiratory therapy, medical illustration, dental hygiene, and others. Because these are clinically intensive disciplines, most of their preliminary students want just enough cursory knowledge to find information for a project that will amount to a grade. However, for those who delve further into research or realize how much evidence-based practice will help their fast-approaching career, the library is often the first place they turn.


As a librarian, I am there to perform duties such as guiding clinical laboratory students and faculty on how to perform systematic reviews (whose definition includes the term “exhaustive search”). I am here to teach what resources people should be looking for, why to use those resources, and sometimes to interpret those results for users. A couple of months ago, I needed to find dietary guidelines on a food preservative for the hospital’s nutrition services to make sure the transition to a new food service would not be harmful to patients.


Each discipline has different requirements. Public health often needs statistics and raw data on certain populations. Often the government agencies and foundations that do keep precise records on various health populations are not well known and will not pop up on the first page of a Google search.


It is common for me to enter a classroom, ask the students what they are working on, and then walk them through the resources that are available to them, explaining as I go why each one may or may not be helpful. “You’re doing a project on dementia? Well, you will want to look at PubMed because it will offer you the most resources and is essentially the Google of the health science world. You will also want to look at PsycINFO….”


It is a balance of finding out what resources the health sciences want and being able to provide the resources that users need, and the best way to find that balance is to have a librarian at the health science table.


Featured image credit: “Books, Library, Bookshelf” by smithcarolyn01, Public Domain via Pixabay.


The post Embedded librarianship: the future of libraries appeared first on OUPblog.



Why can mailboxes only be used for U.S. mail?

Because it is against Federal law to put anything “on which no postage has been paid” into a mailbox. If a person is caught doing so, they could be fined up to $5,000, and an organization could be fined up to $10,000. This is called the “Mailbox Restriction Law”, a restriction that does not exist in most countries. In addition, in the U.S., people receiving mail must pay for mailboxes, which have to meet government specifications, or provide slots in their front doors through which postal carriers deliver mail. The Postal Service also “owns” our mailboxes and sets all the regulations involving them. Why?


If you go to the USPS website, the answer you will find is that mailboxes could get so full of other items and papers that there would be no room for mail. Secondly, the USPS says it wants “to ensure the integrity of our customer’s mailbox,” meaning only postal workers are allowed to place or remove mail from our mailboxes. History teaches us that while all this is true, there is always more to the story.


Use of First Class mail and package delivery expanded sharply in the early 1900s. Commercial users of postal services found that the expense of postage was higher than if they delivered their own mail. They began using their own carriers to deliver what otherwise would be primarily First Class Mail, in order to avoid paying U.S. postage. At the time, the biggest source of revenue for the Post Office was First Class Mail, so private carriers were reducing the revenue coming into the postal agency. The U.S. Post Office went to Congress and asked for a law to constrain this competition by making it against the law for anyone else to use a mailbox. In 1934, the New Deal Democratic Congress complied, as the postal system had enormous political power within the Democratic Party. Every town and city had postal employees, and the “mailbox restriction” law (18 U.S.C. 1725) was voted in. This gave the Post Office what one government official described as “a virtual monopoly over mailboxes”. In addition, if any flyer or other item was found in the mailbox without postage, the Post Office could force the person putting it in to pay postage for it, even if it had not been delivered by postal carriers.


Did it work? Yes, more or less. It seemed as though the Post Office had crushed its competition—at least for a while. Flyers, advertisements, and newspapers continued to be delivered, but they were now stuck inside front doors, tucked underneath welcome mats, and left on stoops and in front yards. First Class mail still had to go through the Post Office.



Mailbox by ms.akr. CC BY-SA 2.0 via Flickr.

E-mail was introduced in the 1980s, followed by online shopping and banking in the 1990s and early 2000s. The volume of First Class mail dropped every year, and as the quantity dropped, the Postal Service increased the price of a First Class stamp, which motivated people to increase their usage of e-mail and to start paying their bills online. This further reduced the demand for First Class stamps. The utility companies that had initially created problems for the Post Office made it increasingly possible for bills to be paid online. Package delivery services, which offered better service, often at less cost than the Post Office, became widely available in the 1990s, taking even more business away from the Postal Service.


Until the arrival of the Internet and e-mail, the American postal system was the nation’s largest and most sophisticated information delivery infrastructure. Its power and legacy stemmed from its role in the movement of facts and all manner of paper-based reading materials. For example, the round “tunnel type” mailbox used in front of homes and businesses was designed by a postal employee in 1915 and is still popular a century later. The U.S. Postal Service therefore remains an important part of the nation’s information infrastructure.


In 2016, the Postal Service handled 154 billion pieces of mail, employed 600,000 people, and operated out of over 31,000 post offices. The total revenue from the U.S. mailing industry was $1.4 trillion; of this, $71.4 billion came from the U.S. Postal Service. First Class mail brought in $27.3 billion, still constituting the biggest part of its revenues. As the Postal Service likes to point out, “If it were a private sector company, the U.S. Postal Service would rank 39th in the 2016 Fortune 500.” The humble mailbox continues to be an integral part of our twenty-first century information infrastructure, even if the Post Office no longer has a lock on mail delivery.


Headline image credit: Mailboxes by Moosealope. CC BY-SA 2.0 via Flickr.


The post Why can mailboxes only be used for U.S. mail? appeared first on OUPblog.



Society is ready for a new kind of science—is academia?

In her 1998 essay in Science, Jane Lubchenco called for a “Social Contract for Science,” one that would acknowledge the scale of environmental problems and have “scientists devote their energies and talents to the most pressing problems of the day.” We were entering a new millennium, and Lubchenco was worried that the scientific enterprise was unprepared to address challenges related to climate change, pollution, health, and technology.


Twenty years later, our global challenges have only grown in complexity and urgency. Never before have we had such a clear understanding of our environmental crises and yet also been so far from delivering the investment in actionable research that Lubchenco called for. If the March for Science was any indication, researchers are ready to engage. But will universities acknowledge the need for reform?


Academic institutions are increasingly seen as elite enclaves, out of touch with real-world problems. We cannot afford to wait decades more for universities to provide the infrastructure and foster the culture needed to turn ideas into action. If we want science to serve society and the planet, as Lubchenco argued it must, we all must take responsibility for institutional innovation in five key areas. We need to:


1. Produce not only professors but also future environmental leaders


Few faculty members can mentor students interested in real-world problem-solving, because most do not engage in use-inspired science or cultivate the relationships needed. Employers are increasingly demanding hybrid skill sets, but most graduate programs produce individuals with highly specific training and uncertain prospects. More faculty conducting applied work will help, but institutions can do their part by incentivizing partnerships between scientists and practitioners, and providing training and career paths for scientists whose focus is engagement with business, government, and communities.


2. Cultivate a culture that values use-inspired research


In many basic-science departments, research with immediate relevance to societal issues is seen as second-class work. But the problems of the so-called real world are wondrously complex; they require a level of creativity that matches the most abstruse theoretical pursuit. Scientists need guidance on how to codevelop research with external partners and a greater appreciation for the time and resources required to effectively engage. And if scientists make this effort, then universities must incentivize this work by rewarding those who deliver real-world impacts in promotion and retention decisions. The bias against applied science needs to go extinct.


“When science is paralyzed by precision, society misses out on progress.”

3. Move ideas into action faster


The “price we pay for precision,” wrote Nobel Prize–winning economist Douglass North, “is an inability to deal with real-world problems.” If we have learned anything from the climate-change debate, it is that a small degree of uncertainty is not an excuse for inaction. Academics should emulate the tech sector and employ tools from design thinking to prototype ideas and iterate solutions with end users. Decision-makers and risk analysts can help researchers determine when they know enough to take action—and what the risks are for inaction. When science is paralyzed by precision, society misses out on progress.


4. Put people at the center of environmental science


People make decisions, shape policies, and face the consequences of environmental change. However, individuals and communities are largely sidelined in environmental research, too often seen as recipients of knowledge or objects of study rather than true research partners. Recent calls for scientists to “establish dialogues” with the wider world are valid, but fail to acknowledge decades of applied work at land-grant institutions and by social science on the human dimensions of natural-resource issues. Putting people front and center in environmental science requires natural scientists to prioritize partnership with the social sciences, arts, and humanities. Authentic partnership with individuals and communities can also expand the frontiers of traditional disciplines, leading to new insights.


5. Reimagine academic structures to encourage innovation


Many scientists are housed in discipline-specific departments with few incentives to collaborate; even fewer engage meaningfully in the broader world. Furthermore, academic administration, finance, and legal departments move slowly whereas external decision-makers need time-sensitive solutions. Even within land-grant institutions, applied departments (agriculture, natural resources, and agricultural economics) are separate from basic departments (biology, ecology, and economics). Progress will come in the form of outward-facing units, infrastructure dedicated to bridging science to practice, and new positions that reward impact. When institutions support work of societal relevance, researchers will not have to wait until tenure to explore controversial topics and to develop the partnerships that lead to long-term engagement and discovery.


There are signs of progress. For example, impact-oriented training programs for students, faculty, and leaders of all sorts are expanding in response to demand for applied skills. University and nongovernmental-organization partnerships and industry-university links have led to innovations, including technologies that detect and mitigate methane leakage; open-source software that enables leaders to account for nature’s contributions to society; and new financial models designed to fight poverty and expand access to clean energy. In all of these cases, the ingredients for success were the cultivation of partnerships, buy-in from university leadership, and researchers with the expertise to codevelop solutions with end users. Other bright spots include action-oriented policy institutes that link academics with decision-makers, new impact-oriented metrics for academic research, and university-sponsored grants employing evaluation criteria that prioritize impact over publications.


Individual initiatives, however, will not deliver solutions at the scale needed to address the formidable challenges of our time. We need systemic change spanning incentives, culture, and research design in order to cultivate a generation of scholars who will increase the relevance and influence of academia. It is time for university presidents, provosts, faculty, and philanthropists to double down on the interdisciplinary, solutions-oriented work that our complex world needs.


In February, Jane Lubchenco reiterated her call for a “quantum leap into relevance” driven by greater engagement and reforms that reward societal impact as a core responsibility of academia. We are living in times of revolution on many fronts. Perhaps one of them can be to reinvent our centers of learning—to harness their power to address the critical challenges of our time.


Featured image credit: glass, experiment, science by chuttersnap. CC0 Public Domain via Unsplash.


The post Society is ready for a new kind of science—is academia? appeared first on OUPblog.



Jane Austen’s writing – a reading list

Jane Austen wrote six novels and thousands of letters in her lifetime, creating a formula of social realism, comedic satire, and romance that is still loved today. Her works were originally published anonymously, bringing this now celebrated author little personal renown – with nineteenth century audiences preferring the Romantic and Victorian tropes of Charles Dickens and George Eliot. Since then, literary tastes and opinions have changed dramatically, and many people have written about, interpreted, and adapted Austen’s writings. But why do we like her stories so much? What can they tell us about her world, and ours?


We’ve brought together a selection of some of the questions (and possible answers) people have asked over the years in a reading list below. What else have you wondered about this iconic female writer?


Why is Austen so well-loved?


Free Indirect Filmmaking: Jane Austen and the Renditions (On Emma among Its Others)” by Ian Balfour, in Constellations of a Contemporary Romanticism, edited by Jacques Khalip and Forest Pyle


From spin-off books (Pride and Prejudice and Zombies, 2009) to countless TV shows and films, including straight adaptations and those that have been inspired by her life or stories (Clueless, 1995), Austen has a modern community of adoring fans. But is it her writing that has made her so famous and loved, or is it more the association of her name with all of these adaptations that surround her?


What can Austen’s language tell us?


Letter-Writing” in In Search of Jane Austen: The Language of the Letters by Ingrid Tieken-Boon van Ostade



“James Tissot – The Farewell, 1871” uploaded by Austriacus, Public Domain via Wikimedia Commons

It has been claimed that Austen wrote around 3,000 letters in her lifetime, seeing letter-writing as an important way to stay in touch and share news with her family and friends. These letters don’t only provide an insight into her life, but they also show the process of writing letters – from the materials she used to the postal service itself. In this chapter, Tieken-Boon van Ostade examines the language of Austen to explore the importance of letters in her life.


What can we learn from Austen’s novels?


Moral Development in Pride and Prejudice” by Alan H. Goldman, in Fictional Characters, Real Problems: The Search for Ethical Content in Literature, edited by Garry L. Hagberg.


In Pride and Prejudice Austen shows the moral development of her protagonists in direct contrast to the minor characters of the story. Through Elizabeth Bennet and Mr. Darcy, Goldman argues that Austen helps us (as readers) to reflect on our own moral growth and, in turn, start to fully understand how hard it is to reach full moral maturity.


What do Austen’s stories reveal about her world?


Tory Daughters and the Politics of Marriage: Jane Austen, Charlotte Brontë, and Elizabeth Gaskell” in Nation & Novel: The English Novel from its Origins to the Present Day by Patrick Parrinder


During the Victorian era the ‘practice’ of marriage marked the division between the gentry (those just below the nobility in a good social position) and the aristocracy (those in the highest class in society). Marriage is a common theme in Austen’s works, which frequently reflect the very real nuptial anxieties of early nineteenth century society.


How should we read Austen’s works today?


Why We Reread Jane Austen” in Why Jane Austen? by Rachel Brownstein


Austen’s novels may seem trivial and unexciting when compared with the abundance of media available today – there are no car chases in Emma or vampires in Pride and Prejudice – and Austen seems obsessed with the small details. In this chapter Brownstein argues that you have to teach Austen’s novels very carefully, coaching students to look past reading only the plot, and helping them to see Austen’s “fabric of words”.


Featured image credit: Photograph by Annie Spratt. Public Domain via Unsplash.


The post Jane Austen’s writing – a reading list appeared first on OUPblog.



July 17, 2017

Planetary astronomy in ancient Greece

As eclipse 2017 quickly approaches, Americans—from astronomers to photographers to space enthusiasts—are preparing to witness the celestial wonder that is totality.


Phenomena found within planetary science have long driven us to observe and study space. Through a shared desire to dismantle and reconstruct the theories behind our solar system, ancient Greek philosophers and scientists built the foundation of planetary astronomy.


The following shortened excerpt from The Oxford Illustrated History of Science discusses the evolution of planetary astronomy in ancient Greece.


The Republic of Plato ends with a cosmic vision. A hero named Er is killed in battle. His body lies on the field for ten days but does not decay. When Er comes back to life, he tells his companions what he saw while he was out of this world. Here Plato draws on the familiar sight of a woman spinning yarn, for Er saw the spindle of Necessity (Anangke, personified as a woman). The spindle and the yarn represent the axis of the universe, while the spindle whorl (the spinning bob to which the newly formed yarn is attached) represents the cosmos itself. But the whorl that Er saw is not like ordinary spindle whorls. Rather it consists of eight whorls nestled one inside another. Plato says they are like nested boxes (but we might think of Russian dolls). The outermost whorl is the sphere of the fixed stars. Nested inside are the whorls for the five planets and the Sun and Moon. Thus each celestial body is carried around on its own spherical shell. The outer whorl turns westward (representing the daily rotation of the cosmos), but the inner ones rotate within it, to the east, each in its own characteristic time (representing the motions of the planets around the zodiac). Riding on each whorl is a Siren who sings a single clear note. This is Plato’s nod to the Pythagorean doctrine of celestial harmony. And ranged round the whole affair are the three daughters of Necessity, the Fates who were mentioned by Hesiod in the Theogony. Clotho, with her right hand, helps to turn the outermost whorl. Atropos, with her left hand, helps to turn the inner spheres. And Lachesis, alternately with either hand, touches one then the other. This seems to be a reference to the three movements of a planet—the westward daily motion, the eastward motion around the zodiac, and the oscillation responsible for the occasional retrograde motion.



“Representation of Ptolemy” by Blanche Marantin and Guillaume Chaudiere, Paris, circa 1584. Public Domain via Wikimedia Commons.

This is the first appearance in literature of the ‘cosmic onion’—the view of the universe as a set of concentric spherical shells. Plato here stands midway between science and myth. On the one hand, he geometrizes the cosmos, postulating a simple model to explain the complex motions of the planets, but on the other hand he has draped his image in traditional mythology.


Nevertheless, geometers took this model seriously. Eudoxus of Cnidus wrote a book On Speeds in which he considered a planet that rides on the innermost of a set of four nested spheres. That is, each of the planetary shells in Plato’s account now consists of four nested spheres. The outermost sphere for each planet is responsible for the daily rotation. The next sphere in is responsible for the eastward motion around the zodiac. And the innermost two spheres together produce a figure-of-eight, back-and-forth motion that accounts for retrograde motion. So, in Eudoxus, the Fates have been removed and replaced by rotating spheres. Plato and Eudoxus overlapped in Athens, so we have no way to know whether the geometer was inspired by the philosopher, or the philosopher poeticized the models of the geometer. Eudoxus’ work has not survived, but we have a short account of it in Aristotle’s Metaphysics and it was still known to Simplicius in the sixth century CE.


The nested spheres of Eudoxus were soon abandoned in planetary theory (though they dominated cosmological thought until the Renaissance). Ancient critics pointed out that in this system, although a planet is slung about on multiple spheres, since each sphere is concentric with the Earth, the planet’s distance from the Earth never changes. This made it hard to understand how some planets vary in brightness in the course of their cycles. Mars, for example, is much brighter in the middle of its retrograde motion.


Around 200 BCE, Apollonius of Perga discussed the theory of epicycles and deferent circles. The new idea is that each planet travels around a circle called the epicycle, while the centre of the epicycle moves around the Earth on another circle, called the deferent. Both of these motions take place at uniform speed, in keeping with the nature of celestial things. Retrograde motion occurs when the planet is close to the Earth, on the inner part of the epicycle. For then the westward motion on the epicycle is more than enough to overcome the eastward motion on the deferent. At first, the model was intended only to be broadly explanatory, and to provide a field of play for the geometer. Apollonius’ theory explains how a planet could move around the zodiac and occasionally retrograde while really executing a combination of uniform circular motions. It also nicely explains why Mars is brightest in the middle of retrograde motion. The theory (being planar) was also mathematically far simpler than Eudoxus’ spherical system.
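To make the geometry concrete, the short sketch below adds the two uniform circular motions together and checks when the planet’s apparent longitude, as seen from a central Earth, runs backwards; the radii and angular speeds are arbitrary illustrative values, not the parameters Apollonius or Ptolemy would have used for any real planet.

```python
import numpy as np

# Arbitrary illustrative parameters (not the values for any real planet).
R_DEFERENT = 1.0   # radius of the deferent, centred on the Earth
R_EPICYCLE = 0.4   # radius of the epicycle
W_DEFERENT = 1.0   # eastward angular speed of the epicycle's centre
W_EPICYCLE = 6.0   # angular speed of the planet around the epicycle

t = np.linspace(0.0, 2.0 * np.pi, 2000)

# Centre of the epicycle travels uniformly around the deferent.
cx = R_DEFERENT * np.cos(W_DEFERENT * t)
cy = R_DEFERENT * np.sin(W_DEFERENT * t)

# The planet travels uniformly around the epicycle; its position is the
# sum of the two circular motions.
px = cx + R_EPICYCLE * np.cos(W_EPICYCLE * t)
py = cy + R_EPICYCLE * np.sin(W_EPICYCLE * t)

# Apparent ecliptic longitude of the planet as seen from the Earth (origin).
longitude = np.unwrap(np.arctan2(py, px))

# Wherever the longitude decreases, the planet appears to move westward
# against the zodiac: retrograde motion. This happens on the inner part of
# the epicycle, when the planet is also closest to the Earth and brightest.
retrograde = np.diff(longitude) < 0
print(f"Fraction of the cycle spent in retrograde: {retrograde.mean():.2f}")
```

Because both rotations are eastward, the motion on the epicycle cancels and then overwhelms the motion on the deferent only along the inner arc, which is also why the same construction accounts for a planet brightening in mid-retrograde.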


Ptolemy’s work came at the very end of the creative period of Greek science. And in the case of deferent-and-epicycle theory, he built on three hundred years of work. But Ptolemy also introduced a new idea. For Ptolemy allows the centre of the planet’s epicycle to travel at a non-uniform speed around the deferent. The non-uniformity is, however, expressed in the language of uniformity. Ptolemy imagines a third centre, distinct from the Earth and from the centre of the deferent. This third centre, which is the centre of uniform motion, came in the Middle Ages to be called the equant point. If we could stand at the equant point of Mars, we would see the centre of Mars’s epicycle travelling around us at a uniform angular speed—about half a degree per day. The complete theory—epicycle and deferent, with the deferent off-centre from the Earth and a separate equant point—is very successful. For the first time, it became possible to calculate the positions of planets accurately from a geometrical theory.


Featured image credit: “Takeshi DSC 0590” by Takeshi Kuboki. CC BY 2.0 via Wikimedia Commons.


The post Planetary astronomy in ancient Greece appeared first on OUPblog.



July 16, 2017

Winter is coming: the zombie apocalypse on TV

Zombies have swept across the planet. We see it in movies, of course—from one-third to one-half of all zombie apocalypse films have been released since 9/11, among them such works as 28 Days Later, Shaun of the Dead, and Zombieland. We see it in literature, including the Pulitzer Prize-winning novel The Road by Cormac McCarthy, the acclaimed literary novel The Making of Zombie Wars by Aleksandar Hemon, amazing short stories by Manuel Gonzales like “Escape from the Mall,” and in comics like Afterlife with Archie, Marvel Zombies, and DC’s Dark Night event. We see it in games and apps, in zombie runs and zombie pub crawls, in stick-figure zombie families on the back window of cars and SUVs.


But maybe the greatest horde of zombies these days is on television, from cult shows like iZombie and Santa Clarita Diet to two of the most popular shows on the planet, The Walking Dead and Game of Thrones. The Walking Dead has recently wrapped its seventh season, but Game of Thrones is getting ready to ramp up for its final two seasons, with the world premiere of its Season Seven coming tonight.


Sometimes people forget that Game of Thrones is a Zombie Apocalypse narrative, because in the world of Westeros, although winter is coming, the dead have not yet swept completely over the living. Still, the threat of apocalypse looms large. While rulers and potential rulers jockey for position, execute rivals, and prosecute wars, their petty machinations all take place against the backdrop of the oncoming night that never ends. This is made obvious at the conclusion of the Game of Thrones Season Five episode “Hardhome.” After a hard-fought battle, the living dead not only reduced a Wildling village to ruin, but, in front of a disbelieving Jon Snow (Kit Harington), the Night’s King raised his hands and all the fallen rose to new life in death. The pretensions of all the humans playing their Game of Thrones are suddenly, painfully, revealed. As the Atlantic’s Amy Sullivan writes, “Is it over yet? And by ‘it,’ I mean all of humanity? There’s nothing like a horrifying White Walker infestation and bloodbath to put things in perspective.”


What hope is there against an enemy such as this that grows more powerful with every human who falls?


As we say in my house, “I predict disaster.”


It is this backdrop of human beings in danger, both in The Walking Dead and Game of Thrones, that makes them such powerful post-9/11 and post-7/7 narratives. As The Walking Dead writer/executive producer Angela Kang and I Am Legend writer Mark Protosevich told me in a public interview at the Austin Film Festival, these stories are not about zombies. They’re about survival in a world full of oncoming menace. In such a world—a world actually very much like our own, full of multiple threats—what will people do to survive? What choices will they make? And how will those choices save them—or destroy them?



Publicity picture of the Night’s King in the episode Hardhome, from the HBO series’ Game of Thrones. (c) HBO via Game of Thrones Wiki.

In Game of Thrones, the character who has had the most contact with the walking dead is Jon Snow. He has been attacked by—and dispatched—wights, the Game of Thrones version of zombies. He has battled and beaten one of the White Walkers, the supernatural creatures who animate them. He has thrown his weight and influence behind the rescue of barbarians from north of the Wall where he and the Night’s Watch guard the Realms of Men—and ended up getting killed for doing the right thing! Finally, in Season Six, he is restored to life, not reanimated like a zombie, but resurrected to rejoin the battle against the oncoming dead. Clearly someone or something approves of his actions, and has decided that Jon Snow is not done yet.


Jon Snow’s decisions highlight the ethical challenges that characters in the Zombie Apocalypse, and we in the post-9/11 West, confront in the face of our fear: will we contract, react with fear and suspicion, and throw up walls? Or will we act with kindness and compassion, build communities, and band together against common threats to our species? In this brave new world that has such people (and monsters) in it, how much is too much? How far too far? Early in The Walking Dead, Rick Grimes (Andrew Lincoln) tries to hold onto the values of the old world, but he soon discovers that it can be hard even to know—let alone do—what is right. In the Second Season episode “Bloodletting,” Rick reenters a church they have cleared of walkers. He approaches an image of the crucified Jesus hanging over the altar and, although he says he is not a religious man, he prays for guidance, for “some indication that I’m doing the right thing. You don’t know how hard that is to know. Well…maybe you do.”


What makes these shows about the Zombie Apocalypse so powerful is that while our own monsters don’t look like walking corpses, we too have them by the horde, and we too wrestle day by day with the question of how to know the right thing to do in response to all the threats we face, let alone to do it.


Featured image credit: Zombie Walk 2012, São Paulo, Brazil by Gianluca Ramalho Misiti. CC BY 2.0 via Wikimedia Commons.


The post Winter is coming: the zombie apocalypse on TV appeared first on OUPblog.


