Oxford University Press's Blog, page 491
July 7, 2016
Which mammal are you? [quiz]
Mammals are defined as warm-blooded vertebrates distinguished by the possession of hair or fur, whose females secrete milk for the nourishment of their young, and which typically give birth to live young (with five known exceptions, including the duck-billed platypus). Small mammals evolved from reptiles about 200 million years ago and diversified rapidly after the extinction of the dinosaurs, but they have been roaming the planet for a remarkably short time compared with the older classes of animals.
Different orders of mammals have survived for millions of years by adapting to their environment. This process of evolution has honed certain characteristics within families, such as fur to keep warm, a circulatory system that allows the regulation of body temperature, and the possession of a neocortex (the region of the brain involved in functions such as sight and hearing).
The neocortex in humans supports higher functions, such as sensory perception, lending weight to the idea that humans are the most intelligent mammals on the planet. Human evolution began several million years ago, when apes started to walk upright, but it is only in the last 30,000 years or so that our species (Homo sapiens) has spread across the world. Did we outsmart our mammalian cohabitants, or did we pick up some of their characteristics in order to survive? Can these be traced in our personalities today?
Can we match up your personality traits to those of our mammalian friends? Find out which mammal you most closely resemble!
Featured image credit: Wildebeest in Masai Mara during the Great Migration by Bjørn Christian Tørrissen. CC BY-SA 3.0 via Wikimedia Commons.
Quiz background image credit: Sheep agriculture by PublicDomainImages. Public domain via Pixabay.
Quiz outcome image credits: Elephant Boy Botswana by hbieser. Public domain via Pixabay; Meerkat family by meineresterampe. Public domain via Pixabay; Common dolphin, Delphinus genus by NOAA NMFS. Public domain via Wikimedia Commons; Cougar Puma by skeeze. Public domain via Pixabay; and Bradypus by Stefan Laube. Public domain via Wikimedia Commons.
The post Which mammal are you? [quiz] appeared first on OUPblog.

Astronomy’s next big thing: the Square Kilometre Array
When I started research in radio astronomy in 1947, the only known sources of cosmic radio waves were the Sun and the Milky Way. Observing techniques were simple: receivers were insensitive, and there was no expectation that other radio sources could be located, or even existed. A few years later, a whole vast radio sky was revealed, populated with supernova remnants, galaxies, and quasars. New techniques followed, with sensitive receivers and the big dishes we now call radio telescopes.
Radio astronomy is due to take another huge leap forward from late 2016, when the construction of the Square Kilometre Array (SKA) begins. Combining the techniques of radio astronomy, telecommunications, and vast computer power, the SKA will in due course provide a completely new level of information about objects such as distant galaxies.
The great leap forward in radio astronomy came in the early 1960s, from a technique peculiar to the field called aperture synthesis. Radio receivers have a fundamental advantage over optical detectors: they collect radio waves as voltages rather than detecting them as energy. ‘Aperture synthesis’ is the technique by which the signals from a large number of different receivers, some separated by large distances, are combined to create the equivalent of a single gigantic radio telescope, with the collecting area and sensitivity of the whole array.
This is the basis of the SKA – thousands of individual small radio dishes will be combined, making a single telescope with orders of magnitude greater sensitivity than existing radio telescopes. Furthermore, the individual components will be spread over a large area, which is important since the precision with which maps of the sky can be made depends on the spacing between the components of the array. The signals from the individual elements will be combined to form a signal ‘beam’ that maximizes information from a region of the sky, and with modern data processing a number of independent ‘beams’ can be formed simultaneously within a large area. The result is that many regions of the sky can be observed at the same time.
“Thousands of individual small radio dishes will be combined, making a single telescope with orders of magnitude greater sensitivity than existing radio telescopes (…) many regions of the sky can be observed at the same time.”
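The delay-and-sum idea behind aperture synthesis can be sketched in a few lines of NumPy. The geometry below (16 antennas 50 metres apart, a 1.4 GHz source) is purely illustrative and not an SKA configuration: each element records the incoming plane wave as a complex voltage, and applying compensating phase shifts before summing "steers" a beam, so that the amplitudes add coherently in the chosen direction and largely cancel elsewhere.

```python
import numpy as np

# Delay-and-sum beamforming sketch for a one-dimensional array.
# All numbers are illustrative, not SKA specifications.

c, freq = 3.0e8, 1.4e9             # speed of light (m/s); 21 cm band (Hz)
n_ant, spacing = 16, 50.0          # number of antennas; separation (m)
x = np.arange(n_ant) * spacing     # antenna positions along a line

def element_voltages(theta):
    """Complex voltage at each antenna for a unit plane wave
    arriving from angle theta (radians from zenith)."""
    delay = x * np.sin(theta) / c              # geometric delay per element
    return np.exp(-2j * np.pi * freq * delay)

def beam_power(theta_source, theta_steer):
    """Power of the summed array output when the compensating
    phases are chosen to steer the beam towards theta_steer."""
    v = element_voltages(theta_source)
    w = np.conj(element_voltages(theta_steer))  # undo the assumed delays
    return np.abs(np.sum(w * v)) ** 2

on = beam_power(np.deg2rad(20), np.deg2rad(20))   # steered at the source
off = beam_power(np.deg2rad(20), np.deg2rad(25))  # steered 5 degrees away

print(round(on))   # 256 = n_ant**2: the voltages add coherently on-source
print(off < on)    # True: the response falls away from the steered direction
```

Because the steering happens in arithmetic rather than in hardware, the same recorded voltages can be re-summed with several different phase sets at once, which is how multiple independent beams are formed simultaneously.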
Radio, optical, X-ray, and gamma-ray telescopes all have the same task of mapping the multitude of sources of radiation in the sky. The fundamental differences in technique between these regimes are due to the huge range of wavelengths in the electromagnetic spectrum. Radio telescopes deal with wavelengths around a hundred thousand times longer than those of light, which accounts for the difference in scale between the SKA and the largest optical telescopes. Forming pictures of the sky, or of individual objects, has to be done entirely differently: the CCD detector arrays universally used in cameras and large optical telescopes do not work for radio waves. Combining the signals from the elements of a radio telescope array must be done entirely within electronic circuits, with the result presented as the output of a digital computer.
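The scale factor quoted here is easy to check. Taking the 21 cm hydrogen line as a representative radio wavelength and 500 nm green light as a representative optical one (both choices are illustrative, not from the original text):

```python
radio_wavelength = 0.21      # metres: the 21 cm hydrogen line
optical_wavelength = 500e-9  # metres: green light, mid-visible spectrum
ratio = radio_wavelength / optical_wavelength
print(round(ratio))          # 420000, i.e. a few hundred thousand times longer
```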
Aperture synthesis radio telescopes started with only two elements, which could be moved to successive spacings in a long series of observations. This was very early in the digital era, and the recordings, made at Cambridge on punched paper tape, were analysed by EDSAC, one of the first stored-program digital computers. As the capabilities of recording, transmitting, and processing data developed, multi-element arrays grew in size. Arrays with large numbers of elements, spread out in various ways, are now in use for different regions of the radio spectrum: the eMERLIN array in the UK uses six fixed telescopes spread over more than 200 km; the GMRT in India has 30 large dishes spaced up to 25 km apart; the Very Large Array in New Mexico comprises 28 parabolic dishes spaced up to 35 km apart; and ALMA, the millimetre-wave array in the Atacama Desert in Chile, has 66 antennas spaced up to 15 km apart. Much larger spacings are possible at long radio wavelengths: the LOFAR array, based in the Netherlands, spreads over 1000 km across six European countries.

Building on the experience of these powerful array telescopes, astronomers were able to design the international SKA project. This would assemble elements with a combined collecting area hundreds of times larger than existing arrays, spread over large distances, and connected so that multiple beams could be used simultaneously, operating over a wide range of wavelengths from several metres to around one centimetre. A project of this scale could only be achieved as an international collaboration. Selecting a site was critical: there are very few places where the level of radio interference is low enough and where a large flat area is available. Two desert areas were chosen, in Australia and South Africa; the SKA will be divided between them. Construction of Phase 1 starts this year in both locations; the full SKA will take ten years to complete.
The SKA will look very different at the two locations. The short-wavelength array, in South Africa, will use 2000 steerable dishes (200 in Phase 1), each 15 metres in diameter, while the long-wavelength array in Australia will use half a million small fixed antennas (130,000 in Phase 1). Prototypes of both already exist, known respectively as MeerKAT and the MWA. Although the majority of the array elements will be concentrated in areas several kilometres across, there will be outlying elements spread widely over the African and Australian continents, and both SKAs will extend to intercontinental baselines.
Ten member countries, and over 200 organisations, are contributing to the design of the SKA. The dish and antenna designs already exist. The immense scale of data processing involved in interconnecting the arrays and forming multiple beams simultaneously at several frequencies is hard to grasp: a supercomputer will be needed working at some hundred petaflops (equivalent to 100 million domestic PCs). The length of connecting optical fibre would stretch several times round the Earth, and the total digital traffic will exceed the present-day traffic on the World Wide Web.
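The "100 million domestic PCs" comparison can be sanity-checked with a line of arithmetic, assuming a modest sustained throughput of about one gigaflop per machine (an illustrative assumption, not an SKA figure):

```python
pc_flops = 1e9               # assumed sustained flops of one domestic PC
n_pcs = 100e6                # one hundred million machines
total_flops = pc_flops * n_pcs
print(f"{total_flops:.0e}")  # 1e+17, i.e. on the order of 100 petaflops
```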
We already know that the SKA will penetrate to the early years of the Universe, test Einstein’s General Relativity, and tackle the mysteries of neutron stars and black holes. There will be much more; a huge step in observing power is coming as the SKA is built over the next decade. Astronomy will be transformed.
Featured image credit: Artist’s impression of the 5km diameter central core of SKA antennas. CC BY-SA 3.0 via Wikimedia Commons.
The post Astronomy’s next big thing: the Square Kilometre Array appeared first on OUPblog.

July 6, 2016
Clouds with and without a silver lining
Engl. cloud belongs so obviously with clod and its kin that there might not even be a question of its origin (just one more lump), but for the first recorded sense of clūd in Old English, which was “rock, cliff.” Some etymologists even doubted whether we are dealing with the same word (Skeat’s reference to the old root meaning “stick together” does not go far enough for “rock”): perhaps in the remote past English had clūd “rock” and its homonym clūd “cloud”? This was the opinion of Friedrich Kluge. I could not find the place in the early editions of his dictionary where he says so (perhaps this statement occurs elsewhere; unfortunately, the reference does not show up in my bibliography). However, in 1899 a thin book called English Etymology appeared in London. Its authors were Friedrich Kluge and Frederick Lutz, but Lutz must have been only the translator of the German text. Although the booklet was widely consulted, it contains little durable information. In the entry cloud, one reads that OE clūd and Engl. cloud are “scarcely identical…. source and history quite unknown.”
Kluge missed the solution, while the OED came close to it, for in the entry on cloud James Murray mentioned cumulus. One more step would have broken the spell on the puzzle. Weekley also realized that cumulus contained the key to the riddle but did not go further. The editors of The Oxford Dictionary of English Etymology removed the reference to cumulus and only said that cloud is “probably” related to clod. They also repeated the well-known fact that clūd had superseded Old Engl. wolcen (a word already featured in the present series on kl-formations). Wolcen is related to the now archaic welkin “sky” and has cognates elsewhere in West Germanic.
The question that should have been asked is: “What was the difference between wolcen and clūd?” I think we can risk an almost certain answer. Let us look at the situation in Russian. That language has two words: oblako “cloud” and tucha “dark cloud” (stress on the first syllable in both). The mass of vapor we see in the sky can be white or nearly black, and it is convenient to have a separate word for each. Russian also has morok (now used only in a figurative sense, but in the past it meant “dark cloud,” a synonym of tucha; compare mrak “darkness”). Among the Indo-European cognates of tucha we find nouns and verbs designating “thunder” (so in Gothic) and (perhaps unexpectedly) “rainbow” (so in Polish; the thunderstorm has apparently passed). But what matters is that many allied words of tucha mean “compact; thick; dense; compressed; strong; coagulated” (by the way, the Russian adjective tuchnyi means “fat”). All of them point to some sort of accumulation of matter (remember cumulus!).

The English word cumulus has no menacing overtones. When we see round masses heaped one on the other in the sky, we may think of thunder, but usually we don’t, because the color of the cumulus is more often light than dark. I believe that Old English had a pair of words similar to Russian tucha and oblako and that clūd corresponded to tucha. It denoted a dark storm cloud, while wolcen was reserved for the counterpart of oblako. The origin of wolcen is not entirely clear. It may even be related to oblako (which means “something that enwraps”), but the traditional hypothesis connects it with words for “wetness; moisture.” Anyway, if my guess is right, a wolcen did not make people think of rain and thunder, while a clūd did. And it was the clūd that looked like a rock or a cliff: it resembled a lump (the meaning present in so many kl-words), something hard and strong. Indubitable analogs of a cloud likened to a rock or a cliff are hard to find, but Sanskrit ghana– “solid mass” and possibly “mountain” has been attested with the sense of “cloud.” For the disappearance of Engl. wolcen we have no explanation, but it made English poorer.
Even though many kl-words are sound-imitative or sound symbolic, it does not follow that they can have no cognates outside Germanic. So far, we have looked at the kl-words that were limited to West Germanic, but, when researching onomatopoeia and sound symbolism, one can expect to run into suspicious siblings all over the map. As always, I’ll leave out unprofitable fantasies but will mention the proximity of cloud, understood as a mass, and crowd. Francis A. Wood, a leading American etymologist a hundred years ago, whose name now appears in many dictionaries only to be reprimanded and shown to be wrong, but whose conjectures are a constant source of inspiration (at least to me), once wrote an article titled “Rime-Words and Rime-Ideas” (rime stands for rhyme) and compared cloud and crowd. He did not say that they were related; yet he pointed out that words tend to form alliances regardless of their origin.

The only non-Germanic noun that has been suggested as a congener of cloud is Classical Greek gloutós “buttocks, rump.” The comparison was prompted by a perfect phonetic match (I’ll dispense with the details). But the meanings made researchers wonder. As long as we agree that cloud referred to an object that looked hard and solid (“a cliff, a rock, a mountain”), one’s backside may fit the agenda. Gloutós appeared in respectable dictionaries long ago, and quite a few cautious etymologists still find this derivation feasible, even though they usually hedge enough to secure a retreat. If gloutós belongs here, then Slovenian glúta “swelling” also does. Germanic kl– corresponds to non-Germanic gl-, and gl– occupies a prominent place in the formation of sound-imitative and sound symbolic words everywhere.

Thus, the rights of cloud to belong to the venerable kl-group have been restored. It may be of some interest to our readers to know what ideas give rise to designating “cloud” in various languages. The choices are not too numerous. Wetness, enwrapping, and being a lump-like mass have already been mentioned. The English word sky goes back to the concept of “shadow” and means “cloud” in all the Scandinavian languages (sky is indeed a borrowing from Scandinavian: Old Icelandic ský; it pushed aside but did not kill the native heaven). The union of “cloud” and “sky” is common. Equally common is the union between “cloud” and “mist.” Here, the anthologized example is Latin nūbēs and its continuation in the Romance languages (Italian and Spanish nube, French nue and nuage). Engl. nebulous makes us think of mist and fog (and so does German Nebel), but the semantics of such words is fluid: Russian nebo means “sky” (perhaps at one time it meant “overcast skies”; we don’t know). The situation in English (the development from “congested mass resembling a rock” to “cloud of any shape and color”) is far from the most common one, but, if we take into consideration the difference between the clouds driven by the wind (Shakespeare called them rack) and dark storm clouds, a portent of a thunderstorm, and remember that the modern language has one word for “cloud” instead of two, we will see that there is nothing out of the ordinary in this choice.
I promised a silver lining. Surely, having a viable etymology of a hard word qualifies for one.
Images: (1) Storm Clouds by Unsplash, Public Domain via Pixabay. (2) Sun Clouds by azalee36, Public Domain via Pixabay. (3) “head in the clouds” by Charles Kremenak, CC BY 2.0 via Flickr.
Featured image: Sky by giografiche, Public Domain via Pixabay.
The post Clouds with and without a silver lining appeared first on OUPblog.

Philosopher of the month: Hypatia
This July, the OUP Philosophy team honours Hypatia (c. 355–415) as their Philosopher of the Month. An astronomer, mathematician, philosopher, and active public figure, Hypatia played a leading role in Alexandrian civic affairs. Her public lectures were popular, and her technical contributions to geometry, astronomy, number theory, and philosophy made Hypatia a highly regarded teacher and scholar. She was at one time the world’s foremost mathematician.
Hypatia was the daughter of the mathematician Theon, and taught both mathematics and philosophy in what was then the Greek city of Alexandria. She was widely acclaimed during her lifetime, and achieved recognition in several fields of mathematics including algebra, geometry, and astronomy. It is known that she wrote commentaries on the Conics of Apollonius of Perga, an early work of higher geometry, and the Arithmetic of Diophantus, treating what today would be called number theory. In astronomy, she published a table of some sort—opinions differ on its precise nature—and collaborated in the design of an astrolabe.
Her philosophy is known to have been Neoplatonist. This somewhat imprecise term covers many variant doctrines, but all derive from Plato’s theory of forms and endow it with a religious dimension. Neoplatonism aspired to a spiritual world and saw material reality as a poor shadow of that world. In the context of fourth- and fifth-century Alexandria, where the prevailing religion was by then Christianity, it was seen as a form of paganism.
Hypatia’s Neoplatonism led her to a life dedicated to the service of knowledge and learning, with mathematics being an important key to the life of higher contemplation. Neoplatonism envisages the use of abstraction from individual instances to the Platonic forms (such as truth, beauty, etc.). Further abstraction leads the adept to the One, an underlying principle of all nature. Mathematics thus held a special place for many Neoplatonists. Hypatia’s technical mathematical activity is best seen as a continuation of that of her father, who was concerned with conserving the classic works of Ptolemy and Euclid. Hypatia collaborated with him in his work on Ptolemy but also sought to extend the program by producing commentaries on the subsequent work of Apollonius and Diophantus.

A respected, charismatic teacher beloved of her pupils, Hypatia taught the intricacies of technical mathematics and astronomy to her students—among them Synesius of Cyrene, whose letters to Hypatia inform our knowledge of her today. Unlike other professors, Hypatia was not content to live quietly, doing her research and teaching her classes. Instead she chose to play an active role in Alexandrian public life, undergoing administrative and political training, and establishing relationships in the government of Alexandria and beyond.
Prior to Hypatia’s death, Alexandria was shaken by a series of civic disturbances involving three principal groups: Christians, pagans, and Jews. The city was beset by interfactional rivalry among them, and this rivalry often took violent form. The Christians of the time were headed by their archbishop, Cyril (St. Cyril of Alexandria), a towering intellectual figure but a man of violent and quarrelsome personal disposition. Cyril was certainly fully involved in earlier episodes of civil disorder, especially an attempt to expel the Jews from the city. What part, if any, he played in Hypatia’s murder has been hotly but quite inconclusively debated.
Today, Hypatia remains a widely accepted feminist symbol, and has inspired countless works of literature, visual art, and film.
Featured image: aerial photo of the Nile River Delta. Public domain via Pixabay.
The post Philosopher of the month: Hypatia appeared first on OUPblog.

The Corn Laws and Donald Trump
One of the issues that distinguishes Donald Trump from mainstream Republicans (aside from his bigotry towards Mexicans, women, and Muslims) is his opposition to free trade, which has been a staple of Republican ideology since shortly after World War II.
Trump’s repudiation of free trade flies in the face of economic opinion. A recent University of Chicago survey of a diverse group of high-profile economists found that a substantial majority believe that free trade agreements have benefited most Americans.
Trump’s break with free trade is reminiscent of an about-face almost exactly 170 years ago, when a British politician made a similarly abrupt policy reversal (albeit in the opposite direction), losing his job and condemning his party to nearly three decades in the political wilderness.
Sir Robert Peel became prime minister after the British general election of 1841, in which his Conservative Party won a clear majority of both the popular vote and seats in Parliament. The Conservatives were staunch proponents of the Corn Laws, decades-old legislation that imposed tariffs on imported grain. These tariffs raised grain prices and made it easier for more expensive British grain to compete with cheaper imports from central and eastern Europe. Because the Corn Laws favored domestic agricultural interests, British and Irish landholders were strong supporters of the Conservative Party.
The outbreak of the potato blight in Ireland during the autumn of 1845 forced Peel to reconsider the established Tory trade doctrine. Repealing the Corn Laws would make it cheaper for the Irish to purchase grain and would alleviate what was rapidly becoming a devastating famine, one that would eventually claim the lives of a million Irish and lead to the emigration of an even larger number.

In the early hours of 16 May 1846, supported by Conservatives loyal to Peel and by the pro-free trade opposition, the House of Commons passed the repeal of the Corn Laws. Peel quickly lost the support of enough members of his own party to be driven from office and replaced by the Whig leader Lord John Russell. The Conservatives would not secure a Parliamentary majority on their own again until the general election of 1874.
Although repeal did not by any means save the Irish from starvation, repealing the Corn Laws was clearly an appropriate policy response to the Famine. It also set in motion a gradual liberalization of trade across Europe that contributed to strong economic growth in the late 19th century.
The outbreak of World War I set globalization back. The trade restrictions enacted during the Great Depression and the remainder of the interwar period marked the end of the “first era of globalization.” It was not until after World War II and the gradual lifting of trade restrictions under the General Agreement on Tariffs and Trade (subsequently replaced by the World Trade Organization) and other trade agreements that the second, and current, era of globalization began and, with it, increased prosperity.
Trump’s efforts to sink free trade fly in the face of both economic theory and historical experience. Some economic sectors will, of course, be hurt by free trade—theory and history are clear on that point—however, there is overwhelming evidence that free trade has a net positive effect on the economy as a whole. This evidence has not deterred Trump from currying favor with those who have been—or believe themselves to have been–hurt by free trade and demand a return to protectionism. If Trump succeeds in rolling back free trade, he will make the United States—and the rest of the world—poorer.
Trump’s success may lead to a schism in the Republican party. The signs of such a split are easy to spot. High-profile Republicans, including both former presidents Bush, 2016 presidential candidate Senator Lindsey Graham (R-SC), and 2012 GOP nominee Mitt Romney, have declined to endorse Trump. Republicans should remember that after the Conservatives split over trade, they were not able to form a majority government for another 28 years.
Many of the Republicans opposing Trump do so because of his noxious views, rather than specifically because of free trade. Nonetheless, when Trump passes from the political stage (and sensible Republicans and Democrats can only hope that this will take place sooner rather than later), the split in the Republican Party may well endure. If it does, the Republicans’ troubles may leave them in the political wilderness for a long time to come.
Featured image credit: wheat field close up by Devanath. Public domain via Wikimedia Commons.
The post The Corn Laws and Donald Trump appeared first on OUPblog.

The influence of premodern theories about sex and gender
Have you ever wondered why women have such a hard time achieving equality with men in the church and the world? Or why intersex and transgender people have such a hard time being accepted as they are? Or why same-gender attraction still evokes visceral reactions among millions of straight people? Or why official theology barely understands the questions?
But there are answers! This is how one of them goes. In the ancient world, women were regarded as inferior versions of men. In the single continuum, “man,” there was a “gender slide” from more to less perfect. Women were men (as the language of Christian hymns and prayers still insists), but cooler, less rational versions of the male. This state of affairs is usually called the “one-sex model,” or the “one-sex continuum,” and appears in many guises. There was “male and female,” but the female was always a weaker version of the male.
By the seventeenth century, and with advances in anatomy and microscopy, the basis of biological differences began to be better understood, and the idea of two “opposite” sexes was born. Almost everyone reads this modern idea back into biblical and social history, without realizing its modern origin. Arguments persist about whether the two sexes are equal, equal but different, or unequal and different. The idea of opposite sexes has been popularized very recently by the notion of “complementarity.” Complementarity became a new argument for marginalizing the lives and loves of lesbian and gay people. If by nature we are made to complement the sex we are not, then only heterosexual behaviour, desire, and coupling are acceptable. But now the idea of existing in two opposite sexes, with individuals being either one or the other, has itself been challenged.
There have always been people who have known, and were known to be, straightforwardly neither men nor women. Sometimes they are called a “third sex.” In a more liberal climate, transgender and intersex people have found the courage to become more visible and vocal. In earlier times, when the binary of opposite sexes was less pronounced, it may have been easier for them to lead normal lives, and to be accepted for themselves.
Christian teaching about men and women, i.e., about gender, is a mix of one-sex and two-sex models. What Roman Catholic, Orthodox, and many Anglican and conservative evangelical Church leaders think about women becomes clear when they talk about why women can’t be ordained. They are “one-sexers.” They replicate the ancient view that women are imperfect, malformed men, so cannot represent the perfect male God and His Son. Yet the more liberal progressive churches that have women ministers, priests and bishops usually argue for it on the basis of there being two equal sexes, with the added theological gloss that their equality is something implanted by God. They are “two-sexers.”
Secular theories also run with two biological sexes, with gender providing the social context where we become women or men. Only in the last forty years or so has the sex/gender distinction weakened. It is now generally recognized that alleged sex and character differences have been much over-emphasized, while the distinction between biological and social influences is much more complex than previously thought.
There is a middle ground between one-sex and two-sex models. The ancient theory was right insofar as it asserted a common humanity; right to assume a single continuum running from male to female; wrong in attributing to powerful males a higher moral, intellectual and social position in a hierarchy where women, slaves and animals were below them.
The modern theory is right insofar as it asserts that women and men are equal in status and worth; right in asserting that basic human rights belong to people irrespective of gender (and much else); but wrong in making the distinction between two sexes into a separation; wrong in assuming the sexes are “opposites;” wrong in marginalizing everyone who doesn’t fit the assumed binary; wrong in inviting a huge exaggeration of sexual difference in the selling of children’s toys, cosmetics, shoes, clothing, and so on.
An adequate Christian theology of gender has its own version of a middle-ground, but this ground is Jesus Christ. Jesus Christ, not Adam and Eve, is the revealed image of God. Christians believe Jesus founded a new realm, variously called a new kingdom, a new creation, a new body, even a new or renewed humanity. In this realm, hierarchies of value, status, class and gender disappear, for “There is no longer Jew or Greek, there is no longer slave or free, there is no longer male and female.” These markers of fallen humanity have no place in the new community of love, justice and peace. The churches have much to offer the world when it thinks about gender, but they must first recover their own teaching about Jesus, believe it, and joyfully put it into practice.
Featured image credit: celebrating gender freedom by naturalflow. CC BY-SA 2.0 via Flickr.
The post The influence of premodern theories about sex and gender appeared first on OUPblog.

Twelve interesting facts about chocolate
Perhaps one of the most popular sweets around the world is chocolate. This versatile and delicious food can be enjoyed as anything from a warm drink to a crunchy bar, or even a dense, flourless cake. But whatever its form, while you are savoring each bite of chocolaty goodness, keep in mind that behind the sweet flavor is a long and dynamic history that has travelled across oceans and transcended cultural boundaries. In fact, the tale of the cacao bean is one filled with conquest, experimentation, and technological innovation, all on a global scale. It has been touched and moulded by everyone from sixteenth-century sailors to the workers of the industrial revolution. So, as you savor the sweet taste of history, learn about what it took to make your favorite food by reading these fun facts about chocolate:
1. Europeans first came into contact with chocolate in 1519, when conquistadors returning from the Aztec Empire brought it back to the Spanish court of Charles V. At this time, it was served as a luxurious beverage to only the highest social classes: royalty, military, long-distance traders, and Catholic clergy.
2. Along with chocolate, in the form of the cacao bean, the Spanish conquistadors brought back potatoes and tomatoes from their excursions in the New World.
3. The Spanish were quick to adopt cacao as an exotic alternative to the familiar coffee bean. Chocolate drinks were especially popular during fast days in the Catholic country, when the high levels of fat provided more sustenance than tea or coffee, and did not break the rules of the fast.

4. During the 1600s, an increased presence of coffeehouses and cafes created an opportunity for the lower classes to indulge in the chocolate drink.
5. It was not until the nineteenth century that the right technology made it possible for chocolate to be made into other products, like bars and sweets. At the turn of the twentieth century, chocolate was accessible and affordable to everyone. However, this increase in production tended to yield a decrease in quality as manufacturers decided to use cheaper ingredients like Forastero cacao beans.
5. Cacao is one of the few crops that has benefitted from a highly mechanized process, without which we would not have smooth melted chocolate. (This comes from a refining technique called “conching,” developed by the Swiss chocolatier Rodolphe Lindt.)
7. The distinction between luxury chocolate and ordinary chocolate that we have today began in the twentieth century when chocolate started being mass produced. Before then, it was always considered a luxury good. In fact, the differences between the two have much more to do with the marketing strategy, packaging, and price than with the quality of the chocolate itself.

8. The continued exchange of chocolate between the New World and Europe led to experimentation and new recipes that were quite different from the original, but better suited to a European taste. Cinnamon and nutmeg, among other more familiar spices, replaced chilli peppers and achiote. When the Europeans introduced cows to the New World, a hot chocolate recipe with milk was soon developed.
9. During the Enlightenment, chocolate drinks became popular in London for their taste, nutritional value, and ability to maintain clear thinking. People could remain productive and focused while drinking chocolate all day long, which is not the case when consuming alcoholic beverages.
10. Countries that grow cacao like Grenada, Ecuador, and Madagascar have recently started producing chocolate within their own borders, completely revolutionizing the traditional process of chocolate production. This is a way to maximize profits for their own cacao farmers, who are historically some of the most impoverished workers worldwide.
11. The term ‘single origin’ chocolate indicates that the cacao beans used to produce that product are not a haphazard combination but were sourced from one particular location. This term is seen as a marker of quality, but can in fact be used to describe anything from beans that were sourced within the same country to beans that came from the same plantation.
12. Today, countries like Thailand, India, and Australia that have no previous experience with the cacao bean are planting trees so that they too might have a stake in the globalized chocolate market.
Featured image credit: Chocolate desserts on sticks, CC0 via Public Domain Pictures.
The post Twelve interesting facts about chocolate appeared first on OUPblog.

July 5, 2016
The effects of patient suicide on general practitioners
Suicide is a major health problem. In England, around 5,000 people end their own lives annually – that is one death every two hours – with at least ten times that number of attempts, according to the Office for National Statistics. Suicide is a tragedy that is life-altering for those bereaved and can be an upsetting event for the community and local services involved. Our previous research demonstrated the:
Majority (over 90%) of patients who died by suicide had consulted their General Practitioner (GP) shortly before death;
Variation in risk assessment between professional groups and a lack of suicide risk assessment training in primary care;
Dilemma GPs faced when managing patients who were non-adherent to treatment;
Very real struggles experienced by GPs in their attempts to make sense of patient communication of suicidality, to get patients the treatment they need and to respect patient autonomy while fulfilling their professional responsibilities;
Concerns GPs expressed about the quality of primary care mental health service provision and difficulties with access to secondary mental health services;
Need for formal support and guidelines within primary care for GPs following patient suicide.
Do GPs want or need formal support following a patient suicide?
Although patient suicide is uncommon in a GP’s career – roughly one every 3-7 years per GP and six every ten years per GP practice – it is important to place appropriate emphasis on the effects of patient suicide on GPs. The role of the GP in this context includes suicide treatment and prescription, prevention, professional attendance at the scene of a suicide, comforting the bereaved, and the critical incident review following a patient suicide. GPs’ support requirements may differ following a patient’s death by suicide compared to death from other causes related to physical ill health, because GPs may see suicide deaths as preventable. Practices are increasingly exploring the use of critical incident reviews in primary care following patient suicides to highlight the lessons that may be learned to improve patient outcomes and reduce future suicides.
What did we do?
Having carefully co-developed interview schedules, we collected data to explore GPs’ views on how they are affected by patient suicide, and on the formal support available to them following the death of a patient by suicide, in order to provide findings relevant to primary care service providers and practitioners. The study used a mixed methods approach, involving data collection about patients who had died by suicide in the North West of England between 1st January 2003 and 30th June 2007. The GPs who took part in this study were aged between 31-67 years, three out of four were male, and their number of years in practice varied between 8-40. Two thirds of the GPs were based at urban practices and one third at rural practices. The majority of practices had two or more GPs.
What did we find?
Our findings suggest that the majority of GPs are affected by patient suicide. Those who were more affected by patient suicides tended to have fewer years in practice. On the other hand, many GPs who were not affected reported that they had accepted the psychological toll of patient suicide as a part of their profession. Most GPs we surveyed sought informal support from their peers and colleagues. An interesting finding was the apparent lack of formal support systems, and the varied responses from GPs about what encompasses support. This opens up an area for concern where formal support mechanisms may need to be put in place – or, where they are available, made more visible. Although GPs can also make use of generic medical support mechanisms for formal assistance (e.g. the British Medical Association and the National Counselling Service for Sick Doctors), the extent to which specific services are accessible to GPs working in primary care seems to be poor.
These findings are of interest to those who plan and provide support services for GPs dealing with the impact of patient suicides. More GPs are seeking legal advice after the suicide of a patient, and this adds further stress to their circumstances, driven by additional health service scrutiny. Although many GPs expressed that informal support systems through friends and colleagues were adequate, procedures and guidelines should be developed for those who may require professional counselling. Formal support guidelines should also be made available to offer greater mental health protection for GPs who are more at risk of experiencing psychological injury after the suicide of one of their patients. In addition to GPs, such procedures may be useful for Clinical Commissioning Groups (CCGs), those who plan services in primary care, and postgraduate and Continuing Professional Development educators. The recent structure of CCGs and the rapid development of GP postgraduate education, through the introduction of Practice Professional Development Plans, provide a great opportunity for improvements regarding formal support procedures for GPs. Further research should be undertaken to establish whether the implementation of such procedures is effective in supporting GPs who may be bereaved by a patient suicide.
Featured image credit: Doctor and patient by daizuoxin, iStockphoto.
The post The effects of patient suicide on general practitioners appeared first on OUPblog.

Shakespeare’s dramatic music
Whenever a public event requires a speech from Shakespeare to articulate the profundity of human experience, or to illustrate the cultural achievements of humankind (or perhaps Britain), there is a very good chance that someone will turn to Caliban:
Be not afeard. The isle is full of noises,
Sounds, and sweet airs, that give delight and hurt not.
Sometimes a thousand twangling instruments
Will hum about mine ears, and sometime voices
That if I then waked after long sleep
Will make me sleep again; and then in dreaming
The clouds methought would open and show riches
Ready to drop upon me, that when I waked
I cried to dream again.
(The Tempest, 3.2.138-46)
Quite what it means to foreground these lines or indeed this character is a question requiring its own article, but one striking effect of performing it at the 2012 London Olympics opening ceremony, for example, is how it places music at the heart of Shakespeare’s artistic vision. The Tempest is often described as Shakespeare’s most musical play, but in fact it is hard to find much of his writing that does not make use of music – be that practical performance or figurative image – in some way. For Shakespeare, music is a dramatic tool; a means of narration; a symbolic discourse of harmony and disharmony; even the basis for the occasional dirty joke.
Shakespeare, it seems, is not only a precise and nuanced user of music cues and song in his own work, but also a great provoker of music in others.
Richard III might not seem like the most musical of plays, lacking any songs or music other than the drums and trumpets used to announce the progress of battles and the entrances of monarchs. Yet even here, Shakespeare toys with his Elizabethan audience’s musical expectations over and over again by matching trumpet calls indicating the glorious entrance of a royal person with a series of distinctly un-kingly sights: a ‘sick’ and dying Edward IV helped onto the stage; a small boy, Edward V, with the ‘Lord Protector’ Richard looming ominously over him; the murderous Richard III himself ‘in his pomp’; and, finally, the more positive royal figure of Henry VII who nonetheless has to step over the body of the king that he has just killed to reach his crown. Music actually makes meaning in this play, asking early modern playgoers to consider what a monarch could – or should – look like.
I am regularly asked whether we have the original tune for a particular song or cue in Shakespeare. The answer is often ‘no’, but there are a reasonable number of likely original compositions preserved in texts of the period. Often these reveal Shakespeare taking a piece of music that his audience would already be familiar with – a popular ballad in Othello (4.3.38-55), or a fashionable, art-music ‘ayre’ in Twelfth Night (2.3.98-108) – and giving it a new twist. Thus, Desdemona’s ‘Song of Willow’ turns a male complaint about an inconstant lover into her articulation of female grief and male mistreatment, and Sir Toby’s ‘Farewell, Dear Heart’ reimagines an introverted, young man’s solo love song as a raucous duet that celebrates the ‘good life’ and taunts the apoplectic Malvolio who has just told him in no uncertain terms to be quiet. Like Richard III’s trumpets, both songs demonstrate Shakespeare’s sophisticated awareness of exactly how his first audiences would engage with a piece of music, and how he can shape those engagements to particular dramatic effect.
Shakespeare and the King’s Men also commissioned new compositions for some of his plays, like court lutenist Robert Johnson’s songs for The Tempest. Two of these, ‘Full Fathom Five’ and ‘Where the Bee Sucks’, survive in seventeenth-century sources. These songs demonstrate yet closer links between what are presumably Shakespeare’s words and Johnson’s music, for the lines claiming that Alonso has drowned and ‘suffer[ed] a sea-change | Into something rich and strange’ (1.2.403-4) are accompanied by a musical ‘change’, modulating into the dominant key. Shakespeare’s plays demonstrate a near-infinite capacity to accommodate new song settings and instrumental pieces into a performance, and indeed, the practice of adding or altering music for a production is one that Shakespeare and his company would surely have recognised. Nevertheless, Johnson’s bespoke settings indicate just how much dramatic subtlety is likely to have vanished along with the majority of now-missing music for Shakespeare’s plays.
All this is to say nothing of the dozens of operas, ballets, non-dramatic song cycles and incidental music suites inspired by Shakespeare’s text, and the thousands of pieces composed for use in productions over the last four centuries, recorded in Gooch and Thatcher’s Shakespeare Music Catalogue (OUP, 1991). Shakespeare, it seems, is not only a precise and nuanced user of music cues and song in his own work, but also a great provoker of music – and, often, a provoker of great music – in others.
Featured image credit: The Ambassadors, detail of globe, lute, and books, by Hans Holbein the Younger. Public domain via Wikimedia Commons.
The post Shakespeare’s dramatic music appeared first on OUPblog.

July 4, 2016
What should be done with Justice Scalia’s Supreme Court seat?
Justice Ruth Bader Ginsburg has publicly stated that the US Supreme Court does not function well with eight members. I disagree. Under present circumstances, it would be best for the country and the Court to abolish the seat left vacant by Justice Scalia’s death and to proceed permanently with an eight-member court.
The country is divided ideologically. The Court should reflect that divide.
There is nothing magical about the number nine nor is there anything magical about having an odd number of justices. The Constitution allows Congress to decide how many justices there will be. The number has changed at various times in American history and has often been an even number. The Court started with six justices and at one point had ten justices.
Congress and the President can set the number of justices at a number that is appropriate for today. With the current eight justices, the Court is now in rough ideological balance. That is the way it should be.
Keeping the current eight-member Court would mean no more 5-4 votes on contentious issues. Those issues would instead be resolved through the political process by elected officials rather than by unelected justices.
The Court has demonstrated that with eight members it can address important issues of constitutional law. Consider, for example, the Supreme Court’s recent decision in Puerto Rico v. Sanchez Valle. By a 6-2 vote, the Court concluded that it would violate the Double Jeopardy Clause of the Fifth Amendment for Puerto Rico to prosecute defendants for illegal gun sales when such defendants had already pled guilty to analogous federal charges. Because Congress is the “ultimate source” of the prosecutorial authority of the Commonwealth of Puerto Rico, it would constitute double jeopardy for Puerto Rican prosecutors to press charges after defendants admitted their guilt on equivalent federal gun sales charges.
This is an important constitutional decision which the Court reached with eight members. If the Court had divided 4-4, there would have been no decision but instead further debate and discussion about the Double Jeopardy Clause. Debate and discussion should be embraced, not shunned.
Consider in this context the Supreme Court’s recent 4-4 split in Texas v. United States, concerning the Deferred Action for Parents of Americans and Lawful Permanent Residents program (DAPA). The Court’s division means that advocates of that program must now advance that program through public hearings, in which all sides can be heard. This is a possibility advocates of DAPA should embrace, rather than bemoan.
Right after Justice Scalia’s death in February, I proposed that the President and Congress agree to give one of the living former justices a temporary recess appointment back to the Supreme Court. Since then, the Court has functioned with eight members, just as it has functioned in the past with an even number of justices.
The membership of the Court should be reduced now, before we know which party will control the White House and which party will control the Senate. This reduction would eliminate the Court as a political issue in the fall.
The Court should not be a cockpit of partisanship. The rule of law would be strengthened if we instead acknowledge the divisions in this country and keep the current eight-member Court which reflects and balances those divisions.
Featured image credit: United States Capitol Rotunda, Washington, D.C. by Ken Lund. CC-BY-SA-2.0 via Flickr.
The post What should be done with Justice Scalia’s Supreme Court seat? appeared first on OUPblog.

