Oxford University Press's Blog
October 21, 2015
Of honeymoons, hangovers, and fixed-term contracts
Companies care about the job satisfaction of their employees because it is in their own interest: dissatisfied workers perform poorly, are often absent, and impose hiring costs as they switch employers frequently. Managers, as well as management researchers, have agreed on the importance of job satisfaction ever since the Hawthorne experiments of the 1920s suggested that employees like attentive employers. It is, by contrast, a relatively new trend for politicians and economists to consider satisfaction measures as well, rather than sticking to GDP growth as the one and only measure of social progress. For instance, the British PM started a nationwide initiative to measure well-being in 2010 in order to reach a “government policy that is more focused not just on the bottom line, but on all those things that make life worthwhile.” One ‘thing’ that matters for people’s well-being in particular is employment, or, in the worst case, unemployment. In fact, people suffer from losing work more than from almost any other life event. As a consequence, workers fear unemployment even while they are still employed.
This raises the question of whether policy makers should prohibit firms from employing workers on contracts that are limited in time, since such contracts bring about more insecurity about future employment than permanent contracts do. Indeed, fixed-term employees often report not knowing what will happen when their fixed-term job ends, and such uncertainty generally lowers job satisfaction. Surprisingly, however, this doesn’t automatically mean that fixed-term employees are less satisfied at work than employees with a permanent contract. In fact, numerous studies report no negative effect of temporary employment on job satisfaction.
A statistical correlation often, but not always, points to a cause-and-effect relationship. The fact that people are more likely to die when retired than before doesn’t mean retirement in itself poses a risk of death. Unobserved (‘third’) factors often cause such a spurious correlation – or even veil a real connection. There may thus be many possible reasons why fixed-term employees are as satisfied with their jobs as permanent employees even though they feel insecure. One phenomenon appears particularly relevant – the so-called Honeymoon-Hangover Effect. When employees change jobs, they are extremely happy with their new work at the beginning. This honeymoon, like any other honeymoon, lasts only a short time. Satisfaction drops fast and dramatically – the hangover sets in. Because their jobs are shorter, fixed-term employees are more often still in the honeymoon phase when asked about job satisfaction than the permanently employed. Once this influence is subtracted from the difference in satisfaction between the two groups, it turns out that fixed-term employees are on average less satisfied than the permanently employed.

How can policy free fixed-term employees from the fear of unemployment? Banning fixed-term contracts could be a bad solution when companies need flexibility to adjust employment to the demand for their products and services. Employers argue that if they can’t part with employees, they won’t hire them at all. An alternative is to take away the bad consequences of unemployment. The idea originates from the Danish ‘Flexicurity’ model, which combines flexibility for firms with security for workers. On the security side, two ideas are mainly discussed, but neither gets much support from happiness researchers. First, high unemployment benefits may compensate for the misery of job loss. The problem here is that such a policy would be incredibly costly, given how much joblessness hurts and how weak the effect of money on well-being is. Second, training schemes and other measures aimed at improving employability can shorten periods of unemployment because they enable workers to find new jobs quickly. However, even people who know that they will find a new job quickly are quite unhappy when their current job is at risk. Evidently, one’s current job entails assets that cannot be replaced easily, such as the social relations with colleagues that have been established over time. It thus remains an important avenue for future research to answer the question of how flexibility and security can be combined for the benefit of both workers and companies.
Featured image credit: Office men women working by tpsdave. CC0 Public Domain via Pixabay.
The post Of honeymoons, hangovers, and fixed-term contracts appeared first on OUPblog.

From number theory to e-commerce
In this blog series, Leo Corry, author of A Brief History of Numbers, helps us understand how the history of mathematics can be harnessed to develop modern-day applications. The final post in the series takes a look at the history of factorization and its use in computer encryption.
The American Mathematical Society held its regular meeting in New York City in October 1903. The program announced a talk by Frank Nelson Cole (1861-1921) with the unassuming title of On the factorization of large numbers. In due course, Cole approached the blackboard and, without saying a word, started to multiply the number 2 by itself, step after step, sixty-seven times. He then subtracted 1 from the result. He went on to multiply, by longhand, the following two large numbers:
193,707,721 × 761,838,257,287.
Realizing that the two calculations agreed, an astonished audience burst into enthusiastic applause, while Cole returned to his seat, still without having said a word. One can only guess how satisfied he was to have shown his colleagues this little computational gem, discovered after much hard work.
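For readers who want to replay Cole’s afternoon of arithmetic in a few milliseconds, here is a minimal Python check (an illustration added for this reading, not part of the original post) confirming that the two calculations do indeed agree:

    # Cole's 1903 result: 2**67 - 1 equals the product of his two factors.
    mersenne_67 = 2**67 - 1
    product = 193_707_721 * 761_838_257_287
    print(mersenne_67)             # 147573952589676412927
    print(product == mersenne_67)  # True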

The number 2^67 – 1 is commonly known as the Mersenne number M67. The question of whether a given Mersenne number of the form 2^n – 1 is prime had attracted some attention since the seventeenth century, but it was only in the last third of the nineteenth century that Edouard Lucas (1842-1891) came up with an algorithm to test this property. The algorithm was improved in 1930 by Derrick Henry (Dick) Lehmer (1905-1991) and turned into a widely used tool for testing primality, the Lucas-Lehmer test. It is one thing, however, to know with the help of this test whether or not a certain Mersenne number is prime; it is a much more difficult task to find the factors of such a large number even when we know it to be composite. When asked in 1911 how long it had taken to crack M67, Cole reportedly answered: “three years of Sundays.” Little did he know how important and ubiquitous such calculations would become in the age of e-commerce.
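As a rough illustration of why primality is the easier half of the problem, here is a short Python sketch of the Lucas-Lehmer test (added for this reading; the historical algorithm is as described above, but this particular code is not from the post). Note that the test certifies M67 composite without revealing a single factor:

    def lucas_lehmer(p):
        """Return True if the Mersenne number M_p = 2**p - 1 is prime (p an odd prime)."""
        m = (1 << p) - 1          # M_p
        s = 4                     # seed of the Lucas-Lehmer sequence
        for _ in range(p - 2):
            s = (s * s - 2) % m   # s_{k+1} = s_k**2 - 2 (mod M_p)
        return s == 0             # M_p is prime iff the final term is 0

    print(lucas_lehmer(61))       # True:  M61 is prime
    print(lucas_lehmer(67))       # False: M67 is composite, but no factors are produced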
Almost a hundred years later, another remarkable factorization was achieved, this one involving much larger numbers. In 1997 a team of computer scientists, led by Samuel Wagstaff at Purdue University, factorized a 167-digit number, (3^349 – 1)/2, into two factors of eighty and eighty-seven digits respectively. According to Wagstaff’s report, the result required about 100,000 computer hours. Quite a bit more than in Cole’s story. Wagstaff had previously been involved in many other remarkable computations. For instance, in 1978 he used a digital computer to prove that Fermat’s last theorem (FLT) is valid for prime exponents up to 125,000.
Factorization results such as those of Cole and Wagstaff will at the very least elicit a smile of approval from anyone with a minimum of sympathy and appreciation for remarkable mathematical results. But when faced with the price tag (in terms of human time spent or computer resources used to achieve them), the same sympathetic listener (and certainly the cynical one) will immediately ask whether all that time was worth spending.
Central to mainstream conceptions of pure mathematics over the twentieth century was the idea that numerical calculation with individual cases is at best a preliminary exercise to warm up the mind and start getting a feeling for the situations to be investigated. Even today, many a mathematician proudly stresses his slowness at calculation and the mistakes he makes in restaurants when splitting a bill among friends. David Hilbert (1862-1943), one of the most influential mathematicians at the turn of the twentieth century, was clear in stating that from “the highest peak reached on the mountain of today’s knowledge of arithmetic” one should “look out on the wide panorama of the whole explored domain” only with the help of elaborate, abstract theories. He consciously sought to avoid the use of any “elaborate computational machinery, so that … proofs can be completed not by calculations but purely by ideas.”

But with the rise of electronic computing, a deep change affected the status of time-consuming computational tasks from the time of Hilbert and Cole to the time of Wagstaff, via Lehmer and up to our own day. If in 1903 Cole found it appropriate to remain silent about his result and its significance, in 1997 the PR department of Purdue University rushed to publish a press release announcing Wagstaff’s factorization result: “Number crunchers zero in on record-large number”. Wagstaff took care to stress to the press the importance of knowing the limits of our ability to perform such large factorizations, arguing that they are “essential to developing secure codes and ciphers.”
General perceptions about the need for, and the appropriate ways of, public scrutiny of science, its tasks, and its funding changed very much between 1903 and 1997, and this in itself would be enough to elicit different kinds of reactions to the two undertakings. But above all, it was the rise of e-commerce and the need for secure encryption techniques on the Internet that brought about a deep revolution in the self-definition of the discipline of number theory in the eyes of many of its practitioners, and in the ways it could be presented to the public. Whereas in the past this was a discipline that prided itself above all on its detachment from any real application to the affairs of the mundane world, over the last four decades it has turned into the showpiece of mathematics as applied to the brave new world of cyberspace security. The application of public-key encryption techniques, such as those based on the RSA cryptosystem, turned the entire field of factorization techniques and primality testing from an arcane, highly esoteric, purely mathematical pursuit into a coveted area of intense investigation with immediate practical applications, one expected to yield enormous economic gains to the experts in the field.
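To make the link between factoring and encryption concrete, here is a toy RSA round trip (a sketch added for this reading, using tiny textbook primes; real keys use primes hundreds of digits long precisely so that factoring the public modulus is out of reach):

    # Toy RSA key generation, encryption, and decryption (Python 3.8+ for pow(e, -1, phi)).
    p, q = 61, 53                      # secret primes; factoring n would recover them
    n = p * q                          # 3233, the public modulus
    phi = (p - 1) * (q - 1)            # 3120
    e = 17                             # public exponent, coprime to phi
    d = pow(e, -1, phi)                # 2753, the private exponent

    message = 65
    ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
    assert recovered == message
    print(ciphertext, recovered)       # 2790 65

Anyone who could factor n as readily as Cole multiplied his two numbers would recover the private exponent at once, which is why the practical limits of factorization matter so much to e-commerce.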
Featured image credit: Transmediale 2010 Ryoji Ikeda Data Tron 1 by Shervinafshar. CC BY-SA 3.0 via Wikimedia Commons.
The post From number theory to e-commerce appeared first on OUPblog.

October 20, 2015
Pressing Giles Cory
Giles Cory has the dubious distinction of being the only person in American history to be pressed to death by a court of law. It is one of the episodes in the Salem witch trials that has captured the American imagination. In a previous post, we explored why Cory was accused of witchcraft. Yet we know very few details of his death, and there is much confusion over why he suffered this horrible fate.
Contrary to popular notions, Cory was not pressed to death for failing to make a plea when he was charged with witchcraft. The court records confirm that he pleaded not guilty when first brought to trial on 9 September 1692. However, when asked the customary question of whether he would accept a trial “by the country” (that is, by a jury of his peers), Cory refused to speak. This brought his trial to a halt, and Giles was remanded to the Salem jail. Despite the efforts of family and friends to get him to reply to the court, he refused.
So, around noon on Monday, 19 September 1692, Cory was pressed to death, as the court literally tried to press an answer out of him. It was a medieval English legal custom but unique in the annals of American jurisprudence. We do not know exactly how or where he was pressed to death, or how long the ordeal lasted. Given that the previous day was the Sabbath, presumably the ordeal began on the morning of the 19th, and Cory lasted perhaps 3-4 hours before he died.
Lacking any description, and without other pressings to draw parallels from, we can assume that rocks were gradually piled upon the supine Cory. It is possible he was first covered with boards or an old door to distribute the weight more evenly and to keep the rocks from falling off the body of the octogenarian.

Presumably Cory was pressed to death somewhere near the Salem jail. Salem witch trials expert Marilynne Roach has recently suggested the most likely location was directly behind the jail, on a vacant lot then owned by Thomas Putnam. The parcel is on the east side of present-day Washington Street in Salem, across the street from the 1841 Essex County court house. Putnam, whose wife, daughter and serving girl were all afflicted, was a leading accuser in 1692. Furthermore, the day before Giles’ pressing, Putnam had written Judge Samuel Sewall to remind him of Cory’s role in the death of Jacob Goodell back in 1675.
The only real detail we have of his pressing was written in 1700 by Robert Calef, a Bostonian who was an outspoken critic of the witch trials. He wrote, “In pressing his Tongue being prest out of his Mouth, the Sheriff with his Cane forced it in again, when he was dying. He was the first in New-England, that was ever prest to Death.” We have no way of confirming the accuracy of this statement. We do know that Essex County Sheriff George Corwin was a controversial figure due to his role in the trials, including his seizure of the personal property of many of the accused.
Indeed, some people have wrongly suggested that Corwin’s seizures explain why Cory remained mute. In seventeenth-century England it was legal for sheriffs to seize the personal possessions of people charged with a felony (it was never legal for a sheriff to seize any real estate). And, due to a series of circumstances, such a law was also in effect in Massachusetts in 1692. However, Cory did plead not guilty, and Corwin did seize some of his personal possessions. So there was no need for Cory to remain mute to protect his assets for his heirs; the government had already seized everything it could. There would seem to be no reason for Cory’s actions, except the hope that he might slow the court in its effort to try and convict him. But surely once the pressing began, he must have known that this tactic had failed. Yet he continued with the ordeal to his death.
The most logical explanation for his behavior is a simple one. Giles Cory was a tough, difficult old man, who refused to give the court the satisfaction of convicting him of witchcraft. Supposedly his last words were “more weight.” While this tradition cannot be confirmed, it certainly is the sort of sentiment one might expect from Cory. By the time Giles was pressed to death, the Court of Oyer and Terminer had tried 28 people, including his own wife Martha Cory. All 28 had pleaded not guilty, yet all were found guilty, and all were sentenced to death. Cory knew the outcome of the trial, and decided to challenge the court and its proceedings in the only way he could.
The court saw this as a direct challenge to its authority, and as legal historian David Konig has pointed out, “no one who behaved defiantly or impudently toward the court escaped with his or her life.” Pressing Cory to death was thus a choice made to reinforce the court’s authority, as can be seen by comparison with a different decision made by a Massachusetts court two years earlier.
In 1690, the Court of Assistants ignored pirate William Coward when he stood mute and refused to make a plea—his way of arguing that the Massachusetts court did not have jurisdiction on the high seas. Despite the lack of a plea, his trial still took place before the assistants the following day. Coward was convicted of piracy and soon hanged for his crime. Future Salem witchcraft judges Corwin, Hathorne, Richards, and Sewall were among the assistants who tried Coward, so they certainly knew this precedent and ignored it.
The court did not have to perform this gruesome and ultimately fatal torture to continue with Cory’s trial. Rather, it wanted to make an example of Giles for challenging the Court of Oyer and Terminer. A stubborn man faced a cruel court, thus creating a storied American tragedy.
Image Credit: Photo courtesy of Emerson W. Baker.
The post Pressing Giles Cory appeared first on OUPblog.

Open Access Week – continuing on the journey
That time of the year is upon us again – Strictly Come Dancing is on the TV, Starbucks is selling spiced pumpkin lattes, and the kids are getting ready for a night of trick-or-treating. It can mean only one thing: Open Access Week has arrived. And just like those Strictly contestants, we are urged to remember we are all on a “journey” towards OA, where Open Access Week is now a welcome refuge point for us to rest, refuel, discuss, and reflect on the ground we’ve covered.
Of course, our OA caravan has been dragging itself slowly across the plains of academia for many years. But since 2012, the UK has been travelling at breakneck speed, aided by the rocket-boosting Finch Report. We have spent the last three years establishing a proper framework for the UK, moving us all forward in a decisive direction, with a new raft of policy requirements setting the scene. Funders, publishers, universities, and others have been working flat-out to avoid any nasty bumps and make this as smooth a ride as possible.
Many people have told me that the most significant milestone passed since the Finch Report was the announcement last year that all articles and conference papers submitted to the next Research Excellence Framework (REF) exercise would be subject to an OA eligibility test. That move has been described as a game-changer because it has raised the stakes considerably. The REF matters a great deal to UK research, and universities must now deliver OA for the bulk of papers their academics produce or face difficulties in getting them assessed for future funding.
This has led to a flurry of activity within UK universities to deliver on these requirements. Universities have spent the last 18 months formulating or updating their institutional OA policies, educating their researchers, putting in place processes, and updating their technology. Software developers and vendors, as well as players like Jisc and Sherpa, have been hard at work developing new technical solutions to support funders’ policies. And publishers and learned societies have burst into life with new activities and initiatives to support their authors.
This is undeniably a good thing. These efforts to deliver on funder requirements will, in time, lead to substantial increases in the proportion of UK research that is made freely and openly available. Indeed it has already led to significant improvements in university and publisher processes and technologies, and has generated a real energy and vigour among publishers, universities, academics, and technologists to try out fresh and original approaches.
Naturally, it hasn’t all been straightforward, and our announcement before the summer that we’d be offering additional flexibility to universities seeking to implement the REF policy was widely welcomed. That flexibility was offered in recognition of how tricky it can be to meet the OA challenge. I strongly believe that policies need to work within the grain of what’s reasonable and achievable, and I think the REF policy will do this while delivering those transformative culture changes within our universities that are sorely needed.
All of which brings us up to date, but one important question remains – where exactly are we heading? Well, wherever it is, we’re headed there together. Many, if not most, countries and funders have now adopted OA policies. And in tandem with this, a broadly settled mixture of ‘green’ and ‘gold’ approaches to OA is now recognised, and a degree of stability has been reached. In rough terms, this means authors either make up-front payments in order to publish their work with a Creative Commons Attribution licence, or alternatively they post copies of their accepted manuscripts into repositories (often with a 12-month embargo on free access).
This position is likely to continue for a while, and that’s absolutely fine; we need a degree of stability. But as uptake increases in the longer term, it will be difficult to make those author-side charges operate sensibly at scale. The additional funding for OA fees doesn’t stretch far enough at the moment to cover all papers, and with research funding becoming tighter, this is a situation that is likely to continue for some time. That’s why the offsetting schemes being introduced by publishers are so welcome; they recognise that university libraries, not authors, are bearing these up-front costs and that these costs cannot be divorced from ongoing questions around library subscription budgets.
As for repository postings, it’s often argued that these can only work at scale if the supplied content is sufficiently reduced in value to protect the appeal of the subscription offer. This means that publishers go to great lengths to restrict what can be posted, when it can be made available, and what readers can do with the paper once it’s available. These restrictions are broadly recognised as necessary by funder policies, but from a public interest perspective they are problematic, and they are hard for authors and universities to manage. While author posting will play a very important role in delivering OA for a number of years to come, I personally do not believe that accepted manuscripts, available only after an embargo and with complex restrictions, represent the right solution for the longer term. We must overcome these restrictions if OA is to be sustainable into the future.
One way we might solve this problem is to work together to find ways to move beyond the drawbacks of high author-side payments and restricted repository postings. I’d like to highlight two examples of different business models for OA which show that innovation is possible. The first is the PeerJ membership model, where authors can publish as many papers as they like in PeerJ in return for an annual membership fee. The second is the Open Library of Humanities model, where libraries, not authors, support the publishing costs through their membership fees, spreading the cost of publishing more widely and providing OA without author-side charges. Both models demonstrate that innovation is possible and should be welcomed.
In sum, we now have a relatively settled policy position for Open Access in the UK, and we’re continuing on our journey apace. However, with funding for research remaining tight and extremely dependent on a successful case being made that research works for the public interest, we’ll need to continue to be innovative and inventive if we want to sustain academic publishing into the longer term. Open Access Week might be a necessary and welcome stopping place along the road; we should all use this opportunity to regroup, to innovate, and to move forward together.
The opinions and other information contained in this blog post and comments do not necessarily reflect the opinions or positions of Oxford University Press.
Featured image: Open padlock. (c) tkacchuk via iStock.
The post Open Access Week – continuing on the journey appeared first on OUPblog.

Building momentum for women in science
I recently attended an event at Johns Hopkins School of Medicine “Celebrating 200+ Women Professors”. The celebration of these women and their careers inspired me, especially as a “young” woman and an assistant professor. It was also humbling to hear about their successes in spite of the many challenges they faced solely due to their sex. Johns Hopkins School of Medicine was founded in part by a woman, the philanthropist Mary Elizabeth Garrett, who contributed to the endowment to establish the school with the unprecedented condition that women be admitted on the same terms as men. I cannot imagine what it would have been like at the beginning of the 20th century for Florence Sabin, the first woman faculty member at Johns Hopkins School of Medicine. She was truly an exceptional physician scientist, who became the first woman to be promoted to full professor at a medical college, as well as the first woman elected to the National Academy of Sciences. It was not for another 50 years that the second woman faculty member at Johns Hopkins, Helen Taussig, famous for her work in “blue baby” syndrome, was promoted to professor. In fact, it took more than 100 years to promote the first 100 women faculty to professor at Johns Hopkins School of Medicine. Surely those first 100 women experienced challenges that we would be ashamed of today, but they paved the way for all of us who have chosen a career in academic science and medicine. The momentum is building, and in the past 13 years more than 100 more women have been promoted to professor at Johns Hopkins School of Medicine. This is surely something to celebrate, but also a time to reflect on where we are now and how we can continue the momentum.

It was because of these women pioneers in academia and their perseverance to succeed in the face of adversity that it is possible for women like me to become professors in science and medicine. The women’s movement of the 1960s and 1970s made girls like me, who grew up in the 1980s, truly believe they could be anything they wanted to be. While it is true that with a lot of hard work a girl can achieve just about anything, what I was naïve about back then was the belief that I would not experience discrimination or bias against women. Almost 90 years after the University of Pittsburgh hired its first woman faculty member, I received my doctorate in a department where there was only one woman professor (who became the dean of the school). Half of my classmates in my Pre-medicine program at Pennsylvania State University and half of my classmates in Clinical Pharmaceutical Sciences at Pitt were women. I would go on to do a post-doctoral fellowship at the National Institute of Mental Health, where the majority of post-docs in our branch were women. In my current institute, 73% of the post-docs are women. So why are there so few women faculty members? I am one of only 3 women, all of assistant professor rank, out of 25 faculty members (12%) at the Lieber Institute for Brain Development, a private institute affiliated with Johns Hopkins. Across the entire Johns Hopkins School of Medicine, only 132 (22%) of the 590 tenured faculty are women. My experience reflects national statistics. According to Nature Neuroscience, while 56% of undergraduate degrees in science and engineering are conferred to women, and 50% or more of science doctorates and post-doctoral fellowships are awarded to women, only 32% of faculty positions are held by women. There have been many articles written about why women are likely to quit at the post-doc to principal investigator transition, as well as articles about the things that keep women out of science. I want to talk about what we can do to keep women in the science pipeline.
Organizations like the Society for Neuroscience, which is holding its annual meeting this week, recognize that “science is stronger with the expertise and input of diverse voices, including women” and have created programs like IWiN (Increasing Women in Neuroscience). Such programs are imperative considering the state of affairs. Of the top 10 global universities for neuroscience and behavior, only one has a female department chair, and only 36% of the chief editor positions of the top ten neuroscience journals (by citation ranking) are held by women. I believe that we need to continue empowering young girls and teens, as the A Mighty Girl website does; it is dedicated to raising smart, confident, and courageous girls. Maya Angelou said, “Courage is the most important of all virtues, because without courage you can’t practice any other virtue consistently.” Women in science and medicine must practice courage to become resilient, which is the only way to sustain this career.
One strategy that I would like to focus on is peer mentoring. A problem for women rising through the ranks of academia is the lack of good mentors, especially more senior women mentors who can serve as advocates for younger women. At NIMH, I joined a group created by female post-docs called the Sister Scientist Club, founded on the concepts in Every Other Thursday: Stories and Strategies from Successful Women Scientists by Ellen Daniell. We met over lunch, helped each other navigate the NIH, and served as sounding boards to practice things like negotiating (usually with our male post-doc advisors) or interviewing for faculty positions. I found these sessions to be absolutely essential to my ability to have courage, be resilient, and practice the confidence that I would need in the next few years while making the big jump to my first faculty appointment. Female friendships and collaborations are not a novel concept, but the idea of “Femships” has been in the media recently for reducing gender inequalities by women helping each other fulfill their professional potential. I am now in a women faculty book club that serves a similar role as I navigate the next steps of my journey in academic science and medicine. Madeleine Albright, the first female U.S. Secretary of State, makes an important point by saying, “There is a special place in hell for women who don’t help other women.” I currently have young women research associates in my lab whom I encourage to support and motivate each other. My hope is that they experience less bias and discrimination in their careers than I and my peers have experienced, and that by the time my toddler son’s generation begins its career, we will no longer need to refer to a “woman” faculty member. As Drew Faust said, “I’m not the woman president of Harvard, I’m the president of Harvard.”
Feature Image: Rat brain cells by GerryShaw. CC BY-SA 3.0 via Wikimedia Commons.
The post Building momentum for women in science appeared first on OUPblog.

Admiral Nelson in letters
This year, 21 October marks the 210th anniversary of the Battle of Trafalgar. This naval battle pitted the British Royal Navy, led by Admiral Lord Nelson, against the combined French and Spanish fleets led by French Admiral Pierre-Charles Villeneuve. The most decisive victory of the Napoleonic Wars, the battle ensured Nelson’s place as one of Britain’s greatest war heroes. However, it was also to be his last. Nelson was fatally shot by a sharpshooter while standing on the deck of his ship, dying shortly after hearing the news that the enemy had surrendered.
We take a look back at his lifetime through a less conventional route – reading the letters that were sent about him. Through these letters we can see what people might have thought of him, from his war record to his affair with Emma Hamilton.
Featured image credit: Nelson Silhouette by Garry Knight. CC-BY 2.0 via Flickr.
Timeline background image credit: Trafalgar Battle – 21st of October 1805 – Situation at 17h by unknown. Public Domain via Wikimedia Commons.
The post Admiral Nelson in letters appeared first on OUPblog.

Ten facts about the French horn
Although there are several different bell-shaped brass instruments, from trumpets to tubas, it’s the French horn that people are talking about when they mention “the horn.” Known for its deep yet high-ranging sound, the French horn is an indispensable part of any orchestra or concert band.
Before the double horn was invented, the “single horn” was primarily used in orchestras and bands. The most popular was the German horn, which emerged in the late nineteenth century and included a slide-crook used to tune the instrument. It was also notable for its much larger bell, which made it much wider than any subsequent incarnation of the French horn.
Although the horn is an ancient instrument, the French horn wasn’t introduced until the seventeenth century. Its first known appearance was in the comedy-ballet La Princesse d’Elide in Paris in 1664.
It’s not actually one piece. Like most instruments, the French horn comes in pieces because of its awkward shape. Even when the horn is made in one piece, it’s fairly easy to cut off the bell to create a “screw bell,” which makes the instrument easier to transport.
Although the horn has never waned in popularity, it’s been reshaped several times over the centuries. Crooks, which are pieces of tubing inserted at the mouthpiece, were added in the eighteenth century so that horn players could avoid transposing as they played, which can be tedious.
Musicians don’t just place their hands in French horns to hold them in position. The hand actually affects the pitch of certain notes, meaning the musician uses more than breathing technique and lip tension to stay in tune.
The most common type of French horn, usually employed in orchestras and bands, is actually called a “double horn.” This type of horn employs a fourth valve, which is used to play different notes through a separate set of tubes. This is what gives the French horn the widest range of notes out of any brass instrument.
Although a brass instrument, the French horn does not actually figure into most brass bands.
The horn is often called the most difficult instrument to play. Although it can hit such a wide range of notes, it’s incredibly easy for a musician to crack notes or play flat, making it an even more impressive feat to truly master the French horn.
When uncoiled, the horn is 12 to 13 feet long. That’s a lot of tubing!
Not all French horns have been used for musical purposes. Once called a “hunting horn,” it’s the same instrument you see red-coated European aristocrats carrying on horseback in period dramas.
The above are only ten facts from the extensive entry in Grove Music Online. Did we leave out any fun facts about the French horn?
Featured image: French Horn. Photo by Wolfgang Lonien. CC BY 2.0 via Flickr.
The post Ten facts about the French horn appeared first on OUPblog.

Shale oil and gas in the United States [infographic]
The growth of shale oil and gas production in the United States over the last decade has been nothing short of phenomenal. Already the premier natural gas producer, the United States is poised to surpass Saudi Arabia and Russia as the largest oil producer and will likely become a net exporter of both oil and gas within a decade or more thanks to unconventional resource extraction.
Our infographic below illustrates some of the key facts and statistics on shale resources in the United States, including some of the reasons that the United States represents a benchmark by which countries similarly endowed with shale resources can be evaluated.
Download a JPEG or PDF version of the infographic.
Featured image credit: Pumpjacks by Arne Hückelheim. CC BY-SA 3.0 via Wikimedia Commons.
The post Shale oil and gas in the United States [infographic] appeared first on OUPblog.

October 19, 2015
“The Created Agincourt in Literature” extract from Agincourt
In the six hundred years since it was fought, the battle of Agincourt has become exceptionally famous and has generated a huge and enduring cultural legacy. Everybody thinks they know what the battle was about, but is the Agincourt of popular image the real Agincourt, or is our idea of the battle simply taken from Shakespeare’s famous depiction of it? Anne Curry explores the legacy of the battle through the centuries from 1415 to 2015. The following is an extract describing some of the literary interpretations and depictions of Agincourt.
The inspiration of Agincourt on literary creativity in England did not end with Drayton’s ‘Ballad of Agincourt’ (c. 1606). In 1819 another long poem was written, The Lay of Agincourt, presented as if a genuine ballad of the period. The reviewer in the Leeds Mercury on 31 July certainly thought it gave ‘an accurate description of that celebrated battle’. Its ‘author’ was a wandering minstrel who came to Ewood Hall near Halifax. The owner of the hall, Lord Clifford, identified him as the Master Bard Llewellin, recalling that he had heard him at Windsor in the presence of the late king (i.e. Henry V), his nobles and beauties, where he ‘tore our spirits with the lay of Agincourt’s embattled day’. Offered an opportunity to reprise his poem, the Bard claimed to speak from bitter experience since the French had captured him, but he had known how to retaliate:
I bore their taunts with fit disdain
And sang then Poictiers’ battle strain
And Edward’s feats on Cressy’s plain
Till sham’d they left the place
(stanza 44)
Shakespeare is a major influence on the poem both in Henry’s wanderings around the camp and his pre-battle speech. The French spend the night carousing: ‘Already in their feasting eye | They see the redcross prostrate lie.’ New stories are introduced. The sire de Dampierre had given his gauntlet to Lord Willoughby at the surrender of Harfleur: they meet again at the battle. Davy Gam and two other Welsh came to defend the king. The Bard invokes the aid of St David. The choice of a Welsh bard as narrator may have contributed to the idea of major Welsh involvement in the battle. That said, there is also emphasis on English success. The contrived poem omits the killing of the prisoners completely.

Over the nineteenth and twentieth centuries many novels for adults and children have taken Agincourt as their theme. G. P. R. James’s Agincourt (1844) was not deemed amongst the best of his works, according to The Spectator of 23 November of that year, but it set the tone for a romantic as well as heroic approach. Like several subsequent works it followed the fortunes of English participants as they joined the army and journeyed to France, mixing fictional and real characters and ensuring there was love interest. A similar approach colours Katherine Phipps’s The Sword of De Bardwell: A Tale of Agincourt (1881). G. A. Henty’s At Agincourt: A Tale of the White Hoods of Paris (1897) was in the swashbuckling mode of his other adventure stories. Over his eighty or so books, Henty sought to recall British war successes from the Norman Conquest onwards, aimed at inculcating manliness in the teenage boy. His hero of Agincourt, Guy Aylmer, was aged 16. Henty’s works were very popular, selling 25 million copies by 1914. The teenage archer hero—usually Welsh—has been a common feature in twenty-first-century works.
Bernard Cornwell’s Azincourt (2008) also follows the fortunes of an archer, although this time an Englishman, Nicholas Hook. Cornwell took the name and those of other archers from the campaign lists presented in my Agincourt: A New History, but the French girlfriend is invented. Modern taste is reflected by the greater role assigned to women in this previously ‘boy’s own’ world. Martha Rofheart’s Cry God for Harry (1972) has some chapters narrated by women. The Agincourt Bride of Joanna Hickson (2012) (‘Her beauty fuelled a war. Her courage captured a king’) also has a female narrator and views the battle through news brought to Princess Catherine. In Laurel O’Donnell’s The Angel and the Prince (2014), the heroine voices concerns about the forthcoming battle but takes things into her own hands by dressing as a knight to fight. Perhaps because of the dominance of Shakespeare’s Henry V, there has been no attempt as yet to produce a film which does not use his play, but there has been talk of Cornwell’s novel reaching the screen. To date, no Dr Who episode has been situated at the battle but the Doctor was there: in ‘The Talons of Weng-Chiang’ (1976, part 5) he tells his companion, ‘It’s the wrong time my dear, you’d have loved Agincourt’. To fill the gap, a story has been written. The Doctor’s granddaughter Susan is disguised as Simon, a page: it is her task to distract Henry V from the rockets the Doctor is launching to make the rain fall. We now know the reason for the English victory!
Featured image credit: Battle of Agincourt (1415) by Chroniques d’Enguerrand de Monstrelet (early 15th century). Public domain via Wikimedia Commons.
The post “The Created Agincourt in Literature” extract from Agincourt appeared first on OUPblog.

World Statistics Day: a reading list
On 20 October 2015, the global mathematical community will celebrate World Statistics Day. Supported and promoted by the United Nations, this day marks the achievements and ongoing work of statisticians whose data influences decision-makers and policies that affect millions of people. In honour of this, we present a reading list of OUP books and journal articles that have helped to advance the understanding of these mathematical concepts.
Analyzing Wimbledon, by Franc Klaassen and Jan R. Magnus
The world’s most famous tennis tournament offers statisticians insight into examining probabilities. This study attempts to answer many questions, including whether an advantage is given to the person who serves first, whether new balls influence gameplay, or whether previous champions win crucial points. Looking at a unique data set of 100,000 points played at Wimbledon, Klaassen and Magnus illustrate the amazing power of statistical reasoning.
‘Asking About Numbers: Why and How,’ by Stephen Ansolabehere, Marc Meredith, and Erik Snowberg, published in Political Analysis
How can designing quantitative standardized questions for surveys yield findings that can later be linked to statistical models? The authors offer a full analysis of why quantitative questions are feasible and useful, particularly for the study of economic voting.
The Credit Scoring Toolkit: Theory and Practice for Retail Credit Risk Management and Decision Automation, by Raymond Anderson
This textbook demonstrates how statistical models are used to evaluate retail credit risk and to generate automated decisions. Aimed at graduate students in business, statistics, economics, and finance, the book introduces likely situations where credit scoring might be applicable, before presenting a practical guide and real-life examples of how credit scoring can be implemented on the job. Little prior knowledge is assumed, making this textbook the first stop for anyone learning the intricacies of credit scoring.
‘Big data and precision’ by D. R. Cox, published in Biometrika
Professor D.R. Cox of Nuffield College, Oxford, explores issues around big data, statistical procedure, and precision, in addition to outlining a fairly general representation of the accretion of error in large systems.
The New Statistics with R: An Introduction for Biologists, by Andy Hector
This introductory text to statistical reasoning helps biologists learn how to manipulate their data sets in R. The text begins by explaining the classical techniques of linear model analysis and consequently provides real-world examples of its application. With all the analyses worked in R, the open source programming language for statistics and graphics, and the R scripts included as support material, Hector presents an easy-to-use textbook for students and professionals with all levels of understanding of statistics.
‘Housing Wealth and Retirement Timing’ by Martin Farnham and Purvi Sevak, published in CESifo Economic Studies
Having found that rising house prices cause people to revise their planned retirement age, Farnham and Sevak explore movements in the housing market and the implications for labour-supply.
An Introduction to Medical Statistics, by Martin Bland
Every medical student needs to have a firm understanding of medical statistics and its uses throughout training to become a doctor. The fourth edition of An Introduction to Medical Statistics aims to do just that, summarising the key statistical methods by drawing on real-life examples and studies carried out in clinical practice. The textbook also includes exercises to aid learning, and illustrates how correctly employed medical data can improve the quality of research published today.
‘Getting policy-makers to listen to field experiments’ by Paul Dolan and Matteo M. Galizzi, published in Oxford Review of Economic Policy
On the premise that the greater use of field experiment findings would lead to more efficient use of scarce resources, this paper from Dolan and Galizzi considers what could be done to address this issue, including a consideration of current obstacles and misconceptions.
Stochastic Analysis and Diffusion Processes, by Gopinath Kallianpur and P. Sundar
Building the basic theory and offering examples of important research directions in stochastic analysis, this graduate textbook provides a mathematical introduction to stochastic calculus and its applications. Written as a guide to important topics in the field and including full proofs of all results, the book aims to render a complete understanding of the subject for the reader in preparation for research work.
‘Statistical measures for evaluating protected group under-representation’ by Joseph L. Gastwirth, Wenjing Xu, and Qing Pan, published in Law, Probability & Risk
The authors explore the conflicting inferences drawn from the same data in the cases of People v. Bryant and Ambrose v. Booker. Based on their full analysis, they argue that when assessing statistics on the demographic mix of jury pools for legal significance, courts should consider the possible reduction in minority representation that can occur in the peremptory challenge proceedings.
Bayesian Theory and Applications, edited by Paul Damien, Petros Dellaportas, Nicholas G. Polson, and David A. Stephens
Beginning by introducing the foundations of Bayesian theory, this volume proceeds to detail developments in the field since the 1970s. It includes an explanatory chapter for each conceptual advance followed by journal-style chapters presenting applications, targeting those studying statistics at every level.
‘Representative Surveys in Insecure Environments: A Case Study of Mogadishu, Somalia,’ by Jesse Driscoll and Nicholai Lidow, published in Journal of Survey Statistics and Methodology
How do we get accurate statistics from politically unstable areas? This paper discusses the challenges of conducting a representative survey in Somalia and the opportunities for improving future data collection efforts in these insecure environments.
Stochastic Population Processes: Analysis, Approximations, Simulations, by Eric Renshaw
Talking about random processes in real life is tricky, since the processes considered here have no memory: they depend only on the current state of the system and not on its previous history. This book is driven by the underlying Kolmogorov probability equations for population size. It is the first title on stochastic population processes that focuses on practical application. It is not intended as a text for pure-minded mathematicians who require deep theoretical understanding, but for researchers who want to answer real questions.
Image Credit: Statistics by Simon Cunningham. CC BY 2.0 via Flickr.
The post World Statistics Day: a reading list appeared first on OUPblog.

