August 1, 2014
How Georg Ludwig became George I
On 1 August 1714, Queen Anne died. Her last days were marked by political turmoil that saw Robert Harley, Earl of Oxford, and Henry St. John, Viscount Bolingbroke, struggle to assert their authority. However, on her deathbed Anne appointed the moderate Charles Talbot, duke of Shrewsbury, as the last ever lord treasurer. The queen’s death prompted a transition from the Stuart dynasty to the Hanoverian, and the succession of Georg Ludwig, elector of Hanover, as King George I of Great Britain and Ireland.

King George I by Sir Godfrey Kneller, 1714. Public domain via Wikimedia Commons.
We now look back on the Succession as one of decisive dynastic change — one that came about as a result of the Act of Settlement (1701). This act set out how, following Anne’s death, the throne was to be inherited by the children of Sophia, Dowager Electress of Hanover, and granddaughter of James I and VI. The act had named Sophia and “the heirs of her body being protestants” as next in the line of succession precisely because William III — King from 1689 to 1702 — had been anxious to ensure that Protestant monarchy would be preserved within Britain. William’s invasion of Britain in 1688 had been, in part, justified on the basis that the ruling monarch, James II, had put himself at odds with the political nation through his espousal of “popery and arbitrary government.” Moreover, the birth of James’s son in June 1688 raised the prospect of a permanent reversion to dynastic Catholicism into the next generation.
Although William and his wife Mary had quickly established themselves in their new roles, they failed in one particular respect. They did not produce an heir. Queen Mary’s premature death in 1694 was widely mourned, but William did not rush to re-marry and ensure that the succession would be perpetuated into the next generation. Instead, attention turned to Mary’s sister, Anne, the younger of James II’s Protestant daughters. Anne was the victim of frequent gynaecological misfortune. She gave birth to many children but few thrived. In 1689 she and her husband, George, prince of Denmark, had a son, William, Duke of Gloucester. His birth was widely seen as an indication of divine approval for the recent changes on the throne, but William was a sickly child. In July 1700 he contracted smallpox and died shortly afterwards. It was his death that pushed William into formalizing succession arrangements through the Act of Settlement.

Sophia of Hanover by her sister, Louise Hollandine of the Palatinate, c. 1644. Public Domain via Wikimedia Commons.
Naming Sophia as heir to William and Anne, on the basis of her Protestantism, excluded more than fifty closer blood relations who happened to be Catholic. Thus, it is easy to see why a shift from the Stuart line to the Hanoverian seemed momentous. Yet, it’s also important to remember that contemporaries talked not in terms of the Hanoverian but of the Protestant succession, stressing continuity over change. Likewise, following its arrival in England in September 1714, the new dynasty was keen to emphasise its Stuart ancestry. This can be seen in the refurbished state apartments at Hampton Court where visual representations of the Hanoverians stood alongside portraits of James I and VI, his wife Anne of Denmark, and their daughter, Elizabeth of Bohemia.
Sophia, the matriarch of the new Hanoverian dynasty, had married the youngest son of the cadet branch of the dukes of Brunswick in 1658. The fortunes of her husband, Ernst August, had risen rapidly. Through a combination of marriage, negotiation, and luck he was able to acquire the duchy of Calenberg-Göttingen and ensure that his son, Georg Ludwig — the future George I — would inherit the Duchy of Celle. In 1692 Hanover was granted electoral status, so joining an elite club with the right to elect the Holy Roman Emperor. On his father’s death in 1698, Georg Ludwig therefore inherited a major German state; three years later his status was raised further with the prospect of succession to the British thrones. In addition, Georg was an accomplished commander who served as allied supreme commander for a period during the War of the Spanish Succession that raged in Europe between 1702 and 1713/14.
During Anne’s reign (1702-14), it was unclear whether the Act of Settlement had, in fact, resolved the succession question. Support for the exiled Stuarts persisted while Anne was unwilling to allow her Hanoverian relations to come to England to represent their interests personally. The Hanoverians did, however, find staunch supporters among members of the whig party. Polemicists such as George Ridpath worked hard to promote the Hanoverian cause. Protestant dissenters also tended to support the Hanoverians. On 1 August 1714 the dissenting minister Thomas Bradbury was preaching in his meeting house in Fetter Lane, when — alerted to Anne’s death by a handkerchief dropped from the gallery — he claimed the honour of being first to pray publicly for the new king.
George I was proclaimed king without much trouble on 1 August 1714. The plans, long in preparation, were quickly put into action. A Regency council was formed, made up of whigs and some Hanover tories. Hans Kaspar von Bothmer, one of George’s Hanoverian ministers, co-ordinated the transition. The new king was sufficiently relaxed to spend several weeks sorting out governance arrangements for his German lands before departing for London. He stopped to hold talks in the United Provinces en route and only arrived at Greenwich in late September.
Having spent a day meeting key figures, George I was transported into London. A long procession of coaches followed the royal party, soldiers lined the route, and cannon fire accompanied the new king. In the immediate aftermath of their arrival George and his family were highly visible on the London social scene, seeking to emphasize their qualities and advantages over the Stuart, or Jacobite, pretenders. They were clearly Protestant, they were numerous (ensuring that the succession was secure for the immediate future), and they seemed willing to adapt. It is worth remembering that George I was 54 years old when he became king in August 1714. He had behind him a long political and military career and could easily have arrived in Britain set in his ways. Instead, in the coming years he demonstrated a willingness to work hard and take on new responsibilities—qualities that would make George a very successful immigrant.
Dr. Andrew C. Thompson teaches history at Queens’ College, Cambridge, and is the author of Britain, Hanover and the Protestant Interest (2006) and George II: King and Elector (2011). His article on the Politics of the Hanoverian Succession was recently published in the Oxford Dictionary of National Biography, alongside two further essays — on Literary Responses to the Succession (by Abigail Williams) and Legacies of the Hanoverians (by Clarissa Campbell Orr) — to mark the tercentenary.
The Oxford Dictionary of National Biography is a collection of 59,102 life stories of noteworthy Britons, from the Romans to the 21st century. The Oxford DNB is freely available via public libraries across the UK, and many libraries worldwide. Libraries offer remote access, allowing members free access from home (or any other computer), 24 hours a day. You can also sample the ODNB through its changing selection of free content: the podcast, a topical Life of the Day, and historical people in the news via Twitter @odnb.
Subscribe to the OUPblog via email or RSS.
Subscribe to only history articles on the OUPblog via email or RSS.
The post How Georg Ludwig became George I appeared first on OUPblog.










The Queen whose Soul was Harmony
In 1701, one year before Princess Anne succeeded to the throne, musicians from London traveled to Windsor to perform a special ode composed for her birthday by the gifted young composer Jeremiah Clarke. The anonymous poet addressed part of his poem to the performers, taking note of Anne’s keen interest in music:
Portrait of Anne of Great Britain by Charles Jervas, 1702-1714, Royal Collection, public domain via Wikimedia Commons.
With song your tribute to her bring,
Who best inspires you how to sing;
None better claims your lays than she
Whose very soul is Harmony.
O happy those whose art can feast
So just, and so refined a taste.
As the poet evidently knew, Anne’s just and refined taste was shaped by her own musical experiences. Her music teachers included Francesco Corbetta, the leading guitarist in Europe, and Giovanni Battista Draghi, the harpsichord player who composed the first setting of Dryden’s “Song for St. Cecilia’s Day, 1687.” Henry Purcell wrote the music for her wedding and for other occasions at her court; when he died, still in his early thirties, his widow dedicated a posthumous collection of his keyboard pieces to the Princess, thanking her for her “Generous Encouragement of my deceas’d Husband’s Performances in Musick, together with the great Honour your Highness has don that Science in your Choice of that Instrument, for which the following Compositions were made.”
I have paid particular attention to music written for the often misunderstood Queen, a musician and lover of the fine arts. The four examples I offer here are especially rich and complex.
We begin with an excerpt from Purcell’s last substantial work, an ode for the sixth birthday of William, Duke of Gloucester, Anne’s only child to survive infancy. The political situation at this moment was complex. After the armed coup of 1688, which deposed Anne’s father, James II, and replaced him with her sister Mary and her brother-in-law William, the two sisters had quarreled. Their estrangement continued until Mary’s death in 1694, and although William went through the motions of a reconciliation, his relationship with Anne was edgy at best. In praising the little Duke, the poet paid court to Anne in language that might easily be read as praising the Princess at the expense of the King:
She’s great, Let Fortune Smile or Frown,
Her Virtues make all Hearts her own:
She reigns without a Crown.
Evidently aware that the last line might be offensive to the King’s supporters, Purcell set it only once and surrounded it with two longer settings of the previous line—“Her Virtues make all Hearts her own”—a safer expression of Anne’s growing popularity. He devotes eleven measures to the line about Anne’s virtues, stretching it out with extensive melismatic treatments of the word all, disposes of the line about reigning without a crown in only six measures, and then returns to the words of the penultimate line for another eighteen measures. His setting thus alters the rhetoric of the poem, moving what had been a climactic and cadential line in the poem into a much less prominent position. Purcell had good reasons to look forward to the accession of Anne, who knew more about music than her predecessors, but he had far too much tact to crown her prematurely.
Henry Purcell, excerpts from Who can from Joy Refrain? Performers: Bradford Gleim, baritone; Teresa Wakim and Brenna Wells, sopranos; Jesse Irons and Megumi Stohs Lewis, violins; Peter Sykes, harpsichord; Sarah Freiberg, cello.
Our next example comes from the birthday ode of 1701. At this moment, Anne was just emerging from six months of mourning for her son Gloucester, who had died a few days after his eleventh birthday. In a touching and delicate aria sung by the tenor Richard Elford, who soon became Anne’s favorite singer, the words express the hope that she might bear another child.
In her brave offspring still she’ll live,
Nor must she bless our age alone;
But to succeeding ages give,
Heirs to her virtues, and the throne.
After an innocent string ritornello in B-flat major, the vocalist enters in d minor; Clarke’s wistful expression of the hope for more heirs to Anne’s virtues thus delicately acknowledges her sorrow for the lost Gloucester. The contrast with earlier birthday odes, in which composers saluted Gloucester with martial fanfares, is striking.
Jeremiah Clarke, excerpt from Let Nature Smile, birthday ode for Princess Anne (1701). Performers: Owen McIntosh, tenor; Jesse Irons and Megumi Stohs Lewis, violins; Sarah Darling, viola; Peter Sykes, harpsichord; Sarah Freiberg, cello.
Our third example is an anthem composed by John Blow for the thanksgiving service at St. Paul’s cathedral in 1704, celebrating the Duke of Marlborough’s victory in the Battle of Blenheim. The Bible reading for the day tells the story of the prophetess Deborah, who sent her general Barak to defeat the Canaanites. As a married non-combatant who ruled her nation, Deborah was a close biblical analogue for Anne, and one detail in the song’s description of the battle matched the events at Blenheim with eerie accuracy: “The river of Kishon,” sings the prophetess, “swept them away,” and at the end of the recent battle, at least 2,000 French cavalrymen had drowned in the Danube. In constructing his anthem, Blow carefully rearranged a few selected verses from the recommended chapter. After one soloist sings verse 21 of the biblical story, the description of the river, the other joins him in verse 13, celebrating Deborah’s “dominion over the mighty” in a canonic duet involving several hair-raising dissonances, after which the first singer, again alone and safely back in triadic harmonies, declaims verse 31, which prays that all the Lord’s enemies will perish.
John Blow, excerpt from Awake, awake, utter a song (1704). Performers: Owen McIntosh and Marcio de Oliveira, tenors; Peter Sykes, organ; Sarah Freiberg, cello.
We end, as we must, with Handel, whose music the Queen clearly appreciated: she awarded him a generous pension of £200 a year (roughly £40,000 in modern money). Handel’s “Serenata” for the queen’s birthday in 1713 celebrates the impending Treaty of Utrecht, ending a long war on favorable terms. The text, by the Whig poet Ambrose Philips, has seven stanzas, each ending with the same couplet:
The Day that gave great Anna Birth,
Who fix’d a lasting Peace on Earth.
In his first stanza, the poet asks the sun, the “Eternal Source of Light divine,” to “add a lustre to this day,” and for this aria, Handel featured Richard Elford, Anne’s favorite singer in her Chapel Royal, and wrote a trumpet obbligato for John Shore, a versatile and inventive musician who had served the queen and her late husband for years. At thanksgiving services during Anne’s reign, Shore and Elford often performed the prominent parts for high tenor and trumpet in Purcell’s Te Deum and Jubilate, so Handel was honoring the traditions of the Chapel Royal by using them as soloists, by writing in the same key (D major), and by composing a canon between the voice and the trumpet that imitates Purcell’s compositional practice. Like the other composers, he was evidently confident that the queen’s musical ear would allow her to hear and appreciate the compliment he was paying to English music.
George Frideric Handel, aria from Eternal Source of Light Divine (1713). Performance: Jason McStoots, tenor; Robinson Pyle, trumpet; Dorian Bandy and Emily Dahl, violins; Anna Griffis, viola; Peter Sykes, harpsichord; Denise Fan, cello.
Three hundred years ago, on 1 August 1714, Queen Anne died at Kensington Palace in London.

James Anderson Winn is William Fairfield Warren Professor of English at Boston University. His six earlier books include Unsuspected Eloquence (1981), a groundbreaking history of the relations between poetry and music; John Dryden and His World (1987), a prize-winning biography; and The Poetry of War (2008), praised by one reviewer as a book “for anyone who cares about war and truth.” His new book, Queen Anne: Patroness of Arts, includes 23 musical examples, each of which is printed in full score; a companion website allows the reader to listen to performances of each of the excerpts, many of them not heard since Queen Anne’s time.
The post The Queen whose Soul was Harmony appeared first on OUPblog.











Independence, supervision, and patient safety in residency training
Since the late nineteenth century, medical educators have believed that there is one best way to produce outstanding physicians: put interns, residents, and specialty fellows to work in learning their fields. After appropriate scientific preparation during medical school, house officers (the generic term for interns, residents, and specialty fellows) need to jump into the clinical setting and begin caring for patients themselves. This means delegating to house officers the authority to write orders, make important management decisions, and perform major procedures. It is axiomatic in medicine that an individual is not a mature physician until he has learned to care for patients independently. Thus, the assumption of responsibility is the defining principle of graduate medical education.
To develop independence, house officers receive major responsibilities for the care of their patients. They typically are the first to evaluate the patient on admission, speak with the patients on rounds, make all the decisions, write the orders and progress notes, perform the procedures, and are the first to be called should a problem arise with one of their patients. Such responsibility allows house officers not only to develop independence but also to acquire ownership of their patients — the sense that the patients are theirs, that they are the ones responsible for their patients’ medical outcomes and well-being. Medical educators view the assumption of responsibility as the factor that transforms physicians-in-training into capable practitioners.
By National Cancer Institute Public domain via Wikimedia Commons.
Independence and responsibility are not given to house officers cavalierly. Rather, they are earned by residents who show themselves to be mature and capable. Responsibility is typically provided in “graded” fashion — that is, junior house officers have much more circumscribed responsibilities, while more experienced house officers who have accomplished their earlier tasks well are advanced to positions of greater responsibility. The more a resident has progressed, the more independence that resident receives.
The assumption of independence and responsibility comes at different rates for different house officers. Advancement to positions of greater responsibility occurs relatively quickly in cognitive fields like neurology, pediatrics, and internal medicine. There, assistant residents in their second or third year receive decision-making authority even for very sick individuals. Among these fields, house officers in pediatrics are generally monitored more closely because of the fragility of their patients, particularly babies and toddlers. Advancement occurs more slowly in procedural fields, such as general surgery, obstetrics and gynecology, and the surgical subspecialties. In these fields, technical proficiency is so important that residents have to wait many years, sometimes until they are chief residents, to perform certain major operations. The degree of independence afforded house officers also depends on the traditions and culture of individual hospitals. At community hospitals, where private physicians are in charge of their own private patients, house officers often receive too little responsibility. At municipal and county hospitals, where charity patients predominate and teaching staffs are often small, house officers can easily receive too much.
The assumption of responsibility does not mean there is no supervision of house officers. Quite the contrary. House officers are accountable to the chief of service, they have regular contact with attending physicians, and chief residents keep an extremely close eye on the resident service. Moreover, someone more senior is typically present or, if not physically present, immediately available. Thus, interns are closely watched by junior residents, junior residents by senior residents, and senior residents by the chief resident. One generation teaches and supervises the next, even though these generations are separated only by a year or two. Backup and support are available for all residents from attending physicians, consultants, and the chiefs of service. The gravest moral offense a house officer can commit is not to call for help.
From the perspective of patient safety, it may seem that patients should be seen only by experienced physicians and surgeons. However, medical educators have recognized all along that this is not a viable option. Medical education carries a dual responsibility: ensuring the safety of patients seen during the training process, and the safety of the future patients who will be cared for by those undergoing training today. Every physician needs to gain clinical experience, and every physician faces a day of reckoning when he practices medicine independently for the first time — that is, without anyone looking over his shoulder or immediately available for help. The only choice medical educators have is to control the circumstances in which this will happen. Should house officers gain experience and develop independence within the structured confines of a teaching hospital, where help can readily be obtained, or must this occur afterward in practice, at the potential expense of the first patients who present themselves?
Thus, maximizing safety in graduate medical education is a complex task, for the needs of both present and future patients must be taken into account. The graded responsibility that the residency system provides house officers, coupled with careful supervision of their work, has been developed to maximize professional growth among trainees while at the same time maximizing the safety of patients entrusted to them for care. The system is not perfect, but no one in the United States or anywhere else has yet come up with a better one, and it continues to serve the public well.
Kenneth M. Ludmerer is Professor of Medicine and the Mabel Dorn Reeder Distinguished Professor of the History of Medicine at the Washington University School of Medicine. He is the author of the forthcoming Let Me Heal: The Opportunity to Preserve Excellence in American Medicine (1 October 2014), Time to Heal: American Medical Education from the Turn of the Century to the Era of Managed Care (1999), and Learning to Heal: The Development of American Medical Education (Basic Books, 1985).
The post Independence, supervision, and patient safety in residency training appeared first on OUPblog.










Colostrum, performance, and sports doping

By Martin Luck
A recent edition of BBC Radio 4's On Your Farm programme spoke to a dairy farmer who supplies colostrum to athletes as a food supplement.
Colostrum is the first milk secreted by a mother. Cow colostrum is quite different from normal cow’s milk: it has about four times as much protein, twice as much fat, and half as much lactose (sugar). It is especially rich in the mother’s antibodies (IgG and IgM), providing the newborn calf with passive immunity before its own immune system gets going.
People who take colostrum believe it has health and performance benefits. It’s said to reduce muscle recovery time after intense training, maintain gut integrity against the heat stress of exercise, and assist recovery from illness and surgery. The radio programme spoke to cyclists, rugby players, footballers, runners, and others who are convinced of its value and prepared to pay the significant premium that colostrum commands over ordinary milk.
Unfortunately, the scientific evidence for these effects is rather poor, especially on the health side. Nutritional and clinical scientists who have reviewed the research literature report a lack of well-designed and reliable trials. There is some evidence for enhanced speed and endurance and for increased strength in older people undertaking resistance training, but many studies show equivocal or unconvincing results. So for the moment, it’s probably safest to describe the benefits of colostrum consumption as anecdotal.
Colostrum undoubtedly contains some hormones and growth factors in higher amounts than in normal milk. Levels of insulin and IGF-1 can be eight times higher in colostrum, especially during the first few hours after calving. Growth hormone is also present at this time, and disappears later. Other hormones, including prolactin, cortisol, vitamin D and a range of growth factors (hormones controlling cell division and maturation), also occur at relatively high amounts in colostrum.

Colostrum cakes
These hormones all have well known biological effects in the body, but finding them in colostrum doesn’t necessarily mean that they will be active when it is consumed in the diet. There are two main reasons for this. Firstly, protein hormones like insulin and IGF-1 get broken down in the gut and are unlikely to be absorbed in an active state. (If the reverse were true, diabetics could take insulin in pill form rather than injections.) Secondly, naturally-secreted hormones generally work in concentration-related ways, appearing in the blood as repeated pulses with which their receptors become coordinated. This pattern is unlikely to be replicated by pouring colostrum over the breakfast cornflakes.
Nevertheless, the possibility that colostrum is a source of potentially performance-enhancing bioactive materials has been considered by the World Anti-Doping Agency. Many hormones and growth factors, including insulin, IGF-1, cortisol, and growth hormone, appear on its list of prohibited substances. So could colostrum make athletes fall foul of the regulations? The WADA website advises that although colostrum is not banned, its growth factor content “could influence the outcome of anti-doping tests” and its consumption is not recommended.
Aside from efficacy, colostrum poses a wider ethical question: when does a natural food product become an artificial supplement? At one extreme, it might be feasible to extract the active ingredients and take them as a training aid or performance enhancer. At the other, perhaps as recognised by WADA, one might happen to consume colostrum as a food without intending to benefit unduly from it. Somewhere in between would be the deliberate consumption of colostrum knowing that it contains potentially beneficial components.
But then why is that different from, say, increasing one's intake of protein or energy to support a higher level of athletic performance? Neither whey protein nor glucose, nor for that matter caffeine or bananas, appears on WADA's prohibited list, yet they all have their place in the training and performance regimes of many athletes and sportsmen and women.
Identifying hormones as discrete, potentially bioactive chemical components of food seems to encourage the drawing of a false line along this continuum. No one condones cheating or the gaining of unfair advantage, but it is not clear why one food product should be restricted when another is not, especially when it is available to all and has no identified side effects.
But this brings us back to the question of efficacy: does colostrum really work? If it could be shown that it does, and if it were made widely available as a food item, there is no doubt that athletes would use it. This might make the doping authorities review their position, although it could mean, of course, that no one is really advantaged (just as being tall brings no gain when playing basketball against others selected for their height).
To find out whether colostrum really does work, many more well-designed, high-quality, double-blind, placebo-controlled trials would be necessary. Such investigations are expensive and difficult, and few organisations will have the expertise or resources to devote to something which, as a completely natural product, is unlikely to bring commercial gain. But this is the problem with many food supplements and so-called superfoods (and partly explains why health food shops abound in the high street).
In the end, people use food supplements because they believe them to work, not because there is much reliable evidence that they do. And perhaps this, in turn, means that there is no legitimate reason for banning their use.
Based at the University of Nottingham as a Professor of Physiological Education, Martin Luck is the author of Hormones: A Very Short Introduction. He was awarded a National Teaching Fellowship by the Higher Education Academy in 2011.
The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday, subscribe to Very Short Introductions articles on the OUPblog via email or RSS, and like Very Short Introductions on Facebook.
Image credit: Colostrum cakes, by Surya Prakash.S.A., CC-BY-SA-3.0 via Wikimedia Commons
The post Colostrum, performance, and sports doping appeared first on OUPblog.










July 31, 2014
The month that changed the world: Friday, 31 July 1914
July 1914 was the month that changed the world. On 28 June 1914, Archduke Franz Ferdinand was assassinated, and just five weeks later the Great Powers of Europe were at war. But how did it all happen? Historian Gordon Martel, author of The Month That Changed The World: July 1914, is blogging regularly for us over the next few weeks, giving us a week-by-week and day-by-day account of the events that led up to the First World War.
By Gordon Martel
Although Austria had declared war, begun the bombardment of Belgrade, and announced the mobilization of its army in the south, negotiations to reach a diplomatic solution continued. A peaceful outcome still seemed possible: a settlement might be negotiated directly between Austria and Russia in St Petersburg, or a conference of the four ‘disinterested’ Great Powers in either London or Berlin might mediate between Austria and Russia.
The German chancellor worried that if Sir Edward Grey succeeded in restraining Russia and France while Vienna declined to negotiate it would be disastrous; it would appear to everyone that the Austrians absolutely wanted a war. Germany would be drawn in, but Russia would be free of responsibility. ‘That would place us in an untenable situation in the eyes of our own people’. He instructed the ambassador in Vienna to advise Austria to accept Grey’s proposal.
In Vienna at 9 a.m. Berchtold convened a meeting of the common ministerial council, explaining that the Grey proposal for a conference à quatre was back on the agenda and that the German chancellor was insisting that this must be carefully considered. Bethmann Hollweg was arguing that Austria’s political prestige and military honour could be satisfied by the occupation of Belgrade and other points, while the humiliation of Serbia would weaken Russia’s position in the Balkans.
Berchtold warned that in such a conference France, Britain, and Italy were likely to take Russia’s part and that Austria could not count on the support of the German ambassador in London. If everything that Austria had undertaken were to result in no more than a gain in ‘prestige’, its work would have been in vain. An occupation of Belgrade would be of no use; it was all a fraud. Russia would pose as the saviour of Serbia – which would remain intact – and in two or three years they could expect the Serbs to attack again in circumstances far less favourable to Austria. Thus, he proposed to respond courteously to the British offer while insisting on Austria’s conditions and avoiding a discussion of the merits of the case. The ministers agreed.
The British cabinet also met in the morning to consider France’s request for a promise of British intervention before Germany attacked. The cabinet divided into three factions: those who opposed intervention, those who were undecided, and those who wished to intervene. Only two ministers, Grey and Churchill, favoured intervention. Most agreed that public opinion in Britain would not support them going to war for the sake of France. But opinion might shift if Germany were to violate Belgian neutrality. Grey was instructed to request – from both Germany and France – an assurance that they would respect the neutrality of Belgium. They were not prepared to give France the promise of support that it had asked for; one of them concluded ‘that this Cabinet will not join in the war’.
Grey wired to Berlin to ask whether Germany might be willing to sound out Vienna, while he sounded out St Petersburg, on the possibility of agreeing to a revised formula that could lead to a conference. Perhaps the four disinterested Powers could offer to Austria to undertake to see that it would obtain ‘full satisfaction of her demands on Servia’ – provided that these did not impair Serbian sovereignty or the integrity of Serbian territory. Russia could then be informed by the four Powers that they would undertake to prevent Austrian demands from going to the length of impairing Serbian sovereignty and integrity. All Powers would then suspend further military operations or preparations.

Declaration of war from the German Empire 31 July 1914. Signed by the German Kaiser Wilhelm II. Countersigned by the Reichs-Chancellor Bethmann-Hollweg. Public domain via Wikimedia Commons.
Germany’s response was that it could not consider such a proposal until Russia agreed to cease its mobilization. In Berlin at 2 p.m. the drohende Kriegsgefahr (‘imminent danger of war’) was announced. At 3.30 p.m. Bethmann Hollweg instructed the ambassador in St Petersburg to explain that Germany had been compelled to take this step because of Russia’s mobilization. Germany would mobilize unless Russia agreed to suspend ‘every war measure’ aimed at Austria-Hungary and Germany within twelve hours. The clock was to begin ticking from the moment that the note was presented in St Petersburg.
At 4.15 p.m. Conrad, the chief of the Austrian general staff, telephoned the office of the general staff in Berlin to explain the Austrian position: the emperor had authorized full mobilization only in response to Russia’s actions and only for the purpose of taking precautions against a Russian attack. Austria had no intention of declaring war against Russia. In other words, Russia could mobilize along the Austrian frontier and Austria could match this on the other side. And there the two forces could wait, without going to war.
This prospect terrified Moltke. He replied immediately that Germany would probably mobilize its forces on Sunday and then commence hostilities against Russia and France. Would Austria abandon Germany? Conrad asked if Germany thus intended to launch a war against Russia and France and whether he should rule out the possibility of fighting a war against Serbia without coming to grips with Russia at the same time. Moltke told him about the ultimatums being presented in St Petersburg and Paris, which required answers by 4 p.m. the next day.
At 6.30 p.m. the Kaiser addressed a crowd of thousands gathered at the pleasure gardens in front of the imperial palace. He declared that those who envied Germany had forced him to take measures to defend the Reich. He had been forced to take up the sword but had not ceased his efforts to maintain the peace. If he did not succeed ‘we shall with God’s help wield the sword in such a way that we can sheathe it with honour’.
In London and Paris they continued to hope that a negotiated settlement was possible. Grey suggested that Russia cease its military preparations in exchange for an undertaking from the other Powers that they would seek a way to give complete satisfaction to Austria without endangering the sovereignty or territorial integrity of Serbia. Viviani, the French premier and foreign minister, agreed. He would tell Sazonov that Grey’s formula furnished a useful basis for a discussion among the Powers who sought an honourable solution to the Austro-Serbian conflict and to avert the danger of war. The formula proposed ‘is calculated equally to give satisfaction to Russia and to Austria and to provide for Serbia an acceptable means of escaping from the present difficulty’.
In St. Petersburg that evening the German ambassador, in a private audience with Tsar Nicholas, warned that Russian military measures might already have produced ‘irreparable consequences’. It was entirely possible that the decision to mobilize when the kaiser was attempting to mediate the dispute might be regarded by him as offensive – and by the German people as provocative. ‘I begged him…to check or to revoke these measures’. The Tsar replied that, for technical reasons, it was not now possible to stop the mobilization. For the sake of European peace it was essential, he argued, that Germany influence, or put pressure on, Austria.
In Paris the German ambassador was to ask the French government if it intended to remain neutral in the event of a Russo-German war. An answer was required within 18 hours. In the unlikely case that France agreed to remain neutral, it was to hand over the fortresses of Verdun and Toul as a pledge of its neutrality. The deadline by which France must agree to this demand was set for 4 p.m. the next day.
In St. Petersburg, at 11 p.m., the German ambassador presented the 12-hour ultimatum to Sazonov. If Russia did not abandon its mobilization by noon Saturday Germany would mobilize in response. And, as Bethmann Hollweg had already declared, ‘mobilization means war’.
Gordon Martel is a leading authority on war, empire, and diplomacy in the modern age. His numerous publications include studies of the origins of the first and second world wars, modern imperialism, and the nature of diplomacy. A founding editor of The International History Review, he has taught at a number of Canadian universities, and has been a visiting professor or fellow in England, Ireland and Australia. Editor-in-chief of the five-volume Encyclopedia of War, he is also joint editor of the longstanding Seminar Studies in History series. His new book is The Month That Changed The World: July 1914.
Subscribe to the OUPblog via email or RSS.
Subscribe to only history articles on the OUPblog via email or RSS.
The post The month that changed the world: Friday, 31 July 1914 appeared first on OUPblog.

Barry, Bond, and music on film
Twenty-seven years ago, on 31 July 1987, James Bond returned to the screen in The Living Daylights, with Timothy Dalton as the new Bond. The film marked a notable departure in musical style, as composer John Barry decided that the film needed a new sound to match this reinvented Bond and his love interest, a musician with dangerous ties. To celebrate the anniversary, here is a brief extract from The Music of James Bond by Jon Burlingame.
In the script, Bond is caught up in a complex plot involving high-ranking Soviet intelligence officer Koskov (Jeroen Krabbe) who is supposedly defecting to the West. Koskov’s girlfriend, Czech cellist Kara Milovy (Maryam d’Abo), is duped into helping him escape his KGB guards. A Greek terrorist named Necros (Andreas Wisniewski) then supervises his “abduction” from England and transport to the Tangiers estate of an American arms dealer (Joe Don Baker). Eventually Bond and Kara find themselves at a Soviet airbase in Afghanistan, where they meet a Mujahidin leader (Art Malik) who helps 007 thwart the plot.
Because the early portions of the story take place in Czechoslovakia and Austria, The Living Daylights crew shot for two weeks in Vienna, including all of the scenes where Kara is performing on her cello. Director John Glen recalled conferring with Barry about the classical music that would be heard in the film. “We listened to various pieces before we chose what we were going to use,” Glen said. “Obviously we needed something where the cello was featured strongly.” (They ended up with Mozart, Borodin, Strauss, Dvořák and Tchaikovsky.) They recorded the classical selections with Gert Meditz conducting the Austrian Youth Orchestra and then filmed the ensemble, using the prerecorded music as playback on the set.
Maryam d’Abo was filmed “playing” the cello during several of these scenes. “I started taking private lessons a month prior to the film,” she recalled. “I just learned the movements. They basically soaped the bow so there wasn’t any sound [from the instrument]. It was hard work; I could have done with a couple more weeks of lessons. They demanded a lot of strength. No wonder cellists start when they are eight years old.” The solo parts heard in the film were played by Austrian cellist Stefan Kropfitsch.

The Living Daylights Film Poster (c) MGM
The actress, as Kara, “performs” with the orchestra in several scenes, notably at the end of the film when Barry himself is seen conducting Tchaikovsky’s 1877 Variations on a Rococo Theme and Kara is the soloist. It was filmed on October 15, 1986, at Vienna’s Schönbrunn Palace. Recalled Glen: “It was very unusual for John—unlike a lot of other people who liked to appear in movies, John had never asked before—but on that film, he asked if he could appear. At the time, it struck me as a bit strange. It was almost a premonition that this was going to be his last Bond. I was happy to accommodate him, and he was eminently qualified to do it.”
In fact, Barry had done this once before, appearing on-screen as the conductor of a Madrid orchestra in Bryan Forbes’s Deadfall (1968). On that occasion, he was conducting his own music (a single-movement guitar concerto that was ingeniously written to double as dramatic music for a jewel robbery occurring simultaneously with the concert). This time, he was supposed to be conducting the “Lenin’s People’s Conservatoire Orchestra.”
D’Abo socialized with Barry in London, when the unit was shooting at Pinewood. (She later realized that she had already appeared in two Barry films: Until September and Out of Africa.) “John was there, working on the music,” she said. “He was just a joy to be around. I remember seeing him and having dinner with him and [his wife] Laurie, and John being so excited about writing the music. He was so adorable, saying ‘Your love scenes inspire me to write this romantic music.’ John was such a charmer with women.”
Jon Burlingame is the author of The Music of James Bond, now out in paperback with a new chapter on Skyfall. He is one of the nation’s leading writers on the subject of music for film and television. He writes regularly for Daily Variety and teaches film-music history at the University of Southern California. His other work has included three previous books on film and TV music; articles for other publications including The New York Times, Los Angeles Times, The Washington Post, and Premiere and Emmy magazines; and producing radio specials for Los Angeles classical station KUSC.
The post Barry, Bond, and music on film appeared first on OUPblog.

The Odyssey in culture, ancient and modern
Homer’s epic poem The Odyssey recounts the 10-year journey of Odysseus from the fall of Troy to his return home to Ithaca. The story has continued to draw people in since its beginning in an oral tradition, through the first Greek writing and integration into the ancient education system, the numerous translations over the ages, and modern retellings. It has also been adapted to different artistic mediums from depictions on pottery, to scenes in mosaic, to film. We spoke with Barry B. Powell, author of a new free verse translation of The Odyssey, about how the story was embedded into ancient Greek life, why it continues to resonate today, and what translations capture about their contemporary cultures.
Visual representations of The Odyssey and understanding ancient Greek history
Why is The Odyssey still relevant in our modern culture?
On the over 130 translations of The Odyssey into English
Barry B. Powell is Halls-Bascom Professor of Classics Emeritus at the University of Wisconsin, Madison. His new free verse translation of The Odyssey was published by Oxford University Press in 2014. His translation of The Iliad was published by Oxford University Press in 2013. See previous blog posts from Barry B. Powell.
The post The Odyssey in culture, ancient and modern appeared first on OUPblog.

Pseudoscience surplus
We are besieged by misinformation on all sides. When this misinformation masquerades as science, we call it pseudoscience. The scientific tradition has methods that offer a way to get accurate evidence and decrease the chance of misinformation persisting for long. The application of these rules marks the difference between science and pseudoscience. Perhaps more importantly, accepting these rules allows us to admit what we do not yet know, and avoids the pomposity too often associated with the notion of scientific authority.
We are easy prey for pseudoscience. We are natural believers, especially in things that we would like to be true. This belief may be fostered by trusting web surfing. We come to believe that our children can improve their scholastic performances by gulping up fishy pills or other improbable supplements. We would like to be more intelligent and show off our skills in solving puzzles, have better memory and absorb volumes of material effortlessly, and to flaunt our astuteness and acumen at parties. To reach these goals by long hours of swotting is a daunting prospect, so we jump at the idea of a quick fix and are prepared to pay for it.
Take the simplistic dichotomy between the two brain hemispheres that informs a series of training programmes. Such programmes are based on the popular assumption that our brains have a nerdy left hemisphere, which acts as a rigorous accountant, opposed to a creative, hippie half, the right hemisphere (which usually needs to be awakened).
Newsmakers fuel belief in tall tales by running uncritical stories advertising outlandish methods and ignoring their obvious flaws. So we can blame the journalists: easy target. However, when we scientists engage with the public, do we really do any better? We are now all desperate to engage the public; our institutions push us to branch out and reach out, and we get brownie points if we do so. This activity too often translates into a scientist going to the media saying “I have nothing to say, and I want to say it on TV.” It sometimes seems that it is the engagement itself that is valued, independently of what we actually say.
There is nothing wrong with not being interested in science, but if you are, there are nowadays plenty of opportunities to indulge your curiosity. Science festivals are springing up in every city. However, the idea that simply discussing science publicly can counter misinformation is naïve. I posit that, more often than is advisable, scientists themselves promulgate pseudoscientific thinking, so even science festivals may be counterproductive. Engaging with the public should push scientists to show the evidence and praise scientific methods. We should not abuse the position to dominate by authority.
The Royal Society’s motto ‘Nullius in verba’ is Latin for, roughly, ‘take nobody’s word for it’. We scientists should remember this motto, not only in our labs, but also when disseminating our ideas. Yet we seem to know no better. Kary Mullis, Nobel Laureate in Chemistry, asserted in his autobiography his belief in astrology. But he is a Capricorn. I’m a Libran, and Librans do not believe in astrology.
Sergio Della Sala is Professor of Human Cognitive Neuroscience at the University of Edinburgh, and co-editor of Neuroscience in Education: The Good, The Bad, and The Ugly.
Image credit: Cod liver oil capsules. Photo by Adrian Wold. CC BY 2.5 via Wikimedia Commons.
The post Pseudoscience surplus appeared first on OUPblog.

What are the costs and impacts of telecare for people who need social care?
In these times of budgetary constraints and demographic change, we need to find new ways of supporting people to live longer in their own homes. Telecare has been suggested as a useful way forward. Some examples of this technology, such as pull-cord or pendant alarms, have been around for years, but these ‘first-generation’ products have given way to more extensive and sophisticated systems. ‘Second-generation’ products literally have more bells and whistles – for instance, alarms for carbon monoxide and floods, and sensors that can detect movement in and out of bed. These sensors send alerts to a call-centre operator who can organise a response, perhaps call out a designated key-holder, organise a visit to see if there is a problem, or ring the emergency services. There are even more elaborate systems that continuously monitor a person’s activity using sensors and analyse these ‘lifestyle’ data to identify changes in usual activity patterns, but these systems are not in mainstream use. In contrast to telehealth – where the recipient is actively involved in transmitting and in many cases receiving information – the sensors in telecare do not require the active engagement of participants to transmit data, as this is done automatically in the background.
Take-up of telecare remains below its potential in England. One recent study estimated that some 4.17 million people aged over 50 could potentially use telecare, while only about a quarter of that number were actually using personal alarms or alerting devices. The Department of Health has similarly suggested that millions of people with social care needs and long-term conditions could benefit from telecare and telehealth. To help meet this need, it launched the 3 Million Lives campaign in partnership with industry to promote the scaling-up of telehealth and telecare.
The hope held by government and commissioners in the NHS and local authorities is that these new assistive technologies not only promote independence and improve care quality but also reduce the use of health and social care services. To decide how much funding to allocate to these promising new services, these commissioners need a solid evidence base. In 2008, the Department of Health launched the Whole Systems Demonstrator (WSD) programme in three local authority areas in England engaged in whole-systems redesign to test the impacts of telecare (for people with social care needs) and telehealth (for people with long-term conditions).
The research that accompanied the WSD programme was extensive. It included quantitative studies investigating health and social care service use, mortality, costs, and the effectiveness of these technologies. Parallel qualitative studies explored the experiences of people using telecare and telehealth and their carers. The research also examined the ways in which local managers and frontline professionals were introducing the new technologies.
Some results from these streams of research have been published with more to come. From the quantitative research, three articles were published in Age and Ageing over the past year. Steventon and colleagues report on the use of hospital, primary care and social services, and mortality for all participants in the trial – around 2,600 people – based on routinely collected data. Two papers report the results of the WSD telecare questionnaire study (Hirani, Beynon et al. 2013; Henderson, Knapp et al. 2014). The questionnaire study included participants from the main trial who filled out questionnaires about their psychological outcomes, their quality of life, and their use of health and social care services.
The most recent paper to be published in Age and Ageing is the cost-effectiveness analysis of WSD telecare. Participants used a second-generation package of sensors and alarms that was passively and remotely monitored. On average, about five items of telecare equipment were provided to people in the ‘intervention’ group. The whole telecare package accounted for just under 10% of the estimated total yearly health and social care costs of £8,625 (adjusting for case mix) for these people. This was more costly than the care packages of people in the ‘usual care’ group (£7,610 per year), although the difference was not statistically significant. The extra cost of gaining a quality-adjusted life year (QALY) associated with the telecare intervention was £297,000. This is much higher than the threshold range – £20,000 to £30,000 per QALY – used by the National Institute for Health and Care Excellence (NICE) when judging whether an intervention should be used in the NHS (National Institute for Health and Clinical Excellence 2008). Given these results, we would therefore caution against thinking that second-generation telecare is the cure-all solution for providing good-quality care to increasing numbers of people with social care needs while containing costs.
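As a rough illustration of what these figures mean (this calculation is not from the paper itself, and the trial's actual QALY estimates came from participant questionnaires), the reported cost difference and cost per QALY imply a very small incremental QALY gain. A minimal sketch, back-deriving that implied gain:

```python
# Hedged sketch: the incremental cost-effectiveness ratio (ICER) is the
# extra cost divided by the extra health benefit (in QALYs). Figures below
# are the annual costs and the ICER reported in the post; the implied QALY
# gain is derived here purely for illustration.

def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Extra cost per quality-adjusted life year gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

annual_cost_telecare = 8_625   # £ per year, 'intervention' group (case-mix adjusted)
annual_cost_usual = 7_610      # £ per year, 'usual care' group
reported_icer = 297_000        # £ per QALY, as reported

incremental_cost = annual_cost_telecare - annual_cost_usual   # £1,015
implied_qaly_gain = incremental_cost / reported_icer          # tiny: ~0.0034 QALYs

print(f"Incremental cost: £{incremental_cost}")
print(f"Implied QALY gain: {implied_qaly_gain:.4f}")
```

A gain of roughly three thousandths of a QALY per £1,015 spent is what pushes the ICER an order of magnitude above the £20,000–£30,000 NICE threshold.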
As with any research, it is important to understand how to best use the findings. The telecare tested during the pilot period was ‘second generation’, so conclusions from this research cannot be applied, for instance, to existing pendant alarm systems currently in widespread use. And telecare systems have continued to evolve since this research started. Moreover, while the results summarised here relate to the telecare participants and do not cover any potential impacts on family carers, there is some evidence that telecare alleviates carer strain.
These findings inevitably raise further questions. What are the broader experiences of those using telecare? What makes a telecare experience positive? And what detracts from the experience? Who can benefit most from telecare? Some answers will emerge as we look across all the findings from the WSD research programme. We also need to look forward to findings from new research, such as the current trial of telecare for people with dementia and their carers (Leroi, Woolham et al. 2013). The ‘big’ question is not whether we should implement a ‘one-size-fits-all’ solution to meet the increasing demands on social care, but for whom these new assistive technologies work best and for whom they are the most cost-effective response.
Catherine Henderson is a researcher at the London School of Economics. She is one of the authors of the paper ‘Cost-effectiveness of telecare for people with social care needs: the Whole Systems Demonstrator cluster randomised trial’, which is published in the journal Age and Ageing.
Age and Ageing is an international journal publishing refereed original articles and commissioned reviews on geriatric medicine and gerontology. Its range includes research on ageing and clinical, epidemiological, and psychological aspects of later life.
Image credit: Senior woman on phone. © bbbrrn, via iStockphoto.
The post What are the costs and impacts of telecare for people who need social care? appeared first on OUPblog.

July 30, 2014
Monthly etymology gleanings for July 2014
Since I’ll be out of town at the end of July, I was not sure I would be able to write these “gleanings.” But the questions have been many, and I could answer some of them ahead of time.
Autumn: its etymology
Our correspondent wonders whether the Latin word from which English, via French, has autumn could be identified with the name of the Egyptian god Atum. The Romans derived the word autumnus, which was both an adjective (“autumnal”) and a noun (“autumn”), from augere “to increase.” This verb’s perfect participle is auctus “rich” (autumn as a rich season). The Roman derivation, though not implausible, looks like a tribute to folk etymology. A more serious conjecture allies autumn to the Germanic root aud-, as in Gothic aud-ags “blessed” (in the related languages, also “rich”). But, more probably, Latin autumnus goes back to Etruscan. The main argument for the Etruscan origin is the resemblance of autumnus to Vertumnus, the name of a seasonal deity (or so it seems), about whom little is known besides the tale of his seduction, in the shape of an old woman, of Pomona, as told by Ovid. Vertumnus, or Vortumnus, may be a Latinized form of an Etruscan name. A definite conclusion about autumnus is hardly possible, even though some sources, while tracing this word to Etruscan, add “without doubt.” The Egyptian Atum was a creator god and the god of the setting sun, so that his connection with autumn is remote at best. Nor do we have any evidence that Atum had a cult in ancient Rome. Everything is so uncertain here that the origin of autumnus must needs remain unknown. In my opinion, the Egyptian hypothesis holds out little promise.

Vertumnus seducing Pomona in the shape of an old woman. (Pomona by Frans de Vriendt “Floris” (Konstnär, 1518-1570) Antwerpen, Belgien, Hallwyl Museum, Photo by Jens Mohr, via Wikimedia Commons)
The origin of so long
I received an interesting letter from Mr. Paul Nance. He writes about so long:
“It seems the kind of expression that should have derived from some fuller social nicety, such as I regret that it will be so long before we meet again or the like, but no one has proposed a clear antecedent. An oddity is its sudden appearance in the early nineteenth century; there are only a handful of sightings before Walt Whitman’s use of it in a poem (including the title) in the 1860–1861 edition of Leaves of Grass. I can, by the way, offer an antedating to the OED citations: so, good bye, so long, in the story ‘Cruise of a Guinean Man’, Knickerbocker: New-York Monthly Magazine 5 (February 1835), p. 105; available on Google Books. Given the lack of a fuller antecedent, suggestions as to its origin all propose a borrowing from another language. Does this seem reasonable to you?”
Mr. Nance was kind enough to append two articles (by Alan S. Kaye and Joachim Grzega) on so long, both of which I had in my folders but have not reread since 2004 and 2005, when I found and copied them. Grzega’s contribution is especially detailed. My database contains only one more tiny comment on so long by Frank Penny: “About twenty years ago I was informed that it [the expression so long] is allied to Samuel Pepys’s expression so home, and should be written so along or so ’long, meaning that the person using the expression must go his way” (Notes and Queries, Series 12, vol. IX, 1921, p. 419). The group so home does turn up in the Diary more than once, but no citation I could find looks like a formula. Perhaps Stephen Goranson will ferret it out. In any case, so long looks like an Americanism, and it is unlikely that such a popular phrase should have remained dormant in texts for almost two centuries.
Be that as it may, I agree with Mr. Nance that a formula of this type probably arose in civil conversation. The numerous attempts to find a foreign source for it carry little conviction. Norwegian does have an almost identical phrase, but, since its antecedents are unknown, it may have been borrowed from English. I suspect (a favorite turn of speech by old etymologists) that so long is indeed a curtailed version of a once more comprehensible parting formula, unless it belongs with the likes of for auld lang syne. It may have been brought to the New World from England or Scotland and later abbreviated and reinterpreted.
“Heavy rain” in languages other than English
Once I wrote a post titled “When it rains, it does not necessarily pour.” There I mentioned many German and Swedish idioms like it is raining cats and dogs, and, rather than recycling that text, will refer our old correspondent Mr. John Larsson to it.
Ukraine and Baltic place names
The comment on this matter was welcome. In my response, I preferred not to talk about things alien to me, but I wondered whether the Latvian place name could be of Slavic origin. That is why I said cautiously: “If this is a native Latvian word…” The question, as I understand it, remains unanswered, but the suggestion is tempting. And yes, of course, Serb/Croat Krajina is an exact counterpart of Ukraina, only without a prefix. In Russian, stress falls on the i; in Ukrainian, I think, the first a is stressed. The same holds for the derived adjectives: Russian ukraínskii ~ Ukrainian ukráinskii. Pushkin said ukráinskaia (feminine).
Slough, sloo, and the rest
Many thanks to those who informed me about their pronunciation of slough “mire.” It was new to me that the surname Slough is pronounced differently in England and the United States. I also received a question about the history of slew. The past tense of slay (Old Engl. slahan) was sloh (with a long vowel), and this form developed like scoh “shoe,” though the verb vacillated between the 6th and the 7th class. The fact that slew and shoe have such dissimilar written forms is due to the vagaries of English spelling. One can think of too, who, you, group, fruit, cruise, rheum, truth, and true, which have the same vowel as slew. In addition, consider Bruin and ruin, which look deceptively like fruit, and add manoeuvre for good measure. A mild spelling reform looks like a good idea, doesn’t it?
The pronunciation of February
In one of the letters I received, the writer expresses her indignation that some people insist on sounding the first r in February. Everybody, she asserts, says Febyooary. In such matters, everybody is a dangerous word (as we will also see from the next item). All of us tend to think that what we say is the only correct norm. Words with the succession r…r tend to lose one of them. Yet library is more often pronounced with both, and Drury, brewery, and prurient have withstood the tendency. February has changed its form many times. Thus, long ago feverer (from Old French) became feverel (possibly under the influence of averel “April”). In the older language of New England, January and February turned into Janry and Febry. However powerful the phonetic forces may have been in affecting the pronunciation of February, of great importance was also the fact that the names of the months often occur in enumeration. Without the first r, January and February rhyme. A similar situation is well-known from the etymology of some numerals. Although the pronunciation Febyooary is equally common on both sides of the Atlantic and is recognized as standard throughout the English-speaking world, not “everybody” has accepted it. The consonant b in February is due to the Latinization of the French etymon (late Latin februarius).
Who versus whom
Discussion of these pronouns lost all interest long ago, because the confusion of who and whom and the defeat of whom in American English go back to old days. Yet I am not sure that what I said about the educated norm is “nonsense.” Who will marry our son? Whom will our son marry? Is it “nonsense” to distinguish them, and should (or only can) it be who in both cases? Despite the rebuke, I believe that even in Modern American English the woman who we visited won’t suffer if who is replaced with whom. But, unlike my opponent, I admit that tastes differ.
Wrap
Another question I received was about the origin of the verb wrap. This is a rather long story, and I decided to devote a special post to it in the foreseeable future.
PS. I notice that of the two questions asked by our correspondent last month only copacetic attracted some attention (read Stephen Goranson’s response). But what about hubba hubba?
Anatoly Liberman is the author of Word Origins And How We Know Them as well as An Analytic Dictionary of English Etymology: An Introduction. His column on word origins, The Oxford Etymologist, appears on the OUPblog each Wednesday. Send your etymology question to him care of blog@oup.com; he’ll do his best to avoid responding with “origin unknown.” Subscribe to Anatoly Liberman’s weekly etymology articles via email or RSS.
The post Monthly etymology gleanings for July 2014 appeared first on OUPblog.