Oxford University Press's Blog
November 7, 2014
Eleanor Roosevelt’s last days
When Eleanor Roosevelt died on this day (7 November) in 1962, she was widely regarded as “the greatest woman in the world.” Not only was she the longest-tenured First Lady of the United States; she was also a teacher, author, journalist, diplomat, and talk-show host. She became a major participant in the intense debates over civil rights, economic justice, multiculturalism, and human rights that remain central to policymaking today. As her husband’s most visible surrogate and collaborator, she became the surviving partner who carried their progressive reform agenda deep into the post-war era, helping millions of needy Americans gain a foothold in the middle class, dismantling Jim Crow laws in the South, and transforming the United States from an isolationist into an internationalist power. In spite of her celebrity, or more likely because of it, she had to endure a prolonged period of intense suffering and humiliation before dying, due in large part to her end-of-life care.
Roosevelt’s terminal agonies began in April 1960 when, at 75 years of age, she consulted her personal physician, David Gurewitsch, for increasing fatigue. On detecting mild anemia and an abnormal bone marrow, he diagnosed “aplastic anemia” and warned Roosevelt that transfusions could bring temporary relief but that, sooner or later, her marrow would break down completely and internal hemorrhaging would result. Roosevelt responded simply that she was “too busy to be sick.”
For a variety of arcane reasons, Roosevelt’s hematological disorder would be given a different name today – myelodysplastic syndrome – and most likely treated with a bone marrow transplant. Unfortunately, in 1962 there was no effective treatment for Roosevelt’s hematologic disorder, and over the ensuing two years, Gurewitsch’s grim prognosis proved correct. Though she entered Columbia-Presbyterian Hospital in New York City repeatedly for tests and treatments, her “aplastic anemia” progressively worsened. Premarin produced only vaginal bleeding necessitating dilatation and curettage; transfusions gave temporary relief of her fatigue, but at the expense of severe bouts of chills and fever. Repeated courses of prednisone produced only the complications of a weakened immune system. By September 1962, deathly pale, covered with bruises, and passing tarry stools, Roosevelt begged Gurewitsch in vain to let her die. She began spitting out pills or hiding them under her tongue, refused further tests, and demanded to go home. Eight days after she left the hospital, the TB bacillus was cultured from her bone marrow.

Gurewitsch was elated. The new finding, he proclaimed, had increased Roosevelt’s chances of survival “by 5000%.” Roosevelt’s family, however, was unimpressed and insisted that their mother’s suffering had gone on long enough. Undeterred, Gurewitsch doubled the dose of TB medications, gave additional transfusions, and ordered tracheal suctioning and a urinary catheter inserted.
In spite of these measures, Roosevelt’s condition continued to deteriorate. Late in the afternoon of 7 November 1962 she ceased breathing. Attempts at closed chest resuscitation with mouth-to-mouth breathing and intra-cardiac adrenalin were unsuccessful.
Years later, when reflecting upon these events, Gurewitsch opined: “He had not done well by [Roosevelt] toward the end. She had told him that if her illness flared up again, and fatally, that she did not want to linger on and expected him to save her from the protracted, helpless, dragging out of suffering. But he could not do it.” He said, “When the time came, his duty as a doctor prevented him.”
The ethical standards of morally optimal care for the dying that we hold dear today had not yet been articulated when Roosevelt became ill and died. Most of them were violated (albeit unknowingly) by Roosevelt’s physicians in their desperate efforts to halt the progression of her hematological disorder: that of non-maleficence (i.e., avoiding harm), by pushing prednisone when it was having no apparent therapeutic effect; that of beneficence (i.e., limiting interventions to those that are beneficial), by performing cardiopulmonary resuscitation in the absence of any reasonable prospect of a favorable outcome; and that of futility (avoiding futile interventions), by continuing transfusions, performing tracheal suctioning, and (some might even argue) beginning anti-tuberculosis therapy after it was clear that Roosevelt’s condition was terminal.
Roosevelt’s physicians also unknowingly violated the principle of respect for persons, by ignoring her repeated pleas to discontinue treatment. However, physician-patient relationships were more paternalistic then, and in 1962 many, if not most, physicians likely would have done as Gurewitsch did, believing as he did that their “duty as doctors” compelled them to preserve life at all cost.
Current bioethical concepts and attitudes would dictate a different, presumably more humane, end-of-life care for Eleanor Roosevelt from that received under the direction of Dr. David Gurewitsch. While arguments can be made about whether any ethical principles are timeless, Gurewitsch’s own retrospective angst over his treatment of Roosevelt, coupled with ancient precedents proscribing futile and/or maleficent interventions, and an already growing awareness of the importance of respect for patients’ wishes in the early part of the 20th century, suggest that even by 1962 standards, Roosevelt’s end-of-life care was misguided. Nevertheless, in criticizing Gurewitsch for his failure “to save [Roosevelt] from the protracted, helpless, dragging out of suffering,” one has to wonder if and when a present-day personal physician of a patient as prominent as Roosevelt would have the fortitude to inform her that nothing more can be done to halt the progression of the disorder that is slowly carrying her to her grave. One wonders further if and when that same personal physician would have the fortitude to inform a deeply concerned public that no further treatment will be given, because in his professional opinion, his famous patient’s condition is terminal and further interventions will only prolong her suffering.
Evidence that recent changes in the bioethics of dying have had an impact on the end-of-life care of famous patients is mixed. Former President Richard Nixon and another famous former First Lady, Jacqueline Kennedy Onassis, both had living wills and died peacefully after forgoing potentially life-prolonging interventions. The deaths of Nelson Mandela and Ariel Sharon were different. Though 95 years of age and clearly over-mastered by a severe lung infection as early as June 2013, Mandela was maintained on life support in a vegetative state for another six months before finally dying in December of that year. Sharon’s dying was even more protracted, thanks to the aggressive end-of-life care provided by Israeli physicians. After a massive hemorrhagic stroke destroyed his cognitive abilities in 2006, a series of surgeries and on-going medical care kept Sharon alive until renal failure finally ended his suffering in January 2014. Thus, although bioethical concepts and attitudes regarding end-of-life care have undergone radical changes since 1962, these contrasting cases suggest that those caring for world leaders at the end of their lives today are sometimes as incapable as Roosevelt’s physicians were a half century ago of saving their patients from the protracted suffering and indignities of a lingering death.










The origin of work-hour regulations for house officers
Interns and residents have always worked long hours in hospitals, and there has always been much to admire about this. Beyond the educational benefits that accrue from observing the natural history of disease and therapy, long hours help instill a sense of commitment to the patient. House officers learn that becoming a doctor means learning to meet the needs of others. This message has never been lost on them.
However, it has also long been recognized that house officers are routinely overworked. This point was emphasized in the first systematic study of graduate medical education, published in 1940. In the 1950s and 1960s, the hazards of sleep deprivation became known, including mood changes, depression, impaired cognition, diminished psychomotor functioning, difficulty with interpersonal relationships, and an increased risk of driving accidents. In the 1970s, the phenomenon of burnout was recognized. In the mid-1980s, after prospective payment of hospitals was introduced, the workload of house officers became greater still, as there were now many more patients to see, the patients were sicker, the level of care was more complex, and there was less time with which to care for patients. House officers understood they were in a dilemma where their high standards of professionalism were used by others to justify sometimes inhumane levels of work.
Despite their long hours, the public generally believed that house officers provided outstanding medical and surgical care. Through the 1980s, the traditional view that medical education enhanced patient care remained intact. So did the long-standing belief that teaching hospitals provided the best patient care — in large part because they were teaching hospitals.
In 1984, the traditional belief that medical education leads to better patient care received a sharp rebuke after 18-year-old Libby Zion died at the New York Hospital. Ms. Zion, a college freshman, had presented to the hospital with several days of fever and an earache. The next morning she was dead. The case quickly became the center of intense media interest and a cause célèbre for limiting house officer work hours.

The public’s fear about the safety of hospitals increased in the 1990s. In 1995, a seeming epidemic of errors, including wrong-site surgery and medication mistakes, erupted at US hospitals. These high-profile tragedies received an enormous amount of media attention. The most highly publicized incident involved the death of 39-year-old Betsy Lehman, a health columnist at the Boston Globe, from a massive chemotherapy overdose while being treated for breast cancer at the renowned Dana-Farber Cancer Institute. Public concern for patient safety reached a crescendo in 1999, following the release of the Institute of Medicine’s highly publicized report To Err Is Human. The report concluded that 44,000 to 98,000 Americans died in US hospitals every year because of preventable medical errors.
The result was that in the early 2000s, a contentious debate concerning resident work hours erupted. Many within the medical profession felt that work-hour regulations need not be imposed. They correctly pointed out that little evidence existed that patients had actually suffered at the hands of overly tired residents, and they also claimed that resident education would suffer if held hostage to a time clock. Critics, particularly from outside the profession, pointed to valid physiological evidence that fatigue causes deterioration of high-level functioning; they also argued that high-quality education cannot occur when residents are too tired to absorb the lessons being taught. As the debate proceeded, the public’s voice could not be ignored, for the voices of consumer groups and unions were strong, and Congress threatened legislative action if the profession did not respond on its own.
Ultimately, the medical profession acquiesced. In 2002, the Accreditation Council for Graduate Medical Education (ACGME), which oversees and regulates residency programs, established new work-hour standards for residency programs in all specialties. Effective 1 July 2003, residents were not to be scheduled for more than 80 hours of duty per week, averaged over a four-week period. Over-night call was limited to no more frequently than every third night, and residents were required to have one day off per week. House officers were permitted to remain in the hospital for no more than six hours after a night on-call to complete patient care, and a required 10-hour rest period between duty periods was established.
Ironically, as the ACGME passed its new rules, there was little evidence that resident fatigue posed a danger to patients. The Libby Zion case, which fueled the public’s concern with resident work hours, was widely misunderstood. The problems in Ms. Zion’s care resulted from inadequate supervision, not house officer fatigue. At the time the ACGME established its new rules, the pioneering safety expert David Gaba wrote, “Despite many anecdotes about errors that were attributed to fatigue, no study has proved that fatigue on the part of health care personnel causes errors that harm patients.”
On the other hand, the controversy over work hours illustrated a fundamental feature of America’s evolving health care system: Societal forces were more powerful than professional wishes. The bureaucracy in medical education responded slowly to the public’s concerns that the long work hours of residents would endanger patient safety. Accordingly, the initiative for reform shifted to forces outside of medicine — consumers, the federal government, and labor unions. It became clear that a profession that ignored the public’s demand for transparency and accountability did so at its own risk.










International Day of Radiology and brain imaging
Tomorrow, 8 November, will mark the third anniversary of the now established International Day of Radiology, an event organised by the European Society of Radiology and the Radiological Society of North America: a day on which health care workers worldwide mark their debt of gratitude to Wilhelm Roentgen’s great discovery of x-rays and its subsequent applications in the field of medical practice, today known as radiology or medical imaging. On 8 November 1895, Roentgen conducted his seminal experiment, which was to change the world forever and earn him the first Nobel Prize for Physics. This day is now celebrated by over one hundred learned radiology societies worldwide to promote the importance of this discipline in current medical practice, a discipline which has changed beyond all recognition since the early days of the pioneers and radiation martyrs. The day is a celebration of all radiology team members’ contribution to patient care. In the early days of this new discipline, practitioners were not confined to members of the medical profession but included any interested lay member of the public. Only with the passage of time did the discipline of radiology become the sole preserve of medical practitioners, with appropriate training and regulation introduced to raise standards.
It is interesting to note that the first multidisciplinary society devoted to the new subject, ‘The Roentgen Society’, was founded in 1897 in London by David Walsh, F.E. Fenton, and F. Harrison Low. In the summer of that year, Professor Silvanus Thompson, the physicist, Fellow of the Royal Society of London, brilliant lecturer, populariser of science, prolific author, and true Victorian polymath, became its inaugural President. It has since metamorphosed into the current British Institute of Radiology.
One of the celebratory themes of this year’s International Day of Radiology is brain imaging. The immediate early application of x-rays was to look for fractures and localise foreign bodies, leading to their application in the military setting. In the early days, x-rays did not allow doctors to directly visualise the brain. Arthur Schueller, the Viennese radiologist who worked closely with G. Holzknecht, became an early pioneer in using x-rays to make neuroradiological diagnoses and help neurosurgeons deal with brain tumours.

Although the brain itself could not be seen with x-rays, tumours often produced secondary signs, such as erosion of the skull bones. Localisation of tumours was not an exact science and early detection was difficult. The American Walter Dandy, who worked at Johns Hopkins Hospital, pioneered imaging of the ventricles by introducing air and contrast; this assisted surgeons in localising tumours of the brain by looking for ventricular displacement. It is claimed that the great Pulitzer Prize-winning neurosurgeon Harvey Cushing thought that this technique would take the skill out of making a diagnosis by clinically examining the patient, though by today’s standards such an investigation would hardly be considered a pleasant one to undergo.
In Portugal in the late 1920s the polymath Egas Moniz pioneered cerebral angiography, enabling doctors to visualise the blood supply to the brain, including tumours; this was a great step forward. Moniz was an author and researcher, served at one time as Portuguese Foreign Secretary, and was awarded the Nobel Prize in 1949 for his medical advances. However, a really great leap in brain imaging occurred in the early 1970s, when CT scanning (invented by the British genius Hounsfield) came of age, enabling doctors to visualise the brain itself; Magnetic Resonance Imaging (MRI) followed in the 1980s, further clarifying the workings of the normal and abnormal brain.
Today diagnostic imaging, including the more sophisticated CT scanners, is available even in less affluent countries, and its applications and uses in patient care continue to multiply. It has replaced some of the earlier, more dangerous and uncomfortable investigations endured by the preceding generations of patients. More affluent nations continue to see an exponential growth in modern radiological investigations; such is our fascination with high technology.
Today we salute the pioneers in radiology whose efforts have left us with safer, more accurate, and more patient-friendly tests than ever before. To find out more about International Day of Radiology and its activities, visit the website.
Heading Image: © Nevit Dilmen, Rad 1706 False colour skull. CC BY-SA 3.0 via Wikimedia Commons.










November 6, 2014
Looking for Tutankhamun
Poor old king Tut has made the news again – for all the wrong reasons, again.
In a documentary that aired on the BBC two weeks ago, scientists based at the EURAC-Institute for Mummies and the Iceman unveiled a frankly hideous reconstruction of Tutankhamun’s mummy, complete with buck teeth, a sway back, Kardashian-style hips, and a club foot. They based it on CT-scans of the mummy from 2005 and their own research, claiming to have identified a host of genetic disorders and physical deformities suffered by the boy-king, who died around age 19 some 3,300 years ago.
The English-language newspaper Ahram Online has aired the views of three Egyptian Egyptologists who are just as shocked by the reconstruction as many television viewers were. There are old and understandable sensitivities here: Western scientists have been poking around Egyptian mummies for more than 200 years, while the discovery of Tutankhamun’s tomb in 1922 coincided with the birth of an independent Egyptian nation after decades of European colonialism. The ensuing tussle between excavator Howard Carter and the government authorities, over where the tomb finds would end up (Cairo won, and rightly so), highlighted deep-seated tensions about who ‘owned’ ancient Egypt, literally and figuratively. It’s safe to say that the last century has seen king Tut more involved in politics than he ever was in his own lifetime.
Most Egyptologists could readily debunk the ‘evidence’ presented by the EURAC team – if we weren’t so weary of debunking television documentaries already. (Why do the ancient Romans get academic royalty like Mary Beard, while the ancient Egyptians get the guy from The Gadget Show?) What’s fascinating is how persistent – and how misguided – lurid interest in the dead bodies of ancient Egyptians is, not to mention the wild assumptions made about the skilled and stunning art this culture produced. The glorious gold mask, gilded shrines and coffins, weighty stone sarcophagus, and hundreds of other objects buried with Tutankhamun were never meant to show us a mere human, but to manifest the razzle-dazzle of a god-king.
Around the time of Tutankhamun’s reign, artists depicted the royal family and the gods with almond eyes, luscious lips, and soft, plump bodies. These were never meant to be true-to-life images, as if the pharaoh and his court were posting #nomakeupselfie snaps on Twitter. Each generation of artists developed a style that was distinctive to a specific ruler, but which also linked him to a line of ancestors, emphasizing the continuity and authority of the royal house. The works of art that surrounded Tutankhamun in life, and in death, were also deeply concerned with a king’s unique responsibilities to his people and to the gods.

All the walking sticks buried in the tomb – more than 130 of them, one of which Carter compared to Charlie Chaplin’s ubiquitous prop – emphasize the king’s status at the pinnacle of society (nothing to do with a limp). The chariots were luxury items (quite macho ones, at that), and Tutankhamun’s wardrobe was the haute couture of its day, with delicate embroidery and spangly sequins. Much of the tomb was taken up with deeply sacred objects, too: guardian statues at the doorways, magic figures bricked into the walls, and two dozen bolted shrines protecting wrapped statues of the king and various gods. Not to mention the shrines, sarcophagus, and coffins that held the royal mummy – a sacred object in itself, long before science got a hold of it.
As for the diseases and deformities Tutankhamun is said to have suffered? Allegations of inbreeding don’t add up: scholars have exhaustively combed through the existing historical sources that relate to Tutankhamun (lots and lots of rather dry inscriptions, I’m afraid), and as yet there is no way to identify his biological parents with any certainty. Don’t assume that DNA is an easy answer, either. Not only do we not know the identity of almost any of the ‘royal’ mummies that regularly do the rounds on TV programmes, but also the identification of DNA from ancient mummies is contested – it simply doesn’t survive in the quantity or quality that DNA amplification techniques require. Instead, many of the ‘abnormal’ features of Tutankhamun’s mummy, like the supposed club foot and damage to the chest and skull, resulted from the mummification process, as research on other mummies has surmised. Embalming a body to the standard required for an Egyptian king was a difficult and messy task, left to specialist priests. What mattered just as much, if not more, was the intricate linen wrapping, the ritual coating of resin, and the layering of amulets, shrouds, coffins, and shrines that Carter and his team had to work through in order to get to the fragile human remains beneath.
The famous mummy mask and spectacular coffins we can see in the Museum of Egyptian Antiquities in Cairo today, or in copious images online, should stop us in our tracks with their splendour and skill. That’s what they were meant to do, for those few people who saw them and for the thousands more whose lives and livelihoods depended on the king. But they should also remind us of how they got there: the invidious colonial system under which archaeology flourished in Egypt, for a start, and the thick resin that had to be hammered off so that the lids could be opened and the royal mummy laid bare. Did king Tut have buck teeth, waddle like a duck, drag race his chariot? Have a look at that mask: do you think we’ve missed the point? Like so many modern engagements with the ancient past, this latest twist in the Tutankhamun tale says more about our times than his.










The economic consequences of Nehru
As Nehru was India’s longest-serving prime minister, and both triumph and tragedy had accompanied his tenure, this is a fit occasion for a public debate on what had been attempted in the Nehru era and the extent of its success. I must perforce confine myself to the economics. This, though, serves as a corrective to the tendency of political historians to mostly concentrate on the other aspects of his leadership. For instance, Sarvepalli Gopal’s noted three-volume biography bestows a single chapter on Nehru’s economic policy. However, reading through the speeches of Nehru we would find that the economy had remained his continuing preoccupation even amidst the debates on social policy in the Lok Sabha or on de-colonisation in the United Nations. Reading these speeches is indeed advisable, as strongly held positions on the economy in the Nehru era have often been crowded in by ideological predilection when they have not been clouded over by ignorance.
The objective of economic policy in the 1950s was to raise per capita income in the country via industrialisation. The vehicle for this was the Nehru-Mahalanobis strategy, the decision to this end having been taken as early as 1938 by the National Planning Committee of the Congress constituted by Subhas Chandra Bose during his all-too-brief and ill-fated presidentship of the Party. The Committee was chaired by Nehru. The cornerstone of the strategy was to build machines as fast as possible, as capital goods were seen as a basic input in all lines of production. While a formal model devised by Prasanta Chandra Mahalanobis lent the strategy its formal status, it was the so-called ‘plan frame’ that had guided the allocation of spending. In retrospect, the allocation of investment across lines of production in the Second Five-Year Plan was quite balanced, with substantial attention given also to infrastructure, the building of which, given the state of the economy then, only the public sector could have initiated.
The Nehru-Mahalanobis strategy had attracted criticism. I discuss two of these criticisms at this stage and turn to the third at a later stage. Thus, Vakil and Brahmananda argued that the Mahalanobis model neglected wage goods, that is, the goods consumed by the workers who made up the majority of the country’s population. While important per se, in practical terms this criticism turned out to be somewhat academic, as the plan frame – as opposed to the model – had given due importance to agriculture. In fact, the Green Revolution, which is dated from the late 60s, cannot entirely be divorced from the attention paid to agriculture in the Nehru era. The Grow More Food campaign and the trials in the country’s extended agricultural research network all contributed to it. Next, B.R. Shenoy had written a note of dissent to the Second Five-Year Plan document which queried the use of controls as part of the planning process. Shenoy’s was the well-known position in economic theory that the allocative efficiency of the competitive market mechanism cannot be improved upon. While this is a useful corrective to ham-handed government intervention, it was known even by the 1950s that a free market need not necessarily take the economy to the next level. The Pax Britannica had been a time of free markets, though accompanied by political repression, and this had not helped India much during the two centuries since Plassey. Moreover, many of the extant controls were war-time controls that had not been rescinded. Investment licensing, though, was a central element in planning in India, and Shenoy was right in identifying it as such.
As the maxim ‘the proof of the pudding lies in the eating’ must apply most closely to matters economic, the Nehru-Mahalanobis strategy can be considered only as good as its outcomes. It had aimed to raise the rate of growth of the economy. With the distance that half a century affords us and the aid of superior statistical methods, we are now in a position to state that its early success was nothing short of spectacular. Depending upon your source, per capita income in India had either declined or stagnated during the period 1900-47. Over 1950-65, its annual growth was approximately 1.7 percent. India’s economy, which had been no more than a colonial enclave for more than two centuries, had been quickened. It is made out that this quickening of the economy in the fifties was no great shakes, as the initial level of income was low and a given increase in it would register a higher rate of growth than at a later stage in the progression. This confounds statistical description with an economic assessment. It is a widely recognised feature of economic growth that every increase in wealth makes the next step that much easier to take, due to increasing returns to scale. The principle works both ways, rendering the revival of an economy trapped at a low level of income that much more difficult. It is worth stating in this context that the acceleration of growth achieved in the nineteen fifties has not been exceeded since, and that India grew faster than China in the Nehru era.
So if the Nehru-Mahalanobis strategy had led to such a good start, why were the early gains not sustained? The loss of the early vitality in the economy had to do partly with political economy and partly with a flaw in the strategy itself. The death of Nehru created a crisis of leadership in the Congress Party, which was transferred to the polity. It took almost a decade and a half for stability to be restored. The consequence was felt in the governance of the public sector, and public investment, which had been the engine of growth since the early fifties, slowed. Additionally, the private corporate sector, which, contrary to conventional wisdom, had flourished under Nehru, was initially repressed by Indira Gandhi. Private investment collapsed. This held back the acceleration of economic growth.
Even though we now have reason to believe that the Nehru-Mahalanobis strategy ignited the mechanism of long-term growth that remains at work to this day, namely cumulative causation, the strategy itself was incomplete. This is best understood by reference to the Asian Development Model as it played out in the economies of east Asia. These economies pursued more or less the same strategy as India, in that the state fostered industrialization. But a glaring difference marks the Indian experience: the absence of a serious effort to build human capabilities via education and training. In east Asia this took the form of a spreading of schooling, vocational training, and engineering education. In India, on the other hand, public spending on education turned towards technical education at the tertiary level too early on. The slow spread of schooling ensured that the growth of productivity in the farm and the factory remained far too slow. The pace of poverty reduction therefore also remained slow, which, via positive feedback, slowed the expansion of demand needed for faster growth of the economy.
It is intriguing that the issue of schooling did not figure prominently in the thinking of India’s planners, especially as it was part of Gandhi’s Constructive Programme. This had not gone unnoticed even at that time. B.V. Krishnamurthi, then at Bombay University, had pointed out that the priorities of the Second Five-Year Plan, undergirded by the Mahalanobis model, were skewed. He castigated it for a bias toward “river-valley projects”, reflected in the paltry sums allocated to education. But it was the argument advanced by him for why spending on schooling matters that was prescient. He argued that education would enable Indians to attend to the question of their livelihood themselves without relying on the government, thus lightening the economic burden of the latter, presumably leaving it to build more capital goods in the long run as envisaged in the Mahalanobis model. But this was not to be, with enormous consequences for not only the economy but also the effectiveness of democracy in India.
While the failure to initiate a programme of building the capabilities of the overwhelming majority of our people is a moral failure of colossal proportions, we would be missing the wood for the trees if we do not recognize the economic significance of the short Nehru era in the long haul of India’s history. A moribund economy had been quickened. This would have been the pre-condition for most changes in a country with unacceptably low levels of per capita income. It is yet to be demonstrated how this could have been achieved in the absence of the economic strategy navigated through a democratic polity by Jawaharlal Nehru.
Headline image credit: 1956: U.S. President Dwight D. Eisenhower welcomes Prime Minister Jawaharlal Nehru to the White House in Washington, D.C. CC BY-ND 2.0 via U.S. Embassy New Delhi Flickr.










A reading list of Ancient Greek classics
This selection of ancient Greek literature includes philosophy, poetry, drama, and history. It introduces some of the great classical thinkers, whose ideas have had a profound influence on Western civilization.
Jason and the Golden Fleece by Apollonius of Rhodes
Apollonius’ Argonautica is the dramatic story of Jason’s voyage in the Argo in search of the Golden Fleece, and how he wins the aid of the Colchian princess and sorceress Medea, as well as her love. Written in the third century BC, it was influential on the Latin poets Catullus and Ovid, as well as on Virgil’s Aeneid.
Poetics by Aristotle
This short treatise has been described as the most influential book on poetry ever written. It is a very readable consideration of why art matters which also contains practical advice for poets and playwrights that is still followed today.
The Trojan Women and Other Plays by Euripides
One of the greatest Greek tragedians, Euripides wrote at least eighty plays, of which seventeen survive complete. The universality of his themes means that his plays continue to be performed and adapted all over the world. In this volume three great war plays, The Trojan Women, Hecuba, and Andromache, explore suffering and the endurance of the female spirit in the aftermath of bloody conflict.
The Histories by Herodotus
Herodotus was called “the father of history” by Cicero because the scale on which he wrote had never been attempted before. His history of the Persian Wars is an astonishing achievement, and is not only a fascinating history of events but is full of digression and entertaining anecdote. It also provokes very interesting questions about historiography.
The Iliad by Homer
Homer’s two great epic poems, the Odyssey and the Iliad, have created stories that have enthralled readers for thousands of years. The Iliad describes a tragic episode during the siege of Troy, sparked by a quarrel between the leader of the Greek army and its mightiest warrior, Achilles; Achilles’ anger and the death of the Trojan hero Hector play out beneath the watchful gaze of the gods.
Republic by Plato
Plato’s dialogue presents Socrates and other philosophers discussing what makes the ideal community. It is essentially an enquiry into morality, and why justice and goodness are fundamental. Harmonious human beings are as necessary as a harmonious society, and Plato has profound things to say about many aspects of life. The dialogue contains the famous myth of the cave, in which only knowledge and wisdom will liberate man from regarding shadows as reality.
Greek Lives by Plutarch
Plutarch wrote forty-six biographies of eminent Greeks and Romans in a series of paired, or parallel, Lives. This selection of nine Greek lives includes Alexander the Great, Pericles, and Lycurgus, and the Lives are notable for their insights into personalities, as well as for what they reveal about such things as the Spartan regime and social system.
Antigone, Oedipus the King, Electra by Sophocles
In these three masterpieces Sophocles established the foundation of Western drama. His three central characters are faced with tests of their will and character, and their refusal to compromise their principles has terrible results. Antigone and Electra are bywords for female resolve, while Oedipus’ discovery that he has committed both incest and patricide has inspired much psychological analysis, and given his name to Freud’s famous complex.
Heading image: Porch of Maidens by Thermos. CC BY-SA 2.5 via Wikimedia Commons.










Ancient voices for today [infographic]
The ancient writers of Greece and Rome are familiar to many, but what do their voices really tell us about who they were and what they believed? In Twelve Voices from Greece and Rome, Christopher Pelling and Maria Wyke provide a vibrant and distinctive introduction to twelve of the greatest authors from ancient Greece and Rome, writers whose voices still resonate across the centuries. Below is an infographic that shows how each of the great classical authors would describe their voice today, if they could.
Download the infographic in pdf or jpeg.
Featured image credit: “Exterior of the Colosseum” by Diana Ringo. Licensed under CC BY-SA 3.0 via Wikimedia Commons.










Salamone Rossi as a Jew among Jews
Grove Music Online presents this multi-part series by Don Harrán, Artur Rubinstein Professor Emeritus of Musicology at the Hebrew University of Jerusalem, on the life of Jewish musician Salamone Rossi on the anniversary of his birth in 1570. Professor Harrán considers three major questions: Salamone Rossi as a Jew among Jews; Rossi as a Jew among Christians; and the conclusions to be drawn from both. The following is the second installment, continued from part one.
By introducing “art music” into the synagogue Rossi was asking for trouble. He is said by Leon Modena (d. 1648), the person who encouraged him to write his Hebrew songs, to have “worked and labored to add from his secular to his sacred works” (“secular” meaning Gentile compositions). As happened when Modena tried to introduce art music into the synagogue in 1605, he and Rossi feared the composer’s works would awaken hostility. To answer prospective objections, Modena added to Rossi’s collection of “songs” the same responsum he wrote, many years before, on the legitimacy of performing art music in the synagogue. He said:
It could be that among the exiled there are sanctimonious persons who try to eliminate anything new in the synagogue or would prohibit a collection of Hebrew “songs.” To avoid this, I decided to reproduce here in print what I wrote in my responsum eighteen years ago with the intention of closing the mouth of anyone speaking nonsense about art music.
To quiet these same “sanctimonious persons” Rossi was in need of a patron. He found him in Moses Sullam, whom he described as a “courageous, versatile man, in whom all learning and greatness are contained.” Sullam encouraged Rossi to overcome the obstacles in the way of composing Hebrew songs, as it was not easy to write to Hebrew words with their accentual and syntactic demands so different from those in Italian. “How many times did I toil, at your command,” so Rossi declares, “until I was satisfied, ordering my songs with joyful lips.” Sullam had his own private synagogue, and it was there that Rossi probably first tried out the songs to gauge the reaction of singers and listeners. His efforts were favorably received. “When people sang them,” Modena reports, “they were delighted with their many good qualities. The listeners too were radiant, each of them finding it pleasant to hear them and wishing to hear more.” Rossi must have taken heart from these and other “friends”—thus they are called in the preface to the collection.
But it was not enough to have the influential Sullam, “highly successful and well known in Mantua,” behind him. Rossi needed rabbinical support, and here Modena, who followed the progress of the collection from its inception, rushed to his defense. For Modena the collection marked the resuscitation of Hebrew art music after its being forgotten with the destruction of the Second Temple. Modena exalted the composer, noting his importance in what he described as a Jewish musical renascence. He wrote that “the events of our foreign dwellings and of our restless running are dispersed over the lands, and the vicissitudes of life abroad were enough to make them [the Jews] forget all knowledge and lose all intellect.” Yet what was lost has now been recovered. “Let them praise the name of the Lord, for Solomon [= Salamone, in reference to King Solomon] alone is exalted nowadays in this wisdom. Not only is he wiser in music than any man of our nation” but he restored the once glorious music heard in the Temple.
Rossi, who was scared to death over how his Hebrew works would be received, asked Modena to prepare them for the printer; in Rossi’s words, “I asked him to prevent any mishap from coming to the composition, to prepare it [for typesetting], embellish it, proofread it, and look out for typographical errors and defects.” Modena composed a foreword to the collection and three dedicatory poems; he included, as already said, the early responsum from 1605 together with its approval by five Venetian rabbis. The collection went out into the world with as much rabbinical support as any composer could hope to receive.

The major problem for Rossi and Modena was how to narrow the gap between contemporary art music practiced by the Christians and Hebrew music practiced in the synagogue. To do this, Modena resorted to a clever remark of Immanuel Haromi, who wrote around 1335: “What will the science of music [niggun] say to others? ‘I was stolen, yes stolen from the land of the Hebrews’ [Genesis 40:15: gunnov gunnavti mi-eretz ha-‘ivrim].” If the Christians “stole” their music from the Hebrews, who, in their wanderings, forgot their former musical knowledge, then by cultivating art music in the early seventeenth century the Jews in a sense recuperated what was theirs to start with. In short, the only thing that separates the art music of the Jews from that of the Christians is its language: Hebrew.
When he composed his Hebrew works Rossi seems to have had one thing in mind: he was interested in their beautiful performance. Christians, who were familiar with Jewish sacred music from their visits to the synagogues, were usually shocked by what they heard. Here is how Gregorio Leti described Jewish prayer services in Rome in 1675:
No sooner do they [the Jews] enter their sanctuary than they begin to shout with angry voices, shaking their heads back and forth, making certain terribly ridiculous gestures, only to continue, sitting down, with these same shouts, which “beautiful” music lasts until their rabbi begins his sermon.
Even Leon Modena, who was a cantor at the Italian synagogue in Venice, was disappointed with the way music was performed in the synagogue. He rebuked the cantors for being so negligent as “to bray like asses” or “shout to the God of our fathers as a dog and a crow.” Oh, how the Jews are fallen, for “we were once masters of music in our prayers and our praises now become a laughingstock to the nations, for them to say that no longer is science in our midst.”
Both Modena and Rossi were concerned over how Christians would respond to Jewish music. They wanted to prove that whatever the Christians do, the Jews can do equally well. They may not be physically strong, Modena explains, but, in the “sciences,” they are outstanding:
No more will bitter words about the Hebrew people
be uttered, in a voice of scorn, by the haughty.
They will see that full understanding is as much a portion
of theirs [the people’s] as of others who flaunt it.
Though weak in [dealing] blows, in sciences
they [the people] are a hero, as strong as oaks.
[Third dedicatory poem by Leon Modena to Rossi’s “Songs by Solomon.”]
Headline image credit: Esther Scroll by Salom Italia, circa 1641. The Jewish Museum (New York City). Public domain via Wikimedia Commons.










November 5, 2014
1989 revolutions, 25 years on
This season marks the silver anniversary of the wildfire revolutions that swept across Eastern Europe in the summer and autumn of 1989. The upheavals led to the liberation of Eastern Europe from Soviet control, the reunification of Germany, and the demise of the Soviet Union itself two years later. Their dizzying speed and domino effect caught everyone by surprise, from the confused communist elites and veteran Kremlinologists to the participants themselves. The carnivalesque atmosphere of People Power in the streets of Eastern Europe brought a touch of the surreal into world politics, as electricians and playwrights became heads of state and one communist dictator was executed on Christmas Day on live television. That most of the momentous changes that year occurred without bloodshed (apart from Romania) rendered these events all the more improbable, to the point that 1989 is commonly referred to as an annus mirabilis.
Whatever hopes and fears first greeted the dismantling of the Cold War order, the events of 1989 have changed the world in ways that we are only now beginning to understand. Ready-made historical analogies were in good supply at the time, ranging from comparisons to the 1848 “springtime of nations” to proclamations that 1989 was the greatest bicentennial tribute to the French Revolution two centuries earlier. Other contemporary assessments were less sanguine: the spectre of World War II hung heavily in the air, as many observers at the time feared that the collapse of communism would undermine a stable and relatively prosperous Europe to the advantage of a new and powerful “Fourth Reich” straddling the continent. Thatcher’s famous comment that “[i]f we are not careful, the Germans will get in peace what Hitler couldn’t get in the war” reflected broad international anxiety about the new Germany and the new Europe. Many of those dark early predictions of a revanchist Germany and a politically explosive post-communist Europe fortunately did not come to pass, as Germany assumed its place as the pivotal “civil power” (and banker) of Europe. This unification of Germany, unlike its 1871 predecessor, ushered in a different Europe, prompting some commentators to liken 11/9 (the day the Berlin Wall was breached) to 9/11 in terms of long-term European historical significance.
Yet however much the dismantling of the Berlin Wall may have provided the upheavals with their most potent telegenic imagery, it is well to remember that the fuse was lit in the docks of Gdansk almost a decade before, and events in Poland in the spring of 1989 served as the impetus for the reform drive across Czechoslovakia, Hungary, East Germany, and the Baltics. Still, the transformations of 1989 could not have begun more inauspiciously. After all, the key date that kick-started the revolutionary events in Eastern Europe that summer – the June 4th Polish elections, which saw Solidarity candidates swept into office – took place on the very day of the fateful crackdown at Tiananmen Square in Beijing. Polish reformers were well aware of what happened across the world that morning, and many feared that the Polish regime would follow suit. And even if the Polish elections were ultimately allowed to stand, there was much talk among the ruling elites across the East Bloc that summer (especially in East Germany) about the need for a “Chinese solution” to domestic discontent.

Instead, 1989 witnessed a series of “velvet revolutions” across the Eastern Bloc. Countless commentators predictably trumpeted the fall of communism in Eastern Europe as liberalism triumphant. The European Union and NATO were expanded eastward, as the gospel of free trade and security emerged as a new mantra of peace and prosperity. To the surprise of many observers, “third way” Scandinavian-style alternatives found little electoral support in the 1990s, as social democrats fared badly at the polls. Most ex-communist countries opted for Christian Democrat Centre-Right candidates, and several even slid in more authoritarian directions. One of the effects of 1989 in Europe was the fracturing of the map of Central Europe into smaller states, often along ethnic lines. While this was a peaceful process in some places, such as with Czechoslovakia’s ‘Velvet Divorce’ in 1993, in other places it was not. The violent dissolution of Yugoslavia brought the return of ethnic cleansing to Europe for the first time since World War II. The difficulty of transplanting liberal economic and political models in Eastern Europe, together with the survival and continued power of the communist-era elites, was duly noted by observers across the region and sometimes termed “velvet restoration.” For their part, East German leftists voiced misgivings about the sirens of “DM-nationalism” as a squandered opportunity to build socialism afresh. That many felt “Kohl-onized” by Bonn as second-class citizens drove home the widespread disenchantment with die Wende. Such sentiment was echoed by others across the continent (particularly in Southern Europe) a generation later in the wake of the 2007-2008 financial crisis, as East German apprehension and anger about being taken over by the Federal Republic had become Europeanized in the last few years. Interpreting 1989 as simply the Cold War victory of Western liberalism thus looks much cruder and less convincing in hindsight. In this way, 1989 may have marked the defeat of socialism, but it inadvertently exposed the limits of liberalism as well.
As for the wider contemporary significance, we need to bear in mind that Russia and China have formulated their own responses to the ‘spirit of ’89’. 1989’s unique legacy to world politics – the prospect of ‘peaceful revolution’ and the mass mobilization of civil society against powerful states – is an object lesson in civic democracy, and participants in the Arab Spring explicitly looked to this model in its early phases. Nonetheless, the violent ‘blowback’ of military authoritarianism in many of those toppled regimes suggests that the Russian and Chinese response to People Power drew a different lesson from 1989. The point is that the legacy of 1989 contains both elements, as seen in the dramatically divergent outcomes of 4 June 1989 in Poland and China. Which aspect of June 4th will be Hong Kong’s fate, for example, is impossible to say at the moment, but one thing is clear: the urgent plea for democratization that was 1989’s call to action across the Bloc is as relevant as ever in all corners of the globe.
Headline image: Fall of the Berlin Wall 1989, people walking by Raphaël Thiémard. CC BY-SA 2.0 via Wikimedia Commons.










Monthly etymology gleanings for October 2014, Part 2
Brown study
As I mentioned last time, one of our correspondents asked me whether anything is known about this idiom. My database has very little on brown study, but I may refer to an editorial comment from the indispensable Notes and Queries (1862, 3rd Series/I, p. 190). The writer brings brown study in connection with French humeur brune, literally “brown humor, or disposition,” said about a somber or melancholy temperament. “It is to be borne in mind that in French the substantive brune signifies nightfall, the gloomy time of day; sur la brune ‘towards evening’; and also that in English brown (adjective) is employed poetically in the sense of gloomy, ‘a browner horror.’ ([Alexander] Pope, [Charles] Cotton.) It is remarkable how the colours are used to express various phases of human character and temperament. Thus we have not only ‘brown study’, but ‘blue melancholy’, ‘green and yellow melancholy’, ‘blue devils’ and ‘blues’, ‘yellow stockings’ (jealousy), ‘red hand’ (Walter Scott), and ‘white feather’, &c.” Not all such phrases belong together, but, considering how late brown study appeared in English, the French origin of this idiom is probable. I may add that a great miracle is called “blue wonder” (blaues Wunder) in German.
Bold etymologies
A correspondent from California sent me a short article containing a new derivation of the much-discussed word Viking. He traced it to Estonian vihk “sheaf.” I don’t think this suggestion is convincing. In etymology, as in any reconstruction, only probable, rather than possible, approaches have value. If multiple hypotheses vie for recognition, it is the duty of every next researcher to show that the previous conjectures are wrong or less persuasive than the new one. Our correspondent has consulted a single reference work, the Online Etymology Dictionary. He says: “It is beyond the scope of this paper to discuss the merits of the proposed etymologies.” It can never be beyond the scope of a paper on word origins to discuss the work of the predecessors.
The Vikings were Scandinavians and presumably used a native word for naming themselves. In the paper, we are reminded that the Old English Widsith “Wide-Traveler,” who says that he visited countless tribes, also spent some time with Wicingas. The Vikings were not a tribe and could not be visited (Widsith has been discussed more than once, but even a look at Kemp Malone’s 1936 book Widsith would have made the situation clear). The rest of the paper (two short paragraphs) deals with the legendary Sceaf of Old English fame, the Hungarian word for “sheaf,” and the place name Kiev. No references to Old Icelandic, Hungarian, Estonian, and Russian etymological dictionaries are given. The origin of Kiev is a famous crux, but it may be worthy of note that many towns, not only the capital of Ukraine, are called Kiev. Considering my negative attitude toward the proposed etymology, I decided not to mention the URL for the paper, but, if our correspondent is interested in a broader discussion of his idea, I would recommend that he post a short summary of his work as a comment, the more so as it can already be found online.

Another correspondent (this time from Canada) wonders whether the original meaning of the word Viking could be “bastard.” He writes: “…if wick in Eastwick meant ‘village’ (or ‘settlement’)…then the vik in Reykjavik must have meant ‘village’…as well. So, since the vik in Viking refers to a village (or a settlement)…does it not seem logical that when the first Norse German decided to go a-pillaging in the neighbouring fjord, that he was called a son of the village to his face by his own countrymen?” In addition to inspiration, etymology needs professional knowledge. The Icelandic word is vík “bay,” not vik (and it is Reykjavík, not Reykjavik). The stress mark designates vowel length, so that vik and vík are different words. Reykjavík means “bay of mists or fogs” (literally “of smokes”). There have never been villages in Iceland (only farms), and the Vikings never attacked their neighbors: they were sea pirates and pillaged abroad. My advice stays: don’t let etymological ideas run away with you before you have mastered the relevant material and the method of reconstruction.
Aroint
Our correspondent and I exchanged emails on the word that interested him, so that the question seems to have been answered to the satisfaction of both parties. Some time ago, I wrote a post on the enigmatic exclamation aroint thee, witch (Macbeth; aroint thee also occurs in King Lear) and supported an old guess that aroint thee had developed from the apotropaic phrase a rowan to thee, said to scare away witches. The correspondent’s question was whether “German areus” could be related to Engl. aroint. Since there is no such German word, I asked him to give me more information. It turned out that he had heard what may have been a Yiddish word meaning “away with you.” Obviously, we are dealing with some variant or cognate of German heraus, and aroint has nothing to do with it. Let me add that all obvious etymologies were offered long ago. There is little hope to find an overlooked connection that is for everyone to see.

Was there a Proto-Indo-European word for “pear”?
There was no such word. Even the example of apple has less weight than it seems, as a never-ending stream of articles on the etymology of this word shows. The names of fruits are among the most often borrowed words in the vocabulary. In English, nearly all the names of vegetables and probably all the names of the main fruits, from potato to pear, are borrowed nouns. Leek is native—a rare exception. Latin pirum, the source of pear, is of unknown origin (from Semitic?). The Greek for “pear” is ápion or ápios, and it may perhaps be related to pirum, which leaves us in the dark about the word’s ultimate origin. Russian grusha (with similar cognates elsewhere in Slavic) also seems to be a borrowing.
Why awful but awesome?
I am not sure I can answer this question. Judg(e)ment and acknowledg(e)ment are different cases. Aweful has been recorded as a spelling variant of awful (see the OED) but later lost its e, perhaps because, however you spell it, there can be no misunderstanding, unlike what might have happened in baneful, wasteful, and their likes. Yet rueful has e, though, with regard to pronunciation, ruful would have been unambiguous. Awesome is not a recent word, as the OED shows, but it occurred rarely, and one gets the impression that, when the modern overused Americanism awesome (from the West Coast?) suddenly popped up (was it in the mid or early seventies?) and conquered the English-speaking world, it was coined anew. Instead of “awe-inspiring,” it began to mean “superb” and soon turned into an inane epithet, along with its present-day competitors great and cool. If my guess has any value, then the neologism was bound to acquire the form awe + some. The real mystery is how and why it came into existence. “Origin unknown,” I believe.
Image credits: (1) Reykjavik panorama. View from the top of Hallgrímskirkja. photo by ccho. CC BY-NC-ND 2.0 via ccho Flickr. (2) Pear. Photo by Robert S. Donovan. CC BY 2.0 via booleansplit Flickr.










