Oxford University Press's Blog
June 12, 2017
Suspected ‘fake results’ in science
There is a concern that too many scientific studies fail to be replicated as often as expected, which means that a high proportion of their results are suspected of being invalid. The blame is often put on confusion surrounding the ‘P value’, which is used to assess the effect of chance on scientific observations. A ‘P value’ is calculated by first assuming that the ‘true result’ is disappointing (e.g. that the outcome of giving a treatment and a placebo would be exactly the same, based on an ideally large number of patients). This disappointing true result is called the ‘null hypothesis’. A ‘P value’ of 0.025 means that if the ‘null hypothesis’ were true, there would be only a 2.5% chance of getting the observed difference between treatment and placebo, or an even greater difference, in an actual study based on a smaller number of patients. This clumsy concept does not tell us the probability of getting a ‘true’ difference in an idealized study, based on the result of a real study.
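To make this concrete, here is a minimal sketch (the trial figures, standard error, and variable names are invented for illustration, not taken from any real study) of how a one-sided P value is obtained from a treatment-versus-placebo difference using the ‘normal’ or Gaussian model:

```python
from math import erfc, sqrt

# Hypothetical treatment-vs-placebo difference, judged against the null
# hypothesis of "no difference".  All numbers are illustrative assumptions.
observed_difference = 4.0   # e.g. extra mm Hg fall in blood pressure vs placebo
standard_error = 2.04       # standard error of that observed difference

z = observed_difference / standard_error

# One-sided P value: if the null hypothesis were true, the chance of seeing a
# difference at least this large in the treatment's favour.
p_value = 0.5 * erfc(z / sqrt(2))

print(round(z, 2), round(p_value, 3))   # z of about 1.96 gives P of about 0.025
```

A P value of about 0.025 here says nothing, by itself, about how probable the treatment’s superiority is; that is the gap the rest of the argument tries to fill.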
Because it is based on a random sampling model, a ‘P value’ implies that the probability of a treatment being truly better in a large idealized study is very near to ‘1 – P’, provided that the P value is calculated by using the ‘normal’ or Gaussian distribution, that the study is described accurately enough for someone else to repeat it in exactly the same way, that the study is performed with no hidden biases, and that there are no other study results that contradict it. It should also be borne in mind that ‘truly better’ in this context includes differences only just greater than ‘no difference’, so that ‘truly better’ may not necessarily mean a big difference. However, if the above conditions of accuracy and so on are not met, then the probability of the treatment being truly better than placebo in an idealized study will be lower (i.e. it will range from an upper limit of ‘1 – P’ [e.g. 1 – 0.025 = 0.975] down to zero). This is so because the possible outcomes of a very large number of random samples are always equally probable, this being a special property of the random sampling process. I will explain.

Figure 1 represents a large population divided into two mutually exclusive subgroups. One contains people with ‘appendicitis’, numbering 80M + 20M = 100M; the other contains people with ‘no appendicitis’, numbering 120M + 180M = 300M. Now, say that a single computer file contains all the records of only one of these groups and we have to guess which group it holds. To help us, we are told that 80M/(80M+20M) = 80% of those with appendicitis have RLQ (right lower quadrant) pain and that 120M/(120M+180M) = 40% of those without appendicitis have RLQ pain, as shown in figure 1. To find out which group’s records are in the computer file, we could perform an ‘idealised’ study. This would involve selecting an individual patient’s record at random from the unknown group and looking to see whether that person had RLQ pain or not. If the person had RLQ pain, we could write ‘RLQ pain’ on a card and put it into a box. We could repeat this process an ideally large number (N) of times (e.g. thousands).
If we had been selecting from the group of people with appendicitis, then we would get the result in Box A, where 80N/100N = 80% of the cards had ‘RLQ pain’ written on them. However, if we had been selecting from people without appendicitis, we would get the result in Box B, with 120N/300N = 40% of the cards bearing ‘RLQ pain’. We would then be able to tell immediately from which group of people we had been selecting. Note that random sampling only ‘sees’ the proportion with RLQ pain in each group (i.e. either 80% or 40%). It is immaterial that the size of the group of people in figure 1 with appendicitis (100M) is different from that of the group without appendicitis (300M).
The current confusion about ‘P values’ arises because this fact is overlooked and it is wrongly assumed that a difference in the sizes of the source populations affects the sampling process. A scientist would be interested in the possible long-term outcome of an idealised study (in this case the possible contents of the two boxes A and B), not in the various proportions in the unknown source population.
Making a large number (‘N’) of random selections would represent an idealized study. In practice we cannot do such idealized studies but have to make do with a smaller number of observations. In effect, we have to try to predict from which of the possible boxes of N cards, each representing an ideal study outcome, a smaller sample was drawn. If we selected 24 cards at random from the box of cards drawn from the computer file containing details of the unknown population and found that 15 by chance had ‘RLQ pain’, we can work out the probability (from the binomial distribution, e.g. with n = 24, r = 15 and p = 0.8) of getting exactly 15/24 from each possible box, A and B. From Box A it would be 0.023554 and from Box B it would be 0.0141483. The proportions in Boxes A and B are not affected by the numbers with and without appendicitis in the source population, so the two boxes were equally probable before the random selections were made. This allows us to work out the probability that the computer file contained records of patients with appendicitis by dividing 0.023554 by (0.023554 + 0.0141483) = 0.6247. The probability of the computer file containing the ‘no appendicitis’ group would thus be 1 – 0.6247 = 0.3753.
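As a check on the arithmetic, here is a minimal sketch (mine, not part of the original post) that reproduces the two binomial probabilities and the resulting probabilities for the boxes, assuming, as in the text, that Boxes A and B were equally probable beforehand:

```python
from math import comb

def binomial_pmf(r, n, p):
    """Probability of exactly r 'RLQ pain' cards in n random selections."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

like_a = binomial_pmf(15, 24, 0.8)   # Box A (80% with RLQ pain): ~0.023554
like_b = binomial_pmf(15, 24, 0.4)   # Box B (40% with RLQ pain): ~0.014148

# The boxes are equally probable before sampling, so the probability that the
# file holds the appendicitis group is Box A's share of the total.
p_box_a = like_a / (like_a + like_b)  # ~0.6247
p_box_b = 1 - p_box_a                 # ~0.3753

print(round(like_a, 6), round(like_b, 6), round(p_box_a, 4), round(p_box_b, 4))
```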
It does not matter how many possible idealized study results we have to consider; they will always be equally probable. This is because each possible idealized random selection study result is not affected by differences in sizes of the source populations. So, if a ‘P value’ is 0.025 based on a ‘normal’ or Gaussian distribution, the probability of a treatment being better than placebo will be 1 – P = 0.975, or less if there are inaccuracies, biases, or other very similar studies that give contrary results, etc. These factors will have to be taken into account in most cases.
Featured image credit: Edited STATS1_P-VALUE originally by fickleandfreckled. CC BY 2.0 via Flickr.

Paul A. Samuelson and the evolution of modern economics
For thirty years after the Second World War, the teaching of introductory economics in the US was dominated by a single textbook, initially titled Economics: An Introductory Analysis, later shortened to just Economics. When the first edition appeared in 1948, its author, Paul Samuelson, was only 33 years old. As far as most other young economists were concerned, the book provided an account of what had rapidly become the accepted way of thinking about problems of unemployment. If there was high unemployment, government could and should increase the level of spending to create new jobs; the money spent by those in government jobs would employ more people in private sector jobs, creating an additional stimulus to employment. These economists well understood that too much government spending could be inflationary, but in times of high unemployment, modest government deficits were a good thing. The traditional conservative view that deficits were always bad was completely unfounded.
Samuelson’s claim that, in times of unemployment, government had a responsibility to take action—that American capitalism could work properly only if government played a role—was a red rag to conservatives. His book, of which preliminary drafts were circulated and tried out on students at MIT, was attacked even before it was published. The notion that government needed to stabilize a capitalist economy, and that this stabilization might sometimes require modest government deficits was, so his critics claimed, tantamount to communism, a charge that came to have even greater political resonance in the McCarthy era. Yet, though Samuelson paid careful attention to his wording as he revised the book, he did not compromise: he knew that even if businessmen and many in the Republican administration disagreed, he had the support of most economists of his generation.
Where did Samuelson get these ideas and how did he get into the position of eminence that he had clearly achieved by 1948? He had been a student at Harvard of the great Austrian theorists of the business cycle, including Joseph Schumpeter and Gottfried Haberler. However, his initial reputation was based on his ability to recast economic theory in mathematics. His PhD thesis was an exercise in abstract economic theory demonstrating the value of using mathematics. More than that, it explained the importance of “operationalizing” economic theory—the importance of generating predictions that could be tested against data.

Initially Samuelson’s views were conservative, insofar as he could be conservative and support Franklin Roosevelt. But whatever his politics, there is no evidence that he took any interest in the policy questions that were later to fascinate him, and which were to be a major theme of his textbook. He changed because of two events. The first was the arrival at Harvard of Alvin Hansen, one of America’s leading business cycle theorists. Samuelson clearly saw in Hansen’s theories of the business cycle an opportunity to make use of his knowledge of mathematics, solving a problem that Hansen, who was no mathematician, could not solve; but he also became very close to Hansen, developing a respect for his work. Samuelson eventually made his debut in journalism in 1945, with articles endorsing Hansen’s internationalist political philosophy – that the United States had a responsibility to provide economic support to the devastated economies of Europe.
The second event was the Second World War. Even before Pearl Harbor, the United States government had started planning for the “post-emergency” situation, fearful that, as after most previous wars, there would be an economic slump. Anxious to avoid being classified as unfit for military service, Samuelson threw himself into government work, first for the National Resources Planning Board and, when that was disbanded in 1943, for the War Production Board. Commuting fortnightly from Cambridge, Massachusetts, where he continued to teach in an MIT geared up to training military officers, he became involved in the community of young economists scattered through government agencies in Washington. He engaged in the debates through which the theory of stabilization policy—arguably the key issue when the government deficit rose to twenty percent of national income—was worked out.
He and his younger colleagues learned that the old myths, that the government should never run a deficit, did not hold. Government could and should take responsibility for the level of spending and hence the level of unemployment. American capitalism needed government action for it to work. This was the social philosophy that helped to make the textbook a bestseller and incurred the ire of conservative critics.
Featured image credit: money coins finance cash by Tookapic. Public Domain via Pexels.

Assessing the historical and imperial turn in International Law
Unlike international relations, international law has a long-standing tradition of teaching and research that connects history and theory. International relations scholars are not especially trained to think contextually: international relations theory tends to be taught and assimilated detached from historical concerns, and it is very hard to find courses on the history and theory of international relations. International law scholars, by contrast, have forged at least a curricular tradition of teaching the history and theory of international law and of conceiving of these two dimensions, if not as mutually inter-linked, at least as related to one another.
In recent years, a new wave of innovative scholarship exploring the historical trajectory of international law and its complicity with colonial and imperial endeavors has emerged. This transformation has often been regarded as a “historical turn” and even an “imperial turn.” Yet most of these pioneering contributions were written by scholars also strongly attached to the theory of international law, such as Martti Koskenniemi, Antony Anghie, and Anne Orford, among others. It is therefore timely and worthwhile to assess the implications of this transformation for theoretical innovation in legal theory and for interdisciplinary research, in order to create new grounds for a comparative and fruitful dialogue between history and theory.
One way of assimilating the contributions of this new body of scholarship is to see it as part of a broader shift of focus in international law and in international legal and political theory, from normative theories of global justice to empirical studies of how historical situations of global injustice have been legitimized. More specifically, thanks to these recent studies exploring the colonial origins of international law, the complicity of certain liberal internationalist legal ideologies with imperialism, and the civilizing mission of international law, among other themes, scholars in the field have begun to learn new lessons that demand new frameworks and new ways of coming to terms with global justice through a much greater awareness of historical situations of global injustice. Learning these lessons might be disappointing and could lead one to lose faith in the potential of international law to produce global justice, generating profound skepticism. Nevertheless, it helps to unveil the strong limits of international law in transforming situations of global injustice, as well as the concrete and rigid structures of the international legal order.
These studies have generated a new skeptical legal and political sensibility that is historically more aware of global injustices. Concrete global injustices, rather than the production of normative frameworks for theories of global justice, therefore seem to be the new point of departure for formulating international legal theory today.
A final lesson with theoretical implications that stems from this so-called historical turn is that international law is a very flexible and elastic language. Comparative and interdisciplinary research that touches on the connections between history and theory, and that openly examines international law as a flexible and complex language deeply embedded in political concerns and anxieties, is therefore essential. The historical turn has also shown that international law has been highly politicized, despite the fact that many international lawyers tend to detach international law from power politics. Although such studies have shown that international law has generally been deployed to legitimize imperial and hegemonic projects, it has also been, and continues to be, deployed as a legal tool to limit, resist, and confront imperial policies. Indeed, it is necessary to enhance our understanding of the histories and theories related to the deployment of international law for anti-imperialist purposes.
Finally, these two dimensions, imperialism and anti-imperialism, should be explored comparatively as interconnected in order to assess historically and conceptually the potentialities and limits of international law for dismantling situations of global injustice and generating mechanisms for power balancing. For instance, international law has been especially flexible in the Americas, since it was invoked in complicity with imperial projects and at the same time with the aim of putting forward anti-imperialist and anti-interventionist legal aspirations, especially regarding US military and unilateral interventions in Latin America in the early twentieth century.
In order to take these comparative lessons seriously for the making of legal theory, it is also worth forging a much more consistent dialogue between history and theory.
Featured image credit: “Administration” by Pexels. CC0 Public Domain via Pixabay.

June 11, 2017
The news media: are you an expert? [Quiz]
The news media has long shaped the way that we see the world. But with the rise of social media and citizen journalism, it can be difficult to determine which stories are fake news and which are simply the product of the evolving media.
Inspired by The Death of Expertise, in which Tom Nichols explores the dangers of the public rejection of expertise, we’ve created a series of quizzes to test your knowledge. Take this quiz to see how much you know about the news media. Then watch the video below to see how OUP employees fared against news media expert C. W. Anderson.
Featured image credit: “fake-news-media-disinformation” by Wokandapix. CC0 public domain via Pixabay.

How does climate change impact global peace and security?
Climate change is one of the most pervasive global threats to peace and security in the 21st century. But how many people would list this as a key factor in international relations and domestic welfare? In reality, climate change touches all areas of security, peace building, and development. The impacts of climate change are already adversely affecting vulnerable communities, as well as stretching the capacities of societies and governments.
Climate change is best understood as a ‘threat multiplier’, i.e. something that interacts with existing pressures (such as social conflict, economic inequality, large-scale migration, or competition for resources) and further compounds these issues—increasing the likelihood of instability or violent conflict. Using facts and analysis from the SIPRI Yearbook 2016, we’ve taken a look at 7 of the most important ‘compound factors’ that climate change can influence….
1. Local resource competition
The impacts of climate change directly affect the availability of, the quality of, and access to natural resources, particularly water, arable land, forests, and extractive resources. Growing competition when supply cannot meet demand can lead to instability and even violent conflict where there are no adequate management institutions or dispute resolution mechanisms in place. In the worst case, natural resource competition can contribute to regional instability or civil conflicts. For example, land disputes were a major driver of 27 of the 30 civil conflicts in Africa between 1990 and 2009.
2. Livelihood insecurity and migration
Climate change increases the human insecurity of people dependent on natural resources for their livelihoods. Rising human insecurity can induce them to migrate or seek out alternative, illegal sources of income, which in turn can also drive conflict. Where there is also resource scarcity in the alternative location or job sector, there is an increased risk of conflict between the newcomers and those who were there first. For example, in northern Kenya, many nomadic pastoralists have turned to fishing on Lake Turkana as recurring drought has reduced the viability of maintaining cattle herds, leading to lethal conflicts between rival Kenyan tribes and with Ethiopian fisherfolk on the other side of the lake.
3. Extreme weather events and disasters
How a government reacts to and prepares for natural disasters can increase or mitigate the risk of conflict following such an event. In the worst case, government action after a disaster can create grievances and increase the risk of conflict, while in the best case government action can be a springboard to build peace and increase resilience. Disasters put additional strain on already weak government systems, disrupt economic activity, displace communities and often require a large-scale humanitarian response which a weak state is less able to manage.

4. Volatile food prices and provision
Climate change, in conjunction with other factors such as population growth, rising energy prices, and the rapid advance of biofuel production from crops, has heightened the volatility of food supplies and prices around the world. While higher food prices do not always lead to violent conflict, sudden food price hikes are a major driver of civil unrest and protest. High unemployment, as well as social and economic marginalization also contribute to this political instability – with food price riots often used as a political tool to demonstrate people’s discontent. In 2008 a global food crisis saw riots in response to food and fuel inflation across 48 countries, most notably including Bangladesh, Burkina Faso, Haiti, and Pakistan.
5. Trans-boundary water management
Shared water resources are often a source of cross-border tension. As the impacts of climate change affect the supply and quality of water, and at the same time the demand for water continues to grow, competition over water is likely to increase pressure on existing water-sharing agreements and governance structures. There have been no occurrences of wars fought over water to date, but as water supply becomes less certain and demand grows, climate change could compound the risks.
6. Sea-level rise and coastal degradation
Rising sea levels threaten the viability of lives and livelihoods in low-lying areas. More frequent flooding and the risk of loss of territory to the sea increase the prevalence of displacement, migration, and social unrest. Particularly at risk are the small island states, which face the loss of their entire territory, and cities built on river deltas and coasts, such as Karachi in Pakistan and Lagos in Nigeria, where flooding and storm surges will have a major impact on economic development and large, highly concentrated populations. Territorial loss may increase migration, which in turn can increase competition for resources—in some cases, this causes heightened tensions between migrants and host communities, increasing the risks of conflict.
7. The unintended effects of climate adaptation and mitigation policies
In an already fragile context, policies designed to help vulnerable communities adapt to climate change can increase fragility risks if they fail to consider the wider economic, political, and social impacts—particularly any knock-on consequences they may have on access to resources, food security, and livelihoods. Efforts to cut carbon emissions through shifts to green technologies and renewable energy could also pose a risk of conflict as these will create new power dynamics within highly politically sensitive energy sectors.
Featured Image Credit: ‘Desert, Drought, Landscape’ by cocoparisienne. Public Domain via Pixabay.

Loving and before
This year marks the 50th anniversary of the Supreme Court case that ruled prohibitions on interracial marriages unconstitutional. The decision and the brave couple, Richard and Mildred Loving, who challenged the Virginia statute denying their union because he was deemed a white man and she, a black woman, deserve celebration. The couple had grown up together in a small rural town where racial tensions and segregation persisted, but had been softened by familiarity. As adults, Richard and Mildred fell in love and chose to formalize their relationship. They took a trip to nearby Washington, DC, where they secured a marriage license. However, soon after they returned to Virginia (one of 16 states, mostly in the American South, that still held firm to their anti-miscegenation statutes), an overly enthusiastic sheriff barged into the couple’s bedroom in the middle of the night and arrested them. After an uncomfortable stay in jail—a then-pregnant Mildred was detained longer than Richard—the couple were released. Ordered to depart the state for 25 years, the Lovings reluctantly relocated to Washington, DC, where they would raise their children. Mildred, in particular, regretted city life and wished to return to rural Virginia. The arguments and energy of the era’s black freedom struggle, and their faith in the rightness of their course, as much as their longing for home and family, persuaded the couple to seek the legal support of the ACLU and file a lawsuit. In 1967, a unanimous Court ruled in favor of the Lovings, determining that anti-miscegenation statutes violated the equal protection clause of the 14th Amendment.
The victory marked the end of anti-miscegenation statutes that had proliferated and persisted in the United States because Americans regularly romanced across color lines and those who depended upon those lines to protect their authority worked feverishly to reinforce them wherever and whenever possible. Within their respective North American empires, the Spanish and French had selectively discouraged certain types of marriages, but it was the British who most disdained and would actively restrict interracial marriages. This antipathy reflected both the settler nature of the British colonies—the British sought to populate the land with families who would clear forests, build farms, and facilitate trade—and a desire to protect racial slavery. The colony of Maryland debuted the first anti-miscegenation statute in 1684, banning marriages between free English women (presumed to be white) and black slaves. Seven years later, neighboring Virginia passed a more punitive and comprehensive limitation, criminalizing marriages between white women or men and blacks, mulattos, and Indians. Violators would suffer banishment or removal from the dominion. All southern and many northern colonies, including Pennsylvania and Massachusetts, would follow suit. Soon after the American Revolution, the movement to abolish slavery would prompt northern states to repeal anti-miscegenation statutes, but southern states, who worried about the collapse of segregation and white supremacy after the Civil War, recommitted to them.
As the United States added western territory by force and negotiation through the 19th century, Americans and anti-miscegenation statutes moved west, too. Refusing to legally recognize love between races proved an integral part of confirming the American conquest and incorporating new land. Thus, western states not only prevented white Americans from marrying African Americans and Native Americans, as had their counterparts in the Midwest and on the East Coast, but also those of Chinese, Japanese, Korean, and Filipino descent. The maturation and popularization of pseudo-scientific ideas about racial divisions within the human population at the end of the 19th and beginning of the 20th centuries helped justify this violation of civil rights.
However, American couples regularly defied or circumvented the laws. Indeed, the triumph of the Lovings built upon the struggles of many other interracial couples who similarly formed intimate partnerships and defied the idea that individuals could be classified and divided by something as capricious as race. Among the most prominent of these forerunners were Andrea Perez and Sylvester Davis. This pair had met in 1940s Los Angeles. Sylvester, the son of black migrants from the South, and Andrea, the daughter of Mexican immigrants, took an immediate liking to one another. Unfortunately, as Andrea remembered, her father did not share her affections for Sylvester, worrying about the social and economic consequences of his daughter dating a black man in the United States. Although separated during World War II (Andrea worked at a local shipbuilding operation and Sylvester served in the United States Army), the couple rekindled their relationship and decided to marry after the war’s end. Soon after their marriage ceremony in a Catholic church which honored their union, they asked for the legal help of Daniel Marshall, a leader of the Catholic Interracial Council. Like other individuals of Mexican descent, Andrea enjoyed the legal status of white, if not always the lived privileges of being white, and was thus in violation of California’s ban against marriages between whites and African or Asian Americans. Changing ideas about race, accelerated by the era’s democratic rhetoric and the discovery of Nazi atrocities, likely shaped the judge’s interpretation. In 1948, California became the first state whose highest court struck down an anti-miscegenation statute.
Without the particular love story of and battle waged by Richard and Mildred, a formal barrier to equality would have stood longer. Yet, we should also remember that the right to marital freedom was asserted by so many Americans. These men and women chose to love whom they loved despite state restrictions and thus became unexpected agitators in the long contest for equal rights in the United States.
Featured Image credit: US Supreme Court building in 2011. Picture by Architect of the Capitol, Public Domain via Wikimedia Commons.

What is to be done with Harriet Martineau?
“She says nothing that is not obvious,” claimed Alice Meynell of Harriet Martineau (1802-76), “and nothing that is not peevishly and intentionally misunderstood.” (Pall Mall Gazette, 11 October 1895). If this seemed the case in 1895, how does her reputation stand in the twenty-first century, given that so much of her writing and campaigning was tied to passing causes and controversies of the time? These included issues such as economic conditions of the 1830s, American slavery, British rule in India, the fire hazards of crinolines, and the Great Exhibition of 1851.
The Victorians couldn’t make up their minds about Martineau. Alternately hailed as a celebrity and vilified as an unfeminine woman who had stepped out of her proper sphere, she lived in a state of perpetual argument and debate. Even the last twenty years of her life, spent peacefully in the Lake District, brought their own household dramas, including the tragic death (from typhoid) of her devoted niece and companion (“a glorious niece of mine,—my unsurpassable nurse”), Maria Martineau (1827-64). Fully expecting to die of the heart disease misdiagnosed in 1855, and a frequent victim of sinking fits and other physical ills, Harriet Martineau proved unexpectedly resilient, firing off trenchant ‘leaders’ for the London Daily News (until 1866) and articles on tough political issues for top periodicals such as the Edinburgh, Westminster and Quarterly Reviews.
Martineau even wrote a lengthy autobiography (begun in 1855, two decades before her death) which ended with a panoramic survey of the human condition she thought she was leaving. A self-critical obituary was also provided, with the date of death left blank to be filled in when the time came, to “appear as soon as possible after I am gone.” Arguably, it was the longest farewell in nineteenth-century literary history.

The question of what specifically she should be remembered for continues to exercise critics and historians. Martineau made herself an expert in one disciplinary field after another, beginning with economics. She first shot to fame with her twenty-five short tales, Illustrations of Political Economy (1832-4), which made her the darling of the London drawing-rooms, but also the butt of hostile critics who were appalled at the idea of a woman presuming to educate her readers in the niceties of economics, and even what was euphemistically called “the preventive check.”
Following a two-year tour of the United States she became an expert on the anti-slavery cause, and she subsequently educated herself on issues as varied as mesmerism, industrial processes, post-Crimean nursing reforms, the roles of servants, the health of governesses, the condition of Ireland, and travel and religion in the Middle East. At the same time she was experimenting with a range of literary genres, including children’s stories and domestic realism in fiction. Critics today claim her as “the first woman sociologist,” an early proto-feminist, and a pathbreaking autobiographer who, unlike many of her female contemporaries, presented her life as a series of intellectual advancements culminating in the abandonment of her Unitarian religion for a version of agnosticism.
It is perhaps as a journalist, however, that Martineau best deserves to be remembered. Her subjects may have been ephemeral and of the moment, but her ‘voice’ and what she stood for were distinctive. Her key phrase as a journalist is “What is to be done?” which reverberates as a rallying cry through many of her books and articles. She asked it of endowed schools in Ireland, and of unsold land in America; of the challenges of observing foreign customs, rehabilitating criminals, and even eating corn on the cob.
Typically her most trenchant pieces, written in a homely style, appeal to the responsible reader as someone who can assist in finding a solution. Her letters, in which she rehearsed many of her arguments, suggest that Martineau hugely enjoyed her work, and was amused by the eagerness of editors to solicit her contributions. When, as a middle-aged woman, she told a friend “I have an all important review to write,” she said it with undiminished pride and excitement. The sheer exhilaration of always having something purposeful to say and do was perhaps the most optimistic message Martineau delivered about herself, about women, and about humankind in general. As the author of an early tale called French Wines and Politics (1833), she would certainly have had something to say about modern political events, especially the recent Brexit: most probably “What is to be done?”
Featured Image Credit: ‘Lake District Photo’ by Stanleytheman, CC BY-SA 3.0 via Wikipedia.

June 10, 2017
Margaret Thatcher, Lego, and the Principle of Least Action
Imagine a toy city, seen from afar. Now imagine that some of the buildings have Lego-shaped castellations, others have Lego-shaped holes in the walls, and there are a few loose Lego bricks lying around. All this evidence leads us to guess that the whole toy city is made up of Lego bricks. When we get up close, we see that our guess is correct.
By a similar blend of evidence and theorizing, John Dalton, around 1800, came up with the Atomic Theory — the theory that says that matter in all its variety (whether grass, mud, cricket balls, etc.) is made from a finite number of fundamental tiny building blocks.
Even earlier, Isaac Newton, in 1687, postulated that everything in physics could be explained by particles acted upon by forces. In very complicated scenarios it might be necessary to have a near infinite number of particles. For example, when Newton calculated the shape of the Moon’s orbit around the Earth he imagined the Earth and the Moon subdivided into tiny volume-elements (‘particles’), then determined the gravitational attraction between a particle in the Moon and a particle in the body of the Earth, and then determined the net Earth-Moon attraction by summing over all such pairs. He had to be careful that he didn’t miss any pairs out, or count some pairs more than once. It was a difficult task: “the only problem which gave me a headache”. He found the agreement between his predicted orbit and the strength of gravity on Earth to “answer pretty nearly”. This is one of the most remarkable theoretical confirmations ever made.
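Newton’s ‘sum over every pair, counted exactly once’ can be illustrated with a toy calculation. The sketch below is purely illustrative (the coarse grid, the element counts, and the idea of doing this numerically are my assumptions, not Newton’s actual working): it chops the Earth and Moon into equal-mass volume elements, adds the gravitational pull between every Earth-element/Moon-element pair once, and recovers a net force close to the simple point-mass answer.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, N m^2 kg^-2

def body_as_particles(centre, radius, mass, n=6):
    """Chop a sphere into a crude cubic grid of equal-mass volume elements."""
    axis = np.linspace(-radius, radius, n)
    pts = np.array([[x, y, z] for x in axis for y in axis for z in axis
                    if x * x + y * y + z * z <= radius ** 2])
    return pts + centre, np.full(len(pts), mass / len(pts))

earth_pts, earth_m = body_as_particles(np.zeros(3), 6.4e6, 5.97e24)
moon_pts, moon_m = body_as_particles(np.array([3.84e8, 0.0, 0.0]), 1.7e6, 7.35e22)

# Net attraction: one term per (Earth element, Moon element) pair,
# with no pair missed and none counted twice.
force = np.zeros(3)
for p1, m1 in zip(earth_pts, earth_m):
    for p2, m2 in zip(moon_pts, moon_m):
        r = p2 - p1
        force += G * m1 * m2 * r / np.linalg.norm(r) ** 3

print(force)  # roughly [2.0e20, 0, 0] N, close to the point-mass value G*M*m/d^2
```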

These three examples (Lego, the Atomic Theory, and Newtonian Mechanics) lead us to wonder whether every problem, however complicated, may always be broken down into the interactions between elemental tiny components.
Finally, we come to Margaret Thatcher. One of her most famous quotations, in 1987, was “There is no such thing as society”. This quotation caused such a storm that her press office took the unusual step of offering an explanation. It seems that Mrs Thatcher had meant that society is abstract, not real, and so the forward march of progress must be due solely to the motivations and actions of individuals.
We now know that all these theories are wrong — wrong if they assume that the given problem can always be reduced to a sum over elemental parts. In physics, despite the enormous success of Newtonian Mechanics (viz., the calculation of the Moon’s orbit), it is an astonishing discovery to learn that rather few problems can be solved in this way. We have all heard the saying “the whole is more than the sum of its parts” but this is only the beginning of correcting the wrongness. A society is more than just individuals; it is made up of entities that do exist, are real, and are influential (for example, the pub where my talk is held, the famous old university nearby, and so on).
In physics, it is the Principle of Least Action that teaches us a radically different approach. It shows us that it is not just a case of new properties emerging when all the parts are considered together; rather, it is the realization that (in all but a few exceptionally simple cases) the system cannot be considered as a collection of simple elemental components. Thus, instead of particles, we might have to have as our fundamental elements: lever arm, pendulum, capacitor, electromagnet, flexing beam, suspension bridge, planets, black hole, binary star, hydrogen atom, a flowing river, a spinning top, and so on. The components are neither simple, nor universal, but are system-dependent, that is they have to be re-formulated for each new scenario.
We have lost elemental simplicity — what have we gained in its place? What we gain is a universal Principle instead of a universal particle. The telling ingredients are not particles and forces, they are energies. The energies come in two types: kinetic (the energy of motion) and potential (the energy of configuration). However, the Principle of Least Action leads to even greater insight: rather than kinetic and potential energies, the true dichotomy is between ‘individual component’ and ‘super-structure’ energies. Finally, the Principle postulates that the individual energies and the super-structure energies always act in opposition to each other (one increases at the expense of the other) yet, through time, Nature takes the path where the difference between them is as small as possible.
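In the standard textbook statement of the Principle (my summary of common usage, not a quotation from the book), the quantity that Nature keeps as small as possible is the accumulated difference between the kinetic energy T and the potential energy V along the path actually taken:

```latex
S[q] \;=\; \int_{t_1}^{t_2} \bigl( T - V \bigr)\, dt , \qquad \delta S = 0
```

Here S is the ‘action’, and the condition δS = 0 says that the true path makes S stationary (a minimum in the simplest cases). The author’s further point is that the deeper dichotomy is not between T and V as such, but between ‘individual component’ and ‘super-structure’ energies, which must be re-identified for each new system.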
How do we know that this more complicated, less intuitive Principle, is correct? For three reasons:
It answers to many more scenarios —not just the usual Newtonian, but also the relativistic and quantum-mechanical domains.
It works whether applied to a system that is stationary or moving (I dub it the ‘Harrison’s Chronometer’ of physics).
It gives us new physical insights —especially into the nature of energy, but also into the nature of space and time.
As Einstein said (paraphrased), the method must be as simple as possible, but not simpler.
Returning to Margaret Thatcher, maybe the Principle of Least Action is hinting that, while there will always be a tension between the needs of individuals and the needs of society, for a stable society, that tension must be the minimum possible.
Featured image credit: Atoms by PIRO4D. CC0 public domain via Pixabay.

What are the best ways to view a solar eclipse?
Millions of people will soon travel to a narrow strip in America to witness a rare event: a total solar eclipse. On 21 August, many will look up to the sky to witness this phenomenon – will you be one of them? In the following shortened excerpt from Totality: The Great American Eclipses of 2017 and 2024, learn what types of eyewear you should be using to watch the Sun disappear, when you can do away with eye protection completely, and other ways to best view this event.
You would never think of staring at the Sun without eye protection on an ordinary day. You know the disk of the Sun is dazzlingly bright, enough to permanently damage your eyes. Likewise, any time the disk of the Sun is visible—throughout the partial phase of an eclipse—you need proper eye protection. Even when the Sun is nearing total eclipse, when only a thin crescent of the Sun remains, the 1% of the Sun’s surface still visible is about 10,000 times brighter than the full moon.
Once the Sun is entirely eclipsed, however, its bright surface is hidden from view and it is completely safe to look directly at the totally eclipsed Sun without any filters. In fact, it is one of the greatest sights in nature. Here are ways to observe the partial phases of a solar eclipse without damaging your eyes.
Solar eclipse glasses
The most convenient way to watch the partial phases of an eclipse is with solar eclipse glasses. These devices consist of solar filters mounted in cardboard frames that can be worn like a pair of eyeglasses. If you normally wear prescription eyeglasses, you place the eclipse glasses right in front of them.
When you are using a filter, do not stare for long periods at the Sun. Look through the filter briefly and then look away. In this way, a tiny hole that you miss will not cause you any harm. You know from your ignorant childhood days that it is possible to glance at the Sun and immediately look away without damaging your eyes. Just remember that your eyes can be damaged without you feeling any pain.
Welder’s goggles
Another safe filter for looking directly at the Sun is welder’s goggles (or the filters for welder’s goggles) with a shade of 13 or 14. They are relatively inexpensive and can be purchased from a welding supply company. The down side is that they cost more than eclipse glasses and give the Sun an unnatural green cast.
The pinhole projection method
If you don’t have eclipse glasses or a welder’s filter, you can always make your own pinhole projector, which allows you to view a projected image of the Sun. There are fancy pinhole cameras you can make out of cardboard boxes, but a perfectly adequate (and portable) version can be made out of two thin but stiff pieces of white cardboard. Punch a small clean pinhole in one piece of cardboard and let the sunlight fall through that hole onto the second piece of cardboard, which serves as a screen, held behind it. An inverted image of the Sun is formed. To make the image larger, move the screen farther from the pinhole. To make the image brighter, move the screen closer to the pinhole. Do not make the pinhole wide or you will have only a shaft of sunlight rather than an image of the crescent Sun. Remember, a pinhole projector is used with your back to the Sun. The sunlight passes over your shoulder, through the pinhole, and forms an image on the cardboard screen behind it. Do not look through the pinhole at the Sun.
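A quick rule of thumb (standard pinhole-camera geometry, not from the excerpt): the Sun subtends roughly half a degree, so the projected image is about the pinhole-to-screen distance divided by a little over one hundred:

```latex
d_{\text{image}} \;\approx\; D \tan(0.53^{\circ}) \;\approx\; \frac{D}{108}
```

So a screen held half a metre behind the pinhole gives an image of the Sun, or of the crescent during the partial phases, roughly 4 to 5 mm across; doubling the distance doubles the image size, at the cost of brightness.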
Solar filters for cameras, binoculars, and telescopes
Many telescope companies provide special filters that are safe for viewing the Sun. Black polymer filters are economical but some observers prefer the more expensive metal-coated glass filters because they produce sharper images under high magnification.
Caution: Do not confuse these filters, which are designed to fit over the front of a camera lens or the aperture of a telescope, with a so-called solar eyepiece for a telescope. Solar eyepieces are still sometimes sold with small amateur telescopes. They are not safe because they absorb heat and tend to crack, allowing the sunlight concentrated by the telescope’s full aperture to enter your eye.
Eye suicide
Do not use standard or polaroid sunglasses to observe the partial phases of an eclipse. They are not solar filters. Standard and polaroid sunglasses cut down on glare and may afford some eye relief if you are outside on a bright day, but you would never think of using them to stare at the Sun. So you must not use sunglasses, even crossed polaroids, to look directly at the Sun during the partial phases of an eclipse.
Do not use smoked glass, medical X-ray film with images on it, photographic neutral-density filters, or polarizing filters. All these “filters” offer utterly inadequate eye protection for observing the Sun.
Observing with binoculars
Binoculars are excellent for observing total eclipses. Any size will do. Astronomy writer George Lovi’s favorite instrument for observing eclipses was 7 x 50 binoculars—magnification of seven times with 50-millimeter (2-inch) objective lenses. “Even the best photographs do not do justice to the detail and color of the Sun in eclipse,” Lovi said, “especially the very fine structure of the corona, with its exceedingly delicate contrasts that no camera can capture the way the eye can.” He felt that the people who did the best job of capturing the true appearance of the eclipsed Sun were the 19th century artists who photographed totality with their eyes and minds and developed their memories with paints on canvas.
For people who plan to use binoculars on an eclipse, Lovi cautioned common sense. Totality can and should be observed without a filter, whether with the eyes alone or with binoculars or telescopes. But the partial phases of the eclipse, right up through the diamond ring effect, must be observed with filters over the objective (front) lenses of the binoculars. Only when the diamond ring has faded is it safe to remove the filter. And it is crucial to return to filtered viewing as totality is ending and the western edge of the Moon’s silhouette brightens with the appearance of the second diamond ring. After all, binoculars are really two small telescopes mounted side by side. If observing a partially eclipsed Sun without a filter is quickly damaging to the unaided eyes, it is far quicker and even more damaging to look at even a sliver of the uneclipsed Sun with binoculars that lack a filter.
Featured image credit: “stars-night-forest-sky” by Unsplash. CC0 via Pexels.

June 9, 2017
Teaching medicine: how the great ones do it
“The greatest mistake in the treatment of diseases is that there are physicians for the body and physicians for the soul, although the two cannot be separated” – Plato
Attending physicians, the physicians who train interns and residents on hospital wards, have always borne a heavy responsibility. They are accountable for the level of medical care received by each succeeding generation of American patients. But today, these physician-teachers confront unprecedented obstacles. How well they meet the challenge may have long-term consequences for patients and for the medical profession as a whole.
This turning point in medical education was the inspiration for our in-depth study of 12 of the nation’s outstanding attending physicians. We observed them as they interacted with learners and patients, and interviewed them, as well as some of their past and present learners, to provide a glimpse of what the future of clinical education could look like.
The most important form of clinical training takes place at the patient’s bedside, yet attending physicians have less time to spend with learners on patient rounds. Due to the mandated reductions in the length of the learners’ workday and because hospitals are discharging patients sooner than ever before, there are fewer hours for learners to follow the care of any one patient. Meanwhile, learners continue to spend a great deal of their limited time behind a computer screen documenting care rather than administering care.
Attending physicians must also cope with a seismic change in the hospital environment. In days past, they personally provided or oversaw most of their patients’ hospital care. That is virtually impossible today. Attendings are now part of an interdependent team that encompasses not just learners but nurses, pharmacists, radiologists, and other specialists. Teamwork requires such personal qualities as empathy and communication skills, which were not particularly noticeable among attendings of previous generations. Those same qualities are in demand as hospitals have become more focused on satisfying patients (aka customers) rather than physicians; hospitals expect their physicians to view patients as partners in their care and to treat them with a full measure of respect.
Although the 12 attendings exhibited a variety of individual behaviors and techniques, we found that they shared a dedication to the following central propositions: the team environment should be supportive, and the teaching should be team-based and patient-centered.
A supportive environment
The 12 attendings set high standards for their medical team (typically a senior resident, two interns, and several medical students), but they were aware that performance anxiety is not conducive to learning. They created an atmosphere that was cooperative and trusting, rather than competitive.
To achieve that goal, the attendings established personal connections with individual team members, exchanging life experiences and jokes. The attendings emphasized that they themselves were students, always learning, and urged team members to challenge their findings when there was a disagreement.
The attendings used their own past mistakes to illustrate their teaching and to demonstrate that mistakes, though obviously to be avoided, will happen and are an essential aspect of learning. Major missteps were corrected in private to keep from publicly embarrassing learners.
Bad outcomes can take a heavy emotional toll on learners. We saw how one of the attendings helped his team cope with the death of a patient. “We should reflect on what happened, but not lose our confidence,” he told them. “The day after he died, I sat in my truck and did a personal pep talk. You have to come in and take care of the next patient and do the best you can.”
Team-based learning
The 12 attendings put the team in charge of patient care, while demonstrating that they were available, 24/7, when needed. They positioned themselves as members of the team rather than its leaders, giving that role to the senior resident. The teams were constantly told to question every diagnosis and every treatment plan, and to develop and test multiple hypotheses and alternatives.

The attendings engaged their teams in discussions of a few key points, rather than delivering lectures filled with facts to be memorized. Instead of simply correcting a learner’s conclusion, the attendings would ask the learner to explain, step by step, how he or she got there. The Socratic method of questioning was used to explore learners’ understanding of the material and guide them toward the best answers.
The attendings shared with their team their own reasoning process in arriving at a diagnosis or treatment of a patient. In their capacity as role models, they wanted to show how seasoned physicians think about medicine.
Patient-centered teaching
In their behavior with patients, the attendings modelled the kind of safe patient care they expected of their learners. They washed their hands before and after every patient visit; they placed the stethoscope directly on the skin rather than over the patient’s gown when listening to the lungs or heart.
Before going on rounds with their teams, the 12 attendings reviewed the medical records of their patients, allowing them to prepare key teaching points to raise during rounds.
The 12 attendings sought to create rapport with patients, greeting them in a friendly, upbeat manner; empathizing with their discomfort; explaining medical issues in layman’s language. Patients were treated with kindness and humility.
The concern for patients’ welfare extended to their post-hospital lives. The attendings started their teams thinking about a patient’s discharge when the patient first arrived on the unit, considering, for example, options for transportation home, patient care at home, and patient insurance coverage.
The 12 attendings recognized their responsibility to model for their learners what it means to be a physician in today’s challenging healthcare environment. One of the most impressive qualities about these attendings was that they loved being physicians and teachers. This description from one of the former learners sums up the 12 physicians as a whole: “He was a doctor who loved taking care of patients and loved teaching. He was never there to just get through something, but very present and very excited about what he was doing.”
If the future of clinical education rests in the hands, minds, and hearts of physicians such as these, learners and patients will be well served.
Featured image credit: Hospital by skeeze. CC BY 2.0 via Pixabay.

