Oxford University Press's Blog

March 24, 2018

What is it like for women in philosophy, and in academia as a whole?

During Women’s History Month, the OUP Philosophy team have been focusing on and celebrating Women in Philosophy throughout history and in the present day. Most of us can think of at least a handful of male philosophers; it is far more difficult to name female philosophers, even though their influence has often been just as great as that of their male counterparts. Women have been outnumbered, overlooked, and unrecorded in academia, so it is important for us to bring to light not only female philosophers, but also what it is like for women in philosophy.


Women have persisted throughout history and persist today, and increasingly we are seeing support, optimism, and encouragement for women to step up and be heard – both in society and in academia. It may feel near impossible to summon the confidence to be heard, to keep a thick skin when others talk over the top of you, and to recover the lost voices of past female philosophers, but times are changing and things are improving.


Below is a collection of quotes about what it is like for women in philosophy, then and now. We have brought together these quotes from across academia and editorial teams, and from books and journals. Let us know your experience of being a woman in philosophy, or any experiences you’ve witnessed, in the comments below.


“This is an exciting time in philosophy for many reasons, one of them being the enthusiasm around bringing women into the canon at long last. While many philosophers are passionate about their work, there is a special energy among the community that is working hard to bring primary materials and scholarship on women philosophers from throughout philosophy’s history into the fold. I’m pleased that OUP supports this movement with various books and series, and look forward to seeing how things will change in the years ahead. As we’ve seen in so many other ways, representation really matters, and we’re likely to see many more brilliant women philosophers enter the field now, inspired by heretofore underappreciated figures from philosophy’s past.”


– Lucy Randall, Editor for Philosophy at Oxford University Press, New York


“There’s no doubt that women face obstacles in our participation in philosophy that men don’t face, but being a woman in philosophy nonetheless presents a golden opportunity. For being a part of the discipline allows us to think about the history of philosophy – including the extensive exclusion of women philosophers from that history – and it allows us to recover the lost voices of women philosophers. In doing so, we are able to make philosophy a more inclusive activity, one that future generations of women can see as more reflective of them and their experiences.”


– Karen Detlefsen, Associate Professor of Philosophy and Education at the University of Pennsylvania and co-editor of Women and Liberty, 1600-1800: Philosophical Essays.


“Most journals operate with a double anonymous review process. At MIND we made the decision to adopt a triple anonymous review process, and so as editors we have the pleasure of finding out who wrote papers only once we have accepted them, and we never know the identity of the authors whose papers we reject. There is no doubt that a triple anonymous review process protects against well-known biases. Not knowing an author’s institution, gender, ethnicity, or name means that the only source of our judgment is the paper submitted and the reviewers’ opinions. That has to be good.


Yet, MIND has a pretty poor record for publishing papers by women. And although it is not significantly worse than the record of its competitors, it has not got much better. Why? Because women do not submit papers to MIND. Those who do actually have a slightly higher chance of being accepted than men who submit. We would like to use this celebration of women’s history month to make two pleas. Our first plea is directed to women philosophers: please submit your papers to MIND. Our second plea is directed to supervisors and mentors of women philosophers: please encourage your women students and mentees and be on the look-out for their habits of self-exclusion, but also be on the look-out for yours of underestimation.”


– Professor AW Moore and Professor Lucy O’Brien, Editors of MIND. Full quote available on the Oxford Journals website.


“Philosophers: deadly serious, combative, nit-picking, and always talking over the top of you. But thankfully that’s just the first-year students. My philosophy colleagues—many of whom are women—have been wonderfully supportive and kind. Some of us are working together to write women back into the history of philosophy. Our research demonstrates that women have always been intensely active in the discipline. Many of these historical thinkers met their fair share of combatants, nit-pickers, and talkers-over-the-top. But others received support and encouragement toward developing their own original ideas and arguments. So much has changed, so much remains the same.”


— Jacqueline Broad, Associate Professor of Philosophy at Monash University, author of The Philosophy of Mary Astell, and co-editor of Women and Liberty, 1600-1800: Philosophical Essays.



“In this landmark anniversary year, which marks a century since the enfranchisement of women, I am particularly struck by how far women have come in the academic community. A great deal has changed in the history of the University of Oxford since 1918: women were soon after granted the right to full university membership; by 1978 my alma mater had transformed from the first fledgling women’s college into a progressive, co-educational centre of academia; and in the present day women are not only well represented at the university but actually form a majority here at Oxford University Press. Philosophy is an academic discipline which has long been dominated by men – though women philosophers have been present and active – and the greater diversity that we are seeing is very welcome. Academic philosophy may at one point have been an overwhelmingly androcentric and Western canon, but I am excited by how much the discipline has grown. As female academics face up to this legacy, working to shift the focus of philosophical thought by writing women back into history, publishers play the fulfilling role of helping to make their voices all the more prominent moving forward.”


– April Peake, Assistant Commissioning Editor for Philosophy at Oxford University Press, Oxford


For more of our Women in Philosophy content, check out #WomenInPhilMonth on Twitter.


Featured image credit: The School of Athens by Raphael. Public domain via Wikimedia Commons.


The post What is it like for women in philosophy, and in academia as a whole? appeared first on OUPblog.


Published on March 24, 2018 00:30

March 23, 2018

Shaping the legacy of Dame Cicely Saunders [excerpt]

Dame Cicely Saunders (1918-2005) was a physician and the founder of the modern hospice movement. Her early work in hospitals and hospices established her determination to start her own home for the dying, and in 1967 she founded St. Christopher’s Hospice.


The edited extract below, taken from the upcoming biography Cicely Saunders: A Life and Legacy, details Cicely’s first two years at the Society of Home Students (St. Anne’s College) at Oxford University and the effect the start of World War Two had upon her studies.


She arrived in 1938, at age twenty-one, for the Michaelmas term. In that year, there were 850 women studying at the University, making up a record 18.5% of the student body. Cicely elected to read Politics, Philosophy, and Economics (P.P.E.). This programme of study had been established at Oxford in the 1920s as an alternative to ‘Greats’ or Classics. It was generally known as ‘Modern Greats’. Oxford defined the degree as being ‘the study of the structure, and the philosophical and economic principles, of Modern Society’. Cicely was therefore in the vanguard of a new interest in social science which was entering British academic life.


At the time, Cicely had the idea of being ‘a secretary to a politician or something like that’. This was a gendered assumption which reflected the period and the context. Although women made up almost a fifth of the undergraduate body at this point, they continued to be heavily discriminated against. Yet Oxford had already seen some remarkable female graduates, including Gertrude Bell, Vera Brittain, Winifred Holtby, and Dorothy L. Sayers. The name of Cicely Saunders would eventually be added to that list. Nevertheless, in 1938 the Oxford Union again voted to deny women access to full debating rights. Until 1934, women students studying medicine had to make use of their own anatomy laboratory. Oxford’s conservatism and privilege would have been at once familiar to the undergraduate Cicely; but the intellectual and moral ferment, the daily engagement with world politics, and the constant discussion at meals about the prospects of war were new and energising.


On 29 September 1938, days before Cicely arrived among the ‘dreaming spires’, the British Prime Minister Neville Chamberlain had signed the Munich Agreement, handing over the Sudetenland to German control. Her first term at University therefore had as its signature theme the growing spectre of war.



Image courtesy of Christopher Saunders. Do not reproduce without permission.

With turmoil all around, Cicely applied herself to her pass moderations. These were the first set of examinations at Oxford and would determine her onward progression to Modern Greats. Her year was enjoyable. She settled to her work but also joined the Bach Choir, did Scottish Country Dancing, and made some friends. At the end of the year the results were sound and allowed her to continue. The detailed focus on the core elements of P.P.E. was about to begin.


On 3 September 1939, Neville Chamberlain announced that Britain was at war with Germany. The assurances of Munich had fallen apart. Oxford students flocked to volunteer for officer training — despite the controversial debate of 9 February 1933, condemned by Churchill, when the Oxford Union had carried the motion ‘This House would not in any circumstances fight for King and Country’. Now a new patriotism was emerging, less jingoistic than in the Great War, and heavily focussed on the scourge of fascism. In due course, the exodus of undergraduates to the war would be matched by an influx of evacuees from London and elsewhere.


During her second year at Oxford, Cicely watched these events unfold for just one more academic term before resolving to leave and get involved in the war effort. The decision seems to have been finalised towards the end of 1939, and it took her tutors and her family by surprise. An article she had written for the Roedean School Magazine, published in December 1939, gives only the mildest hint of it. This elegantly written piece, seemingly composed in the months of autumn, begins by referring to the outbreak of war and ‘the first unthinking moment, when a large proportion of the female population of England said “I shall give up everything, and either nurse, or drive enormous army lorries”’ but after which ‘most of us resigned ourselves to a thoroughly dim term, with nothing and nobody the same as before’. In the main, Cicely strikes a tone of normality. Lectures continue, the daily routine remains unchanged, and the city looks ‘much the same as usual’, apart from the sandbagged windows and bright orange signs denoting the air-raid shelters. She majors on the continuities:


‘Christ Church Meadow, where the leaves are turning; the group of spires and towers across Merton fields; the river by Port Meadow; and the three college Chapels, where choral evensong is sung every day, Magdalen dimly lit by rows of tall candles placed along the pews’.


But she explains that most of the women students had taken up war-related interests — attending Red Cross lectures, organising sewing and knitting groups, helping with evacuees, and doing part-time ambulance work. Yet the article makes clear these remain secondary to the main purpose of being in Oxford: ‘working for as good a degree as possible, in the hope that nothing will happen to interrupt this’. Indeed, she regarded continuing with the life of Oxford as ‘a National Service of the greatest value’. These words may have been written to placate her teachers at her old school and also to reassure her parents that she was not about to make any rash decisions. They may also have been an exercise in self-deception, knowing all the while that her intentions were the reverse. For whilst they seem deeply felt, they were not enduring. Even by the time they appeared in print, Cicely had made the decision to foreshorten her Oxford studies, to leave the University, and to apply for training as a nurse.


Featured image credit: ‘Radcliffe Camera and All Souls College’ by Tejvan Pettinger. CC BY 2.0 via Flickr.


The post Shaping the legacy of Dame Cicely Saunders [excerpt] appeared first on OUPblog.


Published on March 23, 2018 03:30

March 22, 2018

The secret of the Earth

One of the questions currently keeping astrobiologists (the people who would like to study life on other planets if only they could find some) awake at night is, what is the crucial difference that allowed the emergence and evolution of life on Earth, while its neighbours remained sterile?


In their violent youth, all the inner planets started out with so much surplus heat energy—from planetary accretion and radioactive decay—that their surfaces melted to form magma oceans hundreds or thousands of kilometres deep. Such oceans lose internal heat to space rapidly—so rapidly that within a few tens of millions of years, the surface of a young planet cools and solidifies to create a hot stagnant lid above a vigorously convecting mantle.


Through this mantle rise plumes of hot buoyant rock, in a “bottom-up” form of tectonics. Close to the surface, these plumes melt to produce a thick, light crust above a thin layer of cooled and rigid mantle. Together, these two layers form a primitive shell known as the lithosphere. As it is less dense overall than the mantle beneath, this lithosphere is buoyant and therefore cannot sink very far before choking any early subduction zones that might form—a condition called “trench lock.”


Planetary evolution is, to a great extent, a story of heat loss. Sooner or later, all rocky, Earth-like planets arrive at a major fork in the road to heat death. Most will continue along the stagnant lid highway, perhaps with occasional overturn of their surfaces. This may have occurred beneath the northern hemisphere of Mars early in its history, creating the so-called “Martian Dichotomy,” or on Venus around 600 million years ago, completely renewing its surface, a style known as “episodic lid” tectonics.


A few planets, however, will take the road less travelled, the road that leads to the creation of moving plates. This was the route chosen by the Earth at the end of the Archean eon, 2.5 billion years ago. The transformation was precipitated by the onset of deep subduction as the cooling lid became denser and eventually “negatively buoyant,” allowing sheets of lithosphere to sink thousands of kilometres into the lower mantle, thereby providing a driving force for the surface plates, to which they were still attached.



Image credits: Venus image: Calvin Hamilton, Johns Hopkins University Applied Physics Laboratory, NOAA; Earth image: National Center for Environmental Information, NOAA. Used with permission.

Which route a rocky planet (as opposed to a gas giant) chooses depends on factors that are still imperfectly understood. Computer simulations of mantle convection suggest that the choice of stagnant or mobile lid depends ultimately on the flow of heat out of the molten iron core, which thus heats the mantle from below, and the generation of heat within the mantle by radioactive decay of the unstable isotopes 238U, 235U, 232Th, and 40K. The balance between these two determines the overall pattern of mantle convection and the degree of coupling between the overturning mantle and the plates above.


Through time, both sources wane as heat is lost through the surface, affecting buoyancy and viscosity: the chilled lid becomes less buoyant while the mantle beneath becomes more viscous, exerting more drag on the lid. Eventually, a point is reached at which the “hot stagnant lid” becomes unstable. It then fragments and starts to collapse into the interior, marking a transition to the episodic lid mode in which a stagnant lid re-forms after the collapse event, only to be followed, some tens of millions of years later, by another collapse. Further cooling may lead to a second transition from episodic lid to fully mobile lid tectonics, that is, to plate tectonics.
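The waning of radiogenic heating described above is ordinary exponential decay. As a rough illustrative sketch (the half-lives are standard values; the choice of 2.5 billion years simply matches the end-Archean transition mentioned earlier), one can compute how much of each heat-producing isotope survives from then until today:

```python
# Fraction of each heat-producing isotope remaining after an elapsed time,
# using N(t) = N0 * 2^(-t / half_life). Half-lives in billions of years (Gyr).
HALF_LIVES_GYR = {
    "238U": 4.468,
    "235U": 0.704,
    "232Th": 14.05,
    "40K": 1.25,
}

def fraction_remaining(isotope: str, elapsed_gyr: float) -> float:
    """Fraction of the original inventory left after elapsed_gyr billion years."""
    return 2.0 ** (-elapsed_gyr / HALF_LIVES_GYR[isotope])

# Since the end of the Archean, 2.5 billion years ago:
for iso in HALF_LIVES_GYR:
    print(f"{iso}: {fraction_remaining(iso, 2.5):.2f}")
```

Because 2.5 Gyr is exactly two half-lives of 40K, only a quarter of that isotope remains, while long-lived 232Th has barely changed; the combined effect is the slow decline in mantle heat production that eventually pushes a planet from one tectonic mode to the next.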


Many now believe that the transformation to plate tectonics was the step that allowed life on Earth to develop beyond the microbial stage. By exchanging elements such as carbon, hydrogen, and phosphorus between the surface and the deep mantle, plate tectonics made the world habitable for complex organisms such as wise apes. Plate tectonics also provided the thermostat that maintained conditions at the surface within the narrow limits necessary to retain liquid water and prevent a descent into “frozen planet” mode or, worse, a runaway greenhouse resulting in a Venus-like desert.


Thus, while it appears that microbes can evolve very early in a planet’s evolution—and should therefore be common throughout the galaxy—animals, plants, and fungi are a different matter altogether, requiring billions of years of incubation on worlds that experience a high level of stability. No wonder, then, that some have taken to referring to Earth as the “Goldilocks planet.”


Finding exoplanets with plate tectonics is a difficult, and possibly insurmountable, challenge, even with the largest and most expensive telescopes. Astrobiologists are therefore focussing their efforts on building an inventory of “Earth-like” planets orbiting within the “habitable zones” of their stars, the band in which surface temperatures are estimated to be right for the maintenance of liquid water, thought to be essential for life. However, unless such planets are able to support plate tectonics for billions of years, they are likely to be as sterile as Venus or Mars. Disappointingly, recent calculations suggest that only about a third of the stars in our galaxy even have the correct chemical composition to produce “Earth-like” planets on which a negatively buoyant lid may form.


All in all, the Earth’s mobile lid has turned out to be a very good thing for us and for all multicellular life, but like all good things, it will come to an end. For plate tectonics is its own worst enemy, being such an efficient mode of shedding heat that, in a billion years or so, it will have cooled the Earth sufficiently that the upper mantle will no longer be able to melt as plates are ripped apart at mid-ocean ridges. The consequence will be what Bob Stern calls “ridge lock:” opposing plates will become welded together and the planet will pass into its “cold stagnant lid” phase. At about the same time, the oceans will have evaporated owing to the gradual increase in the Sun’s luminosity, finally exposing the tennis ball seam of mid-ocean ridges just as sea-floor spreading comes to an end.


The Earth therefore occupies the space between two fatal extremes, and has done so for two and a half billion years, long enough for microbes to evolve into Man. The demise of plate-driven convection will, however, spell the end of the world as we know it: without the life-support services provided by plate tectonics, the Earth will once again be inhabited only by microbes, eventually becoming as lifeless as its neighbours. If the inhabitants of a planet orbiting a star in the elliptical galaxy known affectionately as NGC 4709, 200 million light years distant in the constellation Centaurus, were to launch a spacecraft towards Earth today, travelling, like Stephen Hawking’s laser-propelled nanoprobes at a fifth of the speed of light, it would arrive to find a planet rather like Venus: hot and dead.


Featured image credit: Space by Melmak. CC0 via Pixabay.


The post The secret of the Earth appeared first on OUPblog.


Published on March 22, 2018 04:30

You’ve got internet!– connecting rural areas

Twenty years ago, if you wanted internet access in many rural areas of America, you had to plug your computer into a phone line, listen to the dialing sound, and hope for the best. Today many people can easily join the cyber world at reliable speeds that few imagined decades ago. The internet has become a gateway to consumption opportunities and information that boggles the mind. It has transformed how businesses operate in both rural and urban areas. Although the percentage of people with broadband has increased, many in rural communities still lack broadband access and the accompanying benefits.


The US government adopted policies to address this issue, with the goal of establishing universal access by 2020. The USDA Rural Utilities Service and the FCC Universal Service Fund subsidize rural access by providing resources for upfront capital and operating costs, respectively. Proponents argue that better broadband access will improve economic growth in rural areas by lowering production costs and increasing the size of the market for the sale of goods. Transaction costs will fall, allowing rural businesses to better reach customers. Similarly, employers will have a better chance of finding employees with the skills they want, and employees will be more likely to find firms offering the compensation packages they desire. Consumers will face lower costs due to increased competition, raising their real income. Economic growth from broadband access offers numerous other benefits, such as telemedicine, online educational opportunities, and more social capital from increased community interaction.


By improving access to the internet, policymakers hope that economic growth in rural areas will narrow the gap with more affluent urban areas. In general, populous urban areas tend to have higher incomes than sparsely populated rural areas. Regional inequalities pervade the US, and public policies that address them have risen in importance in recent years. Expanding broadband access offers a potential way to reduce these inequalities.


Empirically assessing the impact of increased broadband access has proven difficult. Broadband access often expands when regional economies are already growing, and separating cause and effect is not easy. Perhaps broadband access improved because of the region’s growth; perhaps access to broadband increases economic growth because it lowers firms’ production costs and broadens the market for their output. Identifying the relationship is made harder still because broadband access increased very quickly in urban areas, reducing the variation available for comparison.


Broadband access might not provide the substantial benefits policy-makers anticipated. Increased competition from online sellers may adversely affect local businesses. Consider the case of Etsy, which provides a global platform for small artisans to sell their products, including all kinds of goods used for home decoration. On the one hand, consumers benefit from the increased variety; on the other, local producers may suffer from the increased competition. An online retailer may better satisfy consumer wants and needs and put the local firm out of business.



Moreover, just as broadband may allow rural firms to reach distant customers, it may also allow urban firms to sell more products to rural customers, creating competition that can hurt local businesses and the local rural economy. In addition, broadband may lead to the closure of rural branch offices, because basic branch services can be replaced by online customer service.


A second reason for caution about broadband policy is subtler. While improved broadband access increases the likelihood that new firms will set up businesses in rural areas, it is not obvious whether broadband increases the total number of new firms. If broadband merely affects where new firms locate without increasing their total number, the benefits will be smaller than policy-makers anticipate.


Another problem is that the benefits tend to accrue to those rural areas closest to urban areas. Extending broadband from the city to the country yields the greatest benefits to the rural areas closest to the city, because critical business transaction decisions still require face-to-face interaction. The diffusion of new technologies and information is also facilitated by face-to-face interaction. Broadband may help coordinate meetings and increase face-to-face communication between businesses and workers in urban areas and adjacent rural areas; these benefits may be smaller for remote rural areas because of physical distance. Information technologies also raise the labor productivity of educated workers more than that of less educated workers. Since urban areas tend to have larger clusters of educated workers, the productivity gains from broadband may be smaller in remote rural areas.


Weighing these pros and cons, broadband’s advantages in rural areas may be greatest for those close to urban or metro markets, because of the benefits that arise from more frequent interactions, richer information, and a larger population of skilled workers. While broadband access does not necessarily mean that the total number of new firms in rural areas will grow, our results are consistent with the view that government broadband deployment projects in rural areas will increase the likelihood of firm entry in these areas. The smallest and most remote rural towns will see the smallest economic benefits from government broadband deployment projects compared with larger rural areas closer to metropolitan centers.


Featured image credit: cyberspace data wire electronic by jarmoluk. Public domain via Pixabay.


The post You’ve got internet!– connecting rural areas appeared first on OUPblog.


Published on March 22, 2018 03:30

Ten fascinating facts about the Marshall Plan

In 1947, with Britain’s empire collapsing and Stalin’s power rising in Europe, US officials under the new Secretary of State George C. Marshall set out to reconstruct Western Europe as a bulwark against communist authoritarianism. Their massive, costly, and ambitious undertaking confronted Europeans and Americans alike with a vision at odds with their history and self-conceptions. Benn Steil highlights ten intriguing facts about the Marshall Plan (officially the European Recovery Program) and how it shaped the decades following World War II.


1. The Marshall Plan represented a radical departure from established American foreign policy doctrine. Since George Washington’s famous farewell address of 1796, US policymakers had sought not to “entangle [American] peace and prosperity in the toils of European ambition, rivalship, interest, humor or caprice.” The Marshall Plan was a conscious departure from this stricture, based on the understanding that huge changes in technology and commerce had made it impossible for the United States to isolate itself without serious consequences for its security and prosperity.


2. Marshall aid amounted to over $135 billion in today’s money, or roughly $800 billion if scaled as a share of today’s American gross domestic product. The total aid figure is higher still if we account for substantial non-Marshall military and other assistance in Europe.


3. The European Union and NATO were products of the Marshall Plan. All of the pillars of the postwar liberal order were created by the United States in a few short years after the end of World War II. The United Nations, the International Monetary Fund, the World Bank, and the predecessor to the World Trade Organization were launched between 1945 and 1947. The embryo of what would become the European Union was created in 1947, and NATO was formed to protect it in 1949: both were offshoots of the Marshall Plan.



Poster created by the Economic Cooperation Administration to sell the Marshall Plan in Europe, 1950. Public domain via Wikimedia Commons.

4. The Marshall Plan, created in the spring of 1947, marks the true beginning of the Cold War. Although tensions between the United States and the Soviet Union were escalating throughout 1946 and early 1947, it was not until General Marshall’s decision to launch a massive aid program—designed to keep western Europe in the democratic-capitalist fold—that relations between the two countries became irreparably ruptured. Stalin abandoned negotiations with the United States over the establishment of an interim unified Korean government only after Marshall’s initiative.


5. Marshall’s famous Harvard address in June of 1947 was interrupted twice by applause, both times after indirect references to the Soviet Union. It was not Marshall’s call for the United States to help alleviate Europe’s suffering that moved his audience to applaud, but his warning that Washington would actively oppose “governments . . . which seek to perpetuate human misery in order to profit therefrom politically.”


6. Germany was the heart of both the Marshall Plan and the Cold War. Neither the United States nor the Soviet Union could countenance a united Germany allied with the other. When the United States decided to make a reindustrialized western Germany the fulcrum of the Marshall Plan, Stalin tried to undermine it with the Berlin blockade. After the western airlift defeated the blockade in 1949, Germany was split and the boundaries of the Cold War in Europe were frozen for 40 years.


7. The CIA’s earliest major covert operations were in support of the Marshall Plan. One of the first priorities of the Marshall Plan was to keep the Italian Communists out of government, and the CIA played an important propaganda role in the critical April 1948 elections in support of the center-right Christian Democrats. A new US Office of Policy Coordination (OPC) belied its dull name by running covert operations in Europe to bolster Marshall Plan political aims, using Marshall Plan administrative funding. The OPC was merged with the CIA in 1951.


8. We are today replaying the collapse in US-Russia relations that followed the Marshall Plan. The rupture that accompanied the launch of the Marshall Plan and NATO is replaying itself today with the expansion of NATO and the European Union: Russia considers both hostile to its interests and is pushing back hard. The difference between then and now is that the United States and Western Europe were willing to accept a Russian buffer zone in Eastern Europe in 1949, but are unwilling to accept one today.


9. The United States has spent about 50 percent more on postwar reconstruction in Iraq and Afghanistan than it spent, in current dollars, on the entirety of Marshall aid. Yet it has almost nothing to show for it. The main reason? A lack of security. Both countries remain under armed siege by foreign and domestic opponents, such as ISIS and the Taliban. A credible American security umbrella for the Marshall countries was vital to generating the confidence and capacity of Western Europe to rebuild after World War II.


10. The most striking legacy of the Marshall Plan is the endless desire to repeat it, but it has yet to be imitated. In the past ten years alone, western statesmen and celebrity philanthropists have called for “Marshall Plans” in Ukraine, Greece, Southern Europe, North Africa, Gaza, and the Arab Middle East; “Marshall Plans” for global warming and global unemployment. Yet since the real one was created, nothing remotely similar has ever emerged.


Featured image credit: Rebuilding the Hotel Kempinski in Berlin with Marshall Plan aid, 1952. CC-BY-SA 3.0 via Wikimedia Commons.


The post Ten fascinating facts about the Marshall Plan appeared first on OUPblog.


Published on March 22, 2018 02:30

Is Debussy an Impressionist?

From the start, audiences liked Claude Debussy’s music. Critics, perplexed by its originality, were less enthusiastic. It seemed so non-traditional that they found it difficult to grasp, and a challenge to categorize. That’s what eventually led to the term Impressionism being applied to it: an easy way both to classify the music and to make it seem less unusual.


Prior to linking Debussy to it, Impressionism was solely associated with the visual arts. The first exhibition of Impressionist art had been in 1874, and included works by many of the painters who became representative of it: Claude Monet, Camille Pissarro, Pierre-Auguste Renoir, Alfred Sisley, and Berthe Morisot. The eighth and final exhibition was in 1886. Debussy was only 12 when the Impressionists first made a name for themselves. During the 1890s, when he was beginning his career, many of the Impressionist artists were ending theirs. At the same time there was growing recognition of their work, and broad appreciation of their style. Creating a musical counterpart seemed plausible.


Impressionism started as a revolutionary movement. There was a focus on color and light (and less on line and detail). Bold, forceful brushstrokes were often used to produce a luminosity that resulted in blurred imagery. Subject matter was equally distinctive. There was a concern with modernity. But there was also an interest in subjective interpretation of natural scenes, and in painting out-of-doors.


What connection did critics discover between Impressionist art and Debussy’s music? Basically, it was the similarity of titles. Many of the descriptive titles of Debussy’s Images for piano bring to mind an Impressionist gallery, as do those for the Nocturnes and Estampes (Prints). Writing in 1908, Debussy’s first biographer singled out “Reflets dans l’eau” (“Reflections in the Water,” one of the Images), gave it an imaginary program (“The rippling flow and trickle of a running stream is heard; the cool, translucent effect and gurgle of disturbed water is given”), and labeled it an “impressionist sketch.” There was no real attempt to find specific, stylistic similarities between Debussy’s music and Impressionist art. More often than not, Debussy’s contemporaries labeled his music Impressionistic without explanation, the assumption being that listeners knew only too well why it was.


“Debussy did not like the label. He felt that it cast doubts on his originality. But he admitted a connection of sorts.”

Debussy did not like the label. He felt that it cast doubts on his originality. But he admitted a connection of sorts. Concerning the Images for orchestra he wrote: “I tried to make ‘something else’ of them and to create—in some manner—realities—what imbeciles call ‘impressionism,’ a term as poorly used as possible, especially by art critics . . .” By “realities” Debussy was referring to the approach he also followed in La Mer (The Sea), where he based his music not on an intermediary—like a painting of the sea—but on reality: the sea itself.


Debussy was fascinated by the visual arts and literature, and his music frequently reflected his interests. But only a small portion of his compositions have titles that might seem Impressionistic. Several of his most famous pieces (the Prelude to the Afternoon of a Faun, and the opera, Pelléas et Mélisande, for example) actually owe their origin to another “ism” popular at the time—Symbolism, a movement in many ways the aesthetic opposite of Impressionism. Then there are those works by Debussy with no extra-musical connection at all, like the sonatas of his final years. Here Debussy was concerned with sonority, with extending the types of sound usually associated with musical instruments, with malleability of structure, and with using musical compositions from the past as a point of departure. Yet for all of his compositions—those with a possible link to Impressionism, Symbolism, or no “ism” at all—the elements of his musical style remain essentially the same.


Does it make sense, then, to continue to categorize Debussy as an Impressionist? It may have served as a helpful guidepost 120 years ago, when the novelty and unconventionality of his music seemed puzzling. But when we associate a composer with a particular style—say, Classical, Romantic, or Impressionist—it is expected to provide insight not just for a segment, but for the totality of his or her work. In Debussy’s case, the breadth of his interests—from contemporary art and literature to eighteenth-century French chamber music—was extraordinary. And, since those interests produced an astonishing variety of compositions, pigeon-holing him only limits our understanding and appreciation of his accomplishments.


Featured image credit: Piano by thesongbirdscry. CC0 via Pixabay


The post Is Debussy an Impressionist? appeared first on OUPblog.


Published on March 22, 2018 00:30

March 21, 2018

Addressing international law in action

“In theory, theory and practice are the same. In practice, they are not”– Albert Einstein


The 112th American Society of International Law’s annual meeting (4th-7th April 2018) will focus on the constitutive and often contentious nature of ‘International Law in Practice’. Practice not only reifies the law; how the law is understood, applied, and enforced in practice also shapes its meaning and the generation of future international rules.


In preparation for this year’s meeting, we have asked some key authors to share their thoughts on international law in action.


How and by whom is international law made, shaped, and carried out?


In the standard account, the International Criminal Tribunal for the former Yugoslavia (ICTY) epitomizes top-down lawmaking or, more precisely, lawmaking by elites far removed from communities affected by their work. Created by the UN Security Council in 1993, the Tribunal’s pathbreaking jurisprudence was fashioned by judges in The Hague, none of whom were citizens of former Yugoslav states. Yet this account obscures myriad ways in which survivors and other domestic actors engaged with the Tribunal and defined its legacy. Chronicling the ICTY’s evolving impact in Bosnia-Herzegovina and Serbia, my recent work has sought to illuminate citizens’ contributions and responses to the Tribunal’s work, as well as the central role of national jurists in creating the ICTY’s most tangible legacy—domestic war crimes institutions. This approach also highlights the crucial roles of national, regional, and local political elites, along with external actors like the European Union, in shaping Bosnian and Serbian experiences of Hague justice. Legal innovations associated with the Tribunal, as well as the nature of its domestic impact, were forged through the dynamic interplay of myriad actors in multiple spaces.


Diane Orentlicher, Professor of International Law, Washington College of Law, American University, author of Some Kind of Justice: The ICTY’s Impact in Bosnia and Serbia.


What impact do the practices of international institutions have on the generation of international rules?


In setting up a new international organization, the parties are likely to assess whether the practices of existing, similarly-situated international organizations have created international norms to be respected. For the Asian Infrastructure Investment Bank (AIIB) in 2016, the charter incorporates common elements of earlier multilateral development bank (MDB) treaties as well as innovations that reflect AIIB goals and evolving MDB experience. MDB practices are also influencing AIIB legal and policy frameworks, such as loan contract provisions with sovereign states, requirements for environmental and social assessment and consultation, accountability mechanisms for project-related complaints, internal and external anti-corruption functions, and staff dispute resolution bodies.


Beyond the legal ramifications, there are pragmatic and strategic reasons for a new MDB to adopt the modern rules of comparators. For a financial institution, credibility is hugely important. An organization that carries forward a base that is workable, known, and generally respected offers reliability to governments, financial markets, and potential recipients—even as particular improvements may call for added scrutiny. The commonality of requirements also facilitates collaboration among new and old institutions and minimizes the procedural burden on client countries and companies.


Natalie Lichtenstein, former Inaugural General Counsel, Asian Infrastructure Investment Bank and Assistant General Counsel, World Bank, and author of A Comparative Guide to the Asian Infrastructure Investment Bank.


Where states did not originally envision that human rights law would be implemented by international institutions, it is now clear that the operationalization of human rights requires a wide range of global governance organizations. The United Nations human rights system has a mandate to implement human rights law, yet it does not have the institutional competence, expertise, or capacity to implement human rights across all fields of global governance. Codifying human rights implementation responsibilities across a wider range of institutions of global governance, the 1993 Vienna Declaration and Programme of Action looked to the entire United Nations in implementing rights, with the UN Secretary-General calling thereafter for the “mainstreaming” of human rights law across all UN policies, programs, and practices. These efforts to mainstream human rights in global governance have required international institutions to translate state legal obligations into organizational policy practices. By advancing human rights under international law as a basis for public health, these international institutions have generated international rules through global health governance. Global health governance has thus become a basis to realize a more just world through public health, with an array of international institutions ensuring the implementation of health-related human rights across the range of economic, social, and cultural fields that underlie public health in a globalizing world.


Benjamin Mason Meier, Associate Professor of Global Health Policy, University of North Carolina at Chapel Hill, and Lawrence O. Gostin, O’Neill Chair in Global Health Law, Georgetown University. Authors of Human Rights in Global Health: Rights-Based Governance for a Globalizing World.


“In theory, theory and practice are the same. In practice, they are not” – Albert Einstein

How has international legal practice changed (and is continuing to change) in response to geopolitical shifts and contemporary challenges?


The 2015 Paris Agreement represents a step-change in international environmental lawmaking and practice. Climate change is one of the greatest challenges humanity has ever faced, and addressing it requires engaging a wide array of state and non-state actors in long-term mitigation and adaptation efforts. The Paris Agreement seeks to meet this challenge without enshrining binding environmental targets, which was the previously dominant, “top-down” approach of international environmental law. While the Paris Agreement is a binding treaty and requires parties to maintain successive “nationally determined contributions,” these contributions are not binding per se. Instead, the Agreement includes a layered oversight mechanism to trigger a regular review of and more ambitious contributions over time. The agreement’s “hybrid” design was a response to the complexity of the climate challenge and the need to bring states into the agreement notwithstanding domestic political constraints and long-standing differences over burden sharing. But it assumes a new significance in the face of current geopolitical dynamics. Rising non-Western powers seek to instantiate a “new world order,” with a renewed focus on state sovereignty. Meanwhile, major Western societies experience growing resistance to what is perceived as overly intrusive international law. The Paris Agreement’s emphasis on nationally determined commitments thus fits well with the tenor of the times.


Daniel Bodansky, Foundation Professor, Sandra Day O’Connor College of Law, and Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability, Arizona State University, Jutta Brunnée, Professor of Law and Metcalf Chair in Environmental Law, University of Toronto, and Lavanya Rajamani, Professor, Centre for Policy Research, New Delhi. They are authors of International Climate Change Law, winner of the ASIL 2018 Certificate of Merit for a specialized area of international law.


International law is often perceived as remote and top-down, yet it is clear that it has a major impact on the lives of individuals. It can structure opportunities for employment and movement across borders, regulate the goods we consume, and even govern the recognition of our subjectivity. Recent political upheavals – from ‘Brexit’ to the election of Donald Trump – are not simple phenomena that can be explained by one factor. But, arguably, both are a rejection of international legal rules and institutions by communities who experience these rules and institutions as unwelcome and unaccountable and perceive their lives as full of international law’s objects, over which they have little control.


We need to open international law up to those beyond the state and a small circle of supporting technical personnel, and to better connect communities and individuals to the processes of international lawmaking. The process of drafting the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP) offers an example of how international legal standard setting can meaningfully incorporate a range of actors who have, historically, been rendered powerless in and by international law. More importantly, the results – which include the Declaration, but also the process itself – demonstrate that more open practices can not only succeed but can lead to an international law that has greater relevance and legitimacy to the people whose lives it impacts on most closely.


Jessie Hohmann, Senior Lecturer in Law at Queen Mary, University of London, and co-editor of The UN Declaration on the Rights of Indigenous Peoples: A Commentary and International Law’s Objects.


So that you can prepare for the conference or experience the debate from afar, we have created a collection of free journal articles which focus on ‘International Law in Practice’.


Featured image credit: ‘New Zealand Representatives at the International Court of Justice in the Hague’ by Archives New Zealand from New Zealand.  CC BY-SA 2.0 via Wikimedia Commons.


The post Addressing international law in action appeared first on OUPblog.


Published on March 21, 2018 05:30

Digging into the innards: “liver”

Etymological bodybuilding is a never-ceasing process. The important thing is to know when to stop, and I’ll stop soon, but a few more exercises may be worth the trouble. Today’s post is about liver. What little can be said about this word has been said many times, so that an overview is all we’ll need. First, as usual, a prologue or, if you prefer, a posy of the ring. Our classification of body parts and inner organs is to a certain degree arbitrary. The existence of two eyes and two ears is an obvious fact, but, for example, the division of the extremities into two parts is an avoidable complication.  Many languages do not distinguish between leg and foot, finger and toe, arm and hand. And where exactly does the breast end? The situation with our organs is even less clear, because we don’t see what is inside us.


A posy of the ring. Image credit: Gold posy ring found while metal detecting in 2005. Submitted under the Treasure Act and now in the keeping of the British Museum, by Sonofthesands. CC BY-SA 3.0 via Wikimedia Commons.

Last time, I noted that, to establish the origin of words like heart, brain, groin, and their likes, we should try to understand what function our remote ancestors ascribed to those organs. Quite often, confusion is our only reward. A typical example is Greek phrēn (in its forms and derivatives, ē alternates with short e), from which, via Latin and French, English has frenzy and frenetic. The word, which occurred mainly in the plural, meant “midriff; breast; heart; sense; mind”; in translation, “soul” is sometimes called for. The word’s root is obscure, to use the etymologists’ polite jargon. If it has a respectable Indo-European past, the word must have begun with ghw-, and no connection with Engl. brain, from Old Engl. brægen, is possible. Even when the function is clear, we often feel puzzled. Thus, the Greeks, who developed the theory of humors, associated the spleen with the black bile and suffering. Elsewhere, for instance, in the Talmud, the same organ was said to make people laugh. Why so? Like frenzy, spleen reached English by way of Old French, Latin, and Greek. End of the prologue.


What do we and what did our ancestors know about the liver? They probably enjoyed eating the liver of the animals they hunted and associated it with grease. Today’s liver sausage (liverwurst, Leberwurst) needs no advertising. The English word liver has unmistakable cognates everywhere in Germanic, but only there, and this makes historical linguists unhappy. You may remember that at the end of Hemingway’s tale, the Old Man dreamed of lions. Etymologists dream of Indo-European protoforms. In dictionaries and scholarly books, reconstructed forms (that is, such forms as have not been attested but can be assumed to have existed) are supplied with asterisks. (For example, Old Engl. brægen is believed to go back to *bragnam.) So we may say that historical linguists, while breaking through all kind of obstacles, dream of stars (per aspera ad astra). How are those stellar pinnacles to be reached if we start with liver? The Latin for “liver” is jecur (more properly, iecur), a word that bears no resemblance to liver. The Greek word (hēpar) is, unfortunately, familiar to us thanks to its genitive hēpatos, from which hepatitis, literally, “inflammation of the liver,” was coined. The Latin and the Greek words are related (I’ll dispense with the details). The Germanic tie will be discussed below.


Per aspera ad astra. Image credit: Neil Armstrong works at the LM in the only photo taken of him on the moon from the surface via NASA photo as11-40-5886. Public Domain via Wikimedia Commons.

The Greeks considered the liver, rather than the heart, to be the most vital organ and the seat of passions and emotions. The speakers of Old Germanic also set great store by the liver. This is made clear by the evidence of Old Icelandic. From Old Icel. lifr “liver” the words lifri “brother” and lifra “sister” were formed. Some other equally picturesque Icelandic words for “brother” were blóði (ó is long o, and ð = th in Engl. this), that is, in Mowgli’s language, “we be of one blood” (blóð “blood”); likewise, barmi (from barmr “breast”: “the offspring of one breast”) and hlýri, from hlýr “cheek” (someone with whom you are cheek by jowl?). Thus, lifri ~ lifra meant: “We be of one liver.” The Germans have the saying welche Laus ist dir über die Leber gelaufen (gekrochen)? – literally, “what louse has run (crawled) over your liver?”, that is, “what made you so angry?” Is it an echo of the old view of the liver?


The Germanic protoform of liver must have sounded approximately as *librō-, and it has been suggested that the ancient root of liver is the same as in many words for “fat” (so in Greek liparós and elsewhere; remember Leberwurst?). Italian fegato “liver” is usually cited in this connection, because fegato goes back to Latin jecur ficatum “fattened liver.” This is fine, but not good for the Indo-European protoform, because the beginnings of liparós and hēpar don’t sound alike (l- versus h-). Perhaps liver is not related to words like Greek liparós, but only experienced the influence of some such word, so that l- was added to the name of the liver. In principle, this is not improbable. Consider the Indo-European words for “tongue.”


Dreaming of lions. Image credit: “Fishing” by sasint. CC0 via Pixabay.

The protoform *tungō, with initial t, goes back, as usual, to non-Germanic d-, but the Latin word is lingua! Clearly, something went wrong in Latin. Lingua may have acquired its l- under the influence of the verb for “lick.” On the other hand, some obscure sound change might be at work. Thus, Engl. tear (from the eye) is a cognate of Old Latin dacruma, with the expected correspondence of t- to d-. Yet the familiar Latin word for “tear” is lacruma ~ lacrima (known to us from lachrymose and “Lacrimosa,” part of Mozart’s Requiem). The enigmatic alternation d ~ l has been much discussed, but with slender success, as they used to say in the 19th century. Also, in the names of body parts and organs, taboo was often in play, so that the words were deliberately distorted, to ward off the influence of evil spirits, which would hear the word but miss its meaning and do no harm.


Sure enough, the tongue licks, but why did l appear in liver? No reason suggests itself. If l- in liver is secondary, we obtain a possible common Indo-European origin of this word. However, it is also possible to reconstruct the meaning of the Indo-European word for liver without sacrificing l.  The same root as in liver is very probably present in the verb leave, whose original meaning seems to have been “to stick, adhere; smear.” See Greek liparós above! Leave may be a good fit even without smearing. A bold hypothesis avers (asserts, suggests) that the liver, the most valuable organ in the opinion of the ancients, was “left” to the gods as a sacrifice.


Where are we at the end of such a tortuous way? Perhaps the Indo-European name of the liver existed and sounded approximately as *liekwer(t). In Greek and Latin, it presumably lost the initial sound (no reasons for the loss are known), while in Germanic it was retained. The word, related to the verb leave, meant “a part left over.” How realistic is this story? Very moderately so. Quite likely, there was no Indo-European word for this organ, and we should satisfy ourselves with a Germanic word, whose true meaning evades us (“greasy?”).


The Goncourts: of one blood, one breast, one cheek, and one liver. Image credit: Photography of Edmond (left) and Jules (right) de Goncourt by Félix Nadar. {{PD-US}} via Wikimedia Commons.

A non-specialist cannot help wondering: Do such exercises deserve the name of scholarship? Yes, up to a point. Historical linguists try to understand the processes that allowed the speakers of old to name the objects around them. The naming happened very long ago, and the light from the past is dim. Speleology requires brave efforts, because caves are deep and usually dirty. If you are afraid of bats and ghosts, stay away.


And does live have anything to do with the story? No, nothing at all. Fast livers are the heroes of an entirely different “post.”


The post Digging into the innards: “liver” appeared first on OUPblog.


Published on March 21, 2018 04:30

History in 3 acts: a brief introduction to Ancient Greece [excerpt]

Ancient Greek history is conventionally broken down into three periods: Archaic, Classical, and Hellenistic. However, the language used to describe them highlights an oversight made by generations of historians. By dubbing one period of history as “Classical,” scholars imply that the other two periods are inferior, simplifying the Archaic age as a mere precursor to, and the Hellenistic age as a lesser descendant of the Classical age.


Independent scholar and translator Robin Waterfield argues that each of these three periods should be given equal weight within the study of Ancient Greece. The following extract from Creators, Conquerors, and Citizens introduces the key components of each period.


The Archaic Period (750– 480 BCE)


The two and a half centuries that make up the Archaic period, roughly 750 to 480 BCE, saw the lives of the Greeks change fundamentally. Above all, there was the gradual development of statehood and civilized life, from primitive and hierarchical beginnings to far greater collectivism, equality under the law, and general participation in public life. From a broad perspective, this was an astonishing development. For hundreds, if not thousands of years, the chief form of political and social organization in the Near East and Mediterranean had been the hierarchically organized kingdom. Yet the Greeks evolved a different form, which became dominant in the Mediterranean world for several centuries. Politically, it was more egalitarian; economically, property belonged to private individuals, not just the king or a temple.



Photograph of the bust of Homer in the British Museum by JW1805. Public domain via Wikimedia Commons.

Within the Archaic period also, the art of writing, lost since the collapse of the Mycenaean palaces, was reintroduced. Creative geniuses such as Homer, Hesiod, the lyric poets, and the Presocratic natural scientists showed what could be done with words and ideas. Brilliant experimentation governed the changing styles of vase painting; Greek art was valued all over the Mediterranean. Temple architecture evolved from modest to monumental, and sanctuaries were filled with often strikingly impressive buildings and beautiful artifacts. Coined money spread rapidly. New forms of warfare were developed. The Greeks founded cities and trading posts all over the Mediterranean, impelled by the quest for wealth, or at least for relief from poverty, and supported by the god Apollo’s oracle at Delphi, which became the hub for many networks in the Mediterranean. The institutions, artifacts, and practices that define the better-known Classical period have their roots in the Archaic period.


The Classical Period (479– 323 BCE)


The Classical period is bracketed by two world-changing invasions: the Persian invasions of Greece and Alexander the Great’s invasion of Asia—the latter presented as retaliation for the former. Alexander’s invasion brought to an end both the Achaemenid Empire and the constant possibility of Persian intervention in Greek affairs. Immediately following the Persian Wars, it still might have been possible for the Greeks to unify in the face of the threat from the East, but that did not happen. An account of Greek history in the fifth and fourth centuries is bound to read at times like a litany of inter-Greek warfare. Orators spouted pan-Hellenic sentiments, but the ideals were not deeply enough rooted to overcome the ancient particularism of the Greeks; pan-Hellenism was propaganda rather than practical politics. It is ironic that Athens and Sparta, the two states that were chiefly responsible for repelling the Persians, were also principally to blame for keeping the Greek states disunited and weak, and therefore vulnerable, ultimately, to a second invasion by the Macedonians. The mainland Greeks had avoided becoming part of the Persian Empire, but in 338 they fell instead under what would become the Macedonian Empire.


The Hellenistic Period (323– 30 BCE)


The Hellenistic period, and independent Greek or Greco-Macedonian history, ended in the year 30 with the fall to Rome of the final successor kingdom, that of the Ptolemies in Egypt. It is said that when Octavian, the future Roman emperor Augustus, entered the Egyptian capital, Alexandria, he honored the tomb of Alexander the Great with offerings of a golden crown and flowers. When he was asked if he would like to see the tombs of the Ptolemies as well, he refused, saying that “he wanted to see a king, not corpses.” The new ruler of the world was extravagantly honoring the first ruler of the world, but he did have a point. There was a sense in which Alexander had stayed alive, while others died. The Greeks of the Hellenistic period continued to live in Alexander’s shadow. It was his ambitions that had laid the foundations of the new world, and his spirit lingered in its constant and frequently brilliant search for new horizons.


Augustus’ contempt had a long history, however. Until recently, it was not uncommon for accounts of ancient history to skip from Alexander’s death to the rise of Rome, ignoring the decades in between as though nothing important happened: men turned into mere corpses, but did not bestride the world the way a true king does. This attitude is misplaced. As a result of Alexander’s conquests, Greeks and Macedonians came to rule and inhabit huge new territories. They were living, in effect, in a new world, and this made the Hellenistic period one of the most thrilling periods of history, as everyone at every level of society, from potentates to peasants, adjusted to their new situations. The period pulsates with fresh energy and with a sense—reminiscent of the excitement of the Archaic period—that anything was possible, that there were further boundaries, cultural as well as geographical, to discover and overcome.


Featured image credit: “The Triumph of Aemilius Paulus” provided by The Metropolitan Museum of Art by Carle Vernet. Public domain via Wikimedia Commons.



The post History in 3 acts: a brief introduction to Ancient Greece [excerpt] appeared first on OUPblog.


Published on March 21, 2018 03:30

March 20, 2018

Celebrating the first women Fellows of the Linnean Society of London

Diversity in science is in the news today as never before, and it is hard to imagine what it might have been like to be a woman scientist in 1900, knocking at the doors of learned societies requesting that women be granted the full advantages of Fellowship. It might seem trivial to us now, but in the past these societies were the primary arena in which discussions took place, contacts were made and science progressed. In the early 20th century, science itself was only just finding its feet as a professional activity; for much of the previous century it had been the dominion of ‘amateurs’, like Charles Darwin, many of whom were wealthy enough to fund their own activities. So being barred, not just from Fellowship of the learned societies but also from being able to attend meetings, essentially meant exclusion from the day to day workings of science.


Coinciding with the 230th anniversary of its foundation, the Linnean Society of London is holding a meeting on 21 March to celebrate its first women Fellows—the vote to elect 16 women to the Fellowship in November 1904 was a landmark for participation of women in the science of Natural History. It followed several years of requests from women who wished to become part of the workings of science, and also to elicit changes to the Charters that governed Fellowship. These women were from all branches of natural history—botany, zoology, geology, microbiology (one even studied organisms in rum), and spanned the gamut from Dames, to Professors, to Misses. In 1906 the husband of one of their number commissioned a painting from James Stant depicting the admittance of some of these ‘first women’ that today hangs in pride of place on the staircase in the Society’s premises in Burlington House.


The Zoological Society of London had admitted women as Fellows since 1826, and the Botanical Society admitted women as Members on the same terms as men in 1837, so the Linnean was behind in a sense; but it is worth bearing in mind that the Royal Society did not routinely admit women until the 1940s—sometimes change is a long time coming. It was quite radical of the Linnean Society to work to change its Royal Charter to allow the admission of women to the Fellowship – changing a Royal Charter is no trivial matter. It is a testament to the drive of the Council and Officers, who may not have been united on this matter, that it happened in a mere four years.


But was there a specific tipping point for this change? In 1900, Mrs. Marian Farquharson, a botanist who had helped to publish a field guide to British ferns, requested that “duly qualified women should be eligible for ordinary Fellowship and, if elected, there should be no restriction forbidding their attendance at meetings”. This insistence on attendance at meetings was important; other societies allowed women to be members, but they were barred from attending meetings (Farquharson had been elected as the first female Fellow of the Royal Microscopical Society in 1885 but was not allowed to attend). At first she was rebuffed by the Council of the Linnean Society, but eventually won the day, through sheer persistence (the Society holds a plethora of correspondence from Farquharson) and the vocal support of some members of Council. Ironically, she was the only one of the 16 proposed Fellows who was not admitted on that day in November 1904!


Natural history seems an obvious place for women to have made breakthroughs in acceptance—after all, I was often told in my youth, “botany is for girls”. Botany was traditionally a place where women could make contributions without seemingly threatening the social order, and in the late 18th and 19th centuries there was a thriving culture of women writing botanical books for children that stressed thinking for oneself and questioning authority.


Carolus Linnaeus (or Carl von Linné)—the ground-breaking 18th-century Swedish botanist whose collections of plants, animals, and books are now cared for by the Linnean Society of London—had female pupils, even in the highly male-dominated world of the late Enlightenment, though they never went on worldwide journeys like his male ‘Apostles’. Famously, Linnaeus’s ‘sexual system’ of organising plants by counting their male and female parts was considered scandalous by some in the 18th century—one botanist even went so far as to suggest that it rendered botany a pursuit inappropriate for women!


Times have changed, though looking back and celebrating the achievements of female scientists who struggled to attain their place at the table of science in no way diminishes the challenges faced by women in science today. Despite over 100 years of women being admitted as Fellows of the Linnean Society, in 2017 only about 22% of Fellows were female—given the relative parity of men and women in undergraduate biology degrees, this indicates there is still room for improvement. But it is important not to treat the participation of women—or of diverse communities in general—in science as a difficult or impossible task, or to consider broadening participation as some sort of necessary evil. Diversity is good for science: it brings new perspectives and ideas, and inclusion helps institutions like learned societies to achieve their aims.


The meeting on 21 March will not only celebrate the pioneering women who pressed for, and achieved, Fellowship of the Linnean Society, but will also celebrate the efforts and successes of female natural historians in documenting and describing the world around us. The event seeks to explore issues confronting today’s female natural historians, such as imposter syndrome (the feeling that one is not really good enough) and the vagaries of fieldwork. These issues affect men as well, but the Society hopes the day will allow all natural historians to come together and discuss how to expand the diversity of our community, becoming a catalyst for change.


Featured image credit: Used with kind permission of The Linnean Society of London.


The post Celebrating the first women Fellows of the Linnean Society of London appeared first on OUPblog.


Published on March 20, 2018 05:30
