Oxford University Press's Blog
August 15, 2014
An appreciation of air conditioning
This week—August 15, to be exact—marks the climax of Air Conditioning Appreciation Days, a month-long tribute to the wonderful technology that has made summer heat a little more bearable for millions of people. Census figures tell us that nine out of ten Americans have central air conditioning, or a window unit, or more than one, in their homes; in their cars, it’s nearly universal. Go to any hardware or home goods store and you’ll see a pile of boxes containing no-fuss machines in a whole range of sizes, amazingly affordable, plop-’em-in-the-window-and-plug-’em-in-and-you’re-done. Not only do we appreciate the air conditioner, but we appreciate how easy it is to become air conditioned.
When it comes to cool, we’ve come a long way. But in earlier times, it was nowhere near as simple for ordinary citizens to get summertime comfort.
One of the first cooling contraptions offered to the public showed up around 1865, the brainchild of inventor Azel S. Lyman: Lyman’s Air Purifier. This consisted of a tall, bulky cabinet that formed the headboard of a bed, divided into various levels that held ice to cool the air, unslaked lime to absorb humidity, and charcoal to absorb “minute particles of decomposing animal and vegetable matter” as well as “disgusting gases.” The device relied on the principle that hot air rises and cool air sinks: air would (theoretically) enter the cabinet under its own power, rise to encounter the ice, be dried by the lime, purified by the charcoal, and finally ejected—directly onto the pillow of the sleeper—“as pure and exhilarating as was ever breathed upon the heights of Oregon.” Lyman announced this marvel in Scientific American, and in the same issue ran an advertisement looking for salesmen. Somehow the Air Purifier didn’t take off.
More interesting to homeowners was the device that showed up in 1882, the electric fan. Until then, fans were powered by water or steam, usually intended for public buildings rather than homes, and most of them tended to circulate air lazily. But the electric model was quite different, with blades that revolved at 2,000 rpm—“as rapidly as a buzz saw,” observed one wag, and for years they were nicknamed “buzz” fans. They were some of the very first electrically powered appliances available for sale. They were also exorbitant, costing $20 (in modern terms, about $475). But that didn’t stop the era’s big spenders from seizing upon them eagerly. Delighted reviewers of the electric fan claimed that it was “warranted to lower the temperature of a room from ninety-five to sixty degrees in a few minutes” and that its effect was “like going into a cool grove.”
Around the turn of the century the fan was combined with ice, producing an eight-foot-tall metal object that its inventor called “The NEVO, or Cold Air Stove.” The principle was simple: air entered through a small pipe at the top, was pulled by a fan through the NEVO’s body—which had to be stuffed daily with 250 pounds of ice and salt to provide the cooling—and was then discharged through an opening at the bottom. “It dries, washes, and purifies the air.” But the NEVO had more in common with a gigantic ice cream freezer than with actual temperature control, and the smallest model cost $80 (nowadays, $1,700), plus $100 per season (over $2,000) to operate, so it didn’t get far.

By this time, a young engineer named Willis Carrier had developed a mechanical system that could actually cool the air and dry it, the Apparatus for Treating Air. But this was machinery of the Giant Economy Size, and used only in factories. In 1914, one wealthy gent asked Carrier to install a system in his new forty-bedroom Minneapolis home, and indeed the system was the same type that “a small factory” would use. Unfortunately, this proud homeowner died before the house was completed, and historians speculate that the machinery was never even turned on.
It wasn’t until 1929 that Frigidaire announced the first home air conditioner, the Frigidaire Room Cooler. This wasn’t in any way a lightweight portable. The Room Cooler consisted of a four-foot-tall metal cabinet, weighing 200 pounds, that had to be connected by pipes to a separate 400-pound compressor (“may be located in the basement, or any convenient location”). And it cost $800, in those days the same as a Pontiac roadster. While newspaper and magazine articles regarded the Room Cooler as a hot-weather miracle, the price (along with the setup requirements) meant that its customers came almost solely from the ranks of the rich, or businesses with cash to burn. Then fate intervened only months after the Room Cooler’s introduction when the stock market crashed, leaving very little cash for anyone to burn. Home air conditioning would have to wait until the country climbed back from the Depression.
Actually, it waited until the end of World War II, when the postwar housing boom prompted brand-new homeowners to fill their houses with the latest comforts. Along with television, air conditioning was at the top of the wish list. And at last, the timing was right; manufacturers were able to offer central cooling, as well as window units, at affordable prices. The compressor in the backyard, or the metal posterior droning out the window, became bona fide status symbols. By 1953, sales topped a million units—and the country never looked back.
Appreciation? Of course. And perhaps, adoration.

Engaged Buddhism and community ecology
For the most part, Buddhists have historically been less concerned with explaining the world than with generating personal peace and enlightenment. However, the emergence of “engaged Buddhism” — especially in the West — has emphasized a powerful commitment to environmental protection, based in no small part on a fundamental ecological awareness that lies at the heart of Buddhist thought and practice.
People who follow ecological thinking (including some of our hardest-headed scientists) may not realize that they are also embracing an ancient spiritual tradition, just as many who espouse Buddhism — succumbing, perhaps, to its chic, Hollywood appeal — may not realize that they are also endorsing a worldview with political implications that go beyond bumper stickers and trendy, feel-good support for a “free Tibet.”
Biologists readily acknowledge that living processes are connected; after all, we breathe and eat in order to metabolize, and biogeochemical cycles are fundamental to life (and not merely to ecology courses). Nonetheless, biology — like most Western science — largely seeks to reduce things to their simplest components. Although such reductionism has generally paid off (witness the deciphering of DNA, advances in neurobiology, etc.), ecologists in particular have also emphasized the stunningly complex reality of organism-environment interconnection as well as the importance of biological “communities” — which doesn’t refer to the human inhabitants of a housing development.
Although “community ecology” — the study of the complicated relationships among a community’s living and nonliving components — has become a crucial part of ecological research, recognizing the existence — not to mention the importance — of such interconnectedness nonetheless requires constant struggle and emphasis, probably because the Western mind deals poorly with boundary-less notions. This isn’t because Westerners are genetically predisposed to roadblocks that don’t exist for our Eastern colleagues, but simply because, for reasons that no one seems as yet to have unraveled, the latter’s predominant intellectual traditions have accepted and embraced the absence of such boundaries.
In The Jungle Book, Rudyard Kipling captured the power of such recognition in the magical phrase by which Mowgli the human boy gained entrance into the life of animals: “We be of one blood, you and I.” Being of one blood, and acknowledging it, is also a key Buddhistic concept, reflected as well in the biochemical reality that human beings share more than 99% of their genes with each other. At the same time, there is no reason why Mowgli’s meet-and-greet should be limited to what transpires between human beings. After all, just as the jungle-boy interacted with other creatures — wolves, monkeys, an especially benevolent snake, panther, and bear, as well as a malevolent tiger — everyone’s relationship to the rest of the world, living and even nonliving, is equally intense. Thus, we share fully 98% of our genes with chimpanzees, and more than 92% with mammals generally; modern genetics confirms that we literally are of one blood, just as modern ecology — along with modern Buddhism — confirms that the alleged distinction between organism and environment is an arbitrary error of misperception, and not the way the world really is.
The interpenetration of organism and environment also leads both ecologists and Buddhists to a more sophisticated — and often paradoxical — rejection of simple cause-and-effect relationships. The absence of clear-cut boundaries among natural systems, plus the multiplicity of relevant factors, means that no single factor can be singled out as the cause — and indeed, the impact of these factors is so multifaceted that no single “effect” can be recognized either. Systems exist as a whole, not as isolated causative sequences. Are soils the cause or effect of vegetation? Is the prairie the cause or effect of grazing mammals? Is the speed of a gazelle the cause or effect of the speed of a cheetah? Do cells create DNA or does DNA create cells? Chickens and eggs, anyone? “Organism” and “environment” interconnect and interpenetrate such that neither can truly be labeled a “cause” or “effect” of the other.

It has long been known, for example, that organisms generate environments: beavers create wetlands, ungulates crop grasses and thereby maintain prairies, while lowly worms — as Darwin first demonstrated — are directly responsible for creating rich, loamy soil. On the other hand (or rather, by the same token), it can equally be concluded that environments generate organisms: the ecology of North America’s grass prairie was responsible for the existence of bison genes, just as causation proceeds in the other direction, too. Even as ecologists have no doubt that organism and environment are inseparable, ethologists — students of animal behavior — are equally unanimous that it is foolhardy to ask whether behavior is attributable to nature or nurture, i.e. genotype or environment. Such dichotomies are wholly artificial … something that Buddhists would call maya.
Western images are typically linear: a train, a chain, a ladder, a procession of marchers, a highway unrolling before one’s speeding car. By contrast, images derived from Indian thought (which gave rise to both Hinduism and Buddhism) are more likely to involve circularity: wheels and cycles, endlessly repeating. Although there is every reason to think that evolution proceeds as an essentially one-way street, Eastern cyclicity is readily discernible not only in ecology — a discipline that is intensely aware of how every key element and molecule relevant to life has its own cycling pattern — but also in the immediacy of cell metabolism, reflected, for example, in the Krebs cycle, or the wheel of ATP, the basic process whereby energy is released for the metabolism of living cells.
At the same time, and as we have noted earlier, there is no single entity labeled “Buddhism,” just as there is no single phenomenon identifiable as “Christianity,” “Judaism,” or “Islam.” And certain schools of Buddhism (e.g. Zen) are more sympathetic to ecological ethics than are others (e.g. Theravada, which remains more committed to personal enlightenment). To be sure, the science of ecology is partitioned as well, to some extent between theoreticians (fond of mathematical models) and field workers (more inclined to get their hands dirty in the real world), but also between ecology as a hard science and ecology in the broader sense of ethical responsibility to a complex natural world. Most spiritual traditions have some sort of moral relationship to the natural world built into them, from Christian stewardship to shamanic identification. Yet another reality, and a regrettable one, is that for the Abrahamic religions in particular (Judaism, Christianity, and Islam), separateness — of soul from body, individuals from each other, heaven from hell, human beings from the rest of the natural world, and so forth — is the primary operating assumption. This is assuredly not the case with Buddhism.
For me (and I assuredly am not alone in this), Buddhism is not a religion but rather, a practice system and philosophical perspective. And it is with pleasure and optimism that I point to the convergence between Buddhism and biology generally — and ecology in particular — as something that is not only fascinating but also deeply reassuring.

Job: A Masque for Dancing by Ralph Vaughan Williams
Michael Kennedy has described Job as one of Vaughan Williams’s mightiest achievements. It is a work which, in a full production, combines painting (the inspiration for the work came from a scenario drawn up by Geoffrey Keynes based on William Blake’s Illustrations of the Book of Job), literature (the King James Bible), music, and dance. The idea of a ballet on the Blake Job illustrations was conceived by Geoffrey Keynes, whose wife was a Darwin and a cousin of Vaughan Williams, assisted by another Darwin cousin, Gwen Raverat, whom Keynes asked to design the scenery and costumes. They decided to keep it in the family and approached Vaughan Williams about writing the music. The idea took such a hold on the composer that he found himself writing to Mrs Raverat in August 1927: ‘I am anxiously awaiting your scenario – otherwise the music will push on by itself which may cause trouble later on’.
Out of all this emerged a musical work that exhibits the composer at the height of his powers. Often ballet music can seem only half the story when it is played apart from the dancing it was written for, but in this case the composer fully realised that an actual danced production was by no means assured (Diaghilev had firmly turned down Keynes’s offer of the ballet for the Ballets Russes) and wrote a powerful piece for full orchestra, including organ, which could stand independently in a concert. That was indeed how Job received its first and second performances, the first in Norwich in October 1930 and the second in London in February 1931, both under the composer’s baton. It is dedicated to Adrian Boult. The first danced production was given by the recently formed Camargo Society at the Cambridge Theatre on 5 July 1931. It was choreographed by Ninette de Valois and conducted by Constant Lambert, who (much to the composer’s admiration) adeptly reduced the orchestration because the pit at the Cambridge Theatre could not accommodate the full orchestra specified by the composer. The part of Satan was danced by Anton Dolin.
Opinion was divided at the time as to how well the work stood up to performance independently of the dance dimension, but now, with the wisdom of hindsight, we can see it as having the stature of a symphony in terms of its overall shape and length. The careful placing of different elements in the score – the heavenly, the earthly, and the infernal, each characterised by a different style of music – emphasises the sense of symphonic unity. In the music for Satan we hear a foretaste of the savagery which was to cause so much astonishment in the Fourth Symphony, on which the composer started work almost at once after completing Job. In the music for Job and his family we find elements of the calm we have come to associate with the Fifth Symphony, while the music for God and the ‘sons of the morning’ (Saraband, Pavane, and Galliard) presents a broad diatonic sweep at the beginning and then towards the end of the work. This will become apparent to listeners when Job is performed at the Promenade Concert on 13 August 2014. They will also be able to draw comparisons between the ethereal violin solo in The Lark Ascending and the violin solo in ‘Elihu’s dance of youth and beauty’ in Scene VII.
It is no accident that two of the pieces, the Pavane and Galliard, together with the calm Epilogue, were played at Vaughan Williams’s funeral at Westminster Abbey on 19 September 1958.
Headline image credit: symphony orchestra concert philharmonic hall music. Public domain via Pixabay.
Sidebar image credit: Ralph Vaughan Williams. Lebrecht Archive.

Cancer immunology and immunotherapy
My career began in the 1970s in the field of cancer immunology, a subject which nowadays is at the forefront of cancer research, holding the promise of delivering new therapies for treating patients suffering from a wide range of cancers. Many scientists working in the field are not aware that the very first research papers documenting immunity against cancer were published in 1955 in the British Journal of Cancer by Robert (Bob) Baldwin, working in Nottingham, England. Almost sixty years on from his pioneering work, we have a greater understanding of how the body’s immune system acts as a surveillance mechanism, constantly patrolling the body to destroy newly formed tumor cells, and of the conditions under which tumors escape immune detection and progress unchecked. In the 1960s and 1970s there was an explosion in research aimed at stimulating host immunity, without the in-depth knowledge we now have of the genetic basis of cancer, of malignant progression, and of the spread of cancer beyond the confines of the primary tumor.
Robert Baldwin pioneered research into immune-stimulants, using, for example, bacteria such as the attenuated BCG organism employed to immunize against tuberculosis, and led the field in the production and clinical application of monoclonal antibodies; this has resulted in several antibody-based therapies being introduced into the clinic. For example, Herceptin, an antibody that recognizes a protein known as Her2/neu, is used to treat a sub-group of breast cancer patients. Because other tumors express this protein, vaccine therapy against Her2/neu in prostate as well as breast cancer is undergoing trials at the present time. I was fortunate enough to work at the Nottingham Cancer Research Campaign Laboratories in the 1970s, where this experience acted as a springboard for my continuing interest and career in immunology, principally cancer immunology.

Having subsequently worked at Sheffield Medical School, at the National Cancer Institute in Bethesda, and in Philadelphia, I found myself back in Nottingham in 1996, heading my own team of scientists, now established as The John van Geest Cancer Research Centre, where our research interests are clearly focused on (1) identifying new cancer genes and antigens for developing personalized medical care through the use of cancer biomarkers, and (2) developing new forms of immunotherapy, in which the combination of therapeutic vaccination and treatments designed to decrease the effectiveness of tumor escape from immune detection is truly “center stage.” The work at our Institute and other institutes will, I believe, make a real impact on patient survival and clinical management in the next decade. All of the major pharmaceutical companies have entered the race to develop immunotherapy products, and a variety of approaches are now undergoing clinical trials or are already approved for use by the FDA.
What makes us believe in the immune system as a means of treating cancer is the fact that where immunity is compromised, either through the use of immune-suppressive drugs or by infection with, for example, HIV, cancers arise. We now have an in-depth understanding of how cancer occurs and progresses, of the immune cells that mediate anti-cancer activity, and of the molecules that switch immunity off, causing a “depression” in immune function. The controversial research into “cancer stem cells” now requires considerable resource and research input over the next decade or so. It will be important to define their role in tumor progression and determine their susceptibility to immunotherapy. Their mere existence in many cancers has been difficult to establish, since they represent a minority population within tumors, although where they have been identified they appear to represent a highly aggressive, therapy-resistant cell type with the ability to self-renew and give rise to differentiated cells within the tumor mass. This finding, together with our increasing understanding of tumor cell plasticity, will be important to consider as we aim to utilise the power of the immune system to successfully treat cancer.
Headline image: Cancer cells by Dr. Cecil Fox (Photographer) for the National Cancer Institute. Public domain via Wikimedia Commons.

Biting, whipping, tickling
The following is an extract from Comedy: A Very Short Introduction, by Matthew Bevis. It explores the relationship between laughter and aggression.
‘Laughter is men’s way of biting,’ Baudelaire proclaimed. The sociologist Norbert Elias offered a rejoinder: ‘He who laughs cannot bite.’ So does laughter embody or defuse aggression? One theory, offered by the neuroscientist Vilayanur Ramachandran, is that the laugh may be an aborted cry of concern, a way of announcing to a group that there has been a false alarm. The smile could operate in a similar way: when one of our ancestral primates saw another individual from a distance, he perhaps initially bared his canines as a threatening grimace before recognizing the individual as friend, not foe. So his grimace was abandoned halfway to produce a smile, which in turn may have evolved into a ritualized human greeting. Another researcher, Robert Provine, notes that chimp laughter is commonly triggered by physical contact (biting or tickling) or by the threat of such contact (chasing games) and argues that the ‘pant-pant’ of apes and the ‘ha-ha’ of humans evolved from the breathlessness of physical play. This, together with the show of teeth necessitated by the play face, has been ritualized into the rhythmic pant of the laugh. Behind the smile, then, may lie a socialized snarl; and behind the laugh, a play fight. But behind both of these facial expressions lie real snarls and real fights.
People often claim to be ‘only joking’, but many a true word is spoken in jest. Ridicule and derision are both rooted in laughter (from ridere, to laugh). The comic may loiter with shady intent on the borders of aggression; ‘a joke’, Aristotle suggested, ‘is a kind of abuse’. And comedy itself can be abused as well as used—racist and sexist jokes point to its potential cruelty. As Waters says of Price’s stand-up act in Trevor Griffiths’s Comedians (1975): ‘Love, care, concern, call it what you like, you junked it over the side.’ Comedy is clearly at home in the company of insults, abuse, curses, and diatribes, but the mode can also lend an unusual inflection to these utterances. From Greek iambi to the licensed raillery of the Roman Saturnalia, from Pete and Dud on the implications of being called a fucking cunt to the game of The Dozens, in which numerous aspersions are cast upon Yo Mama’s character, something strange happens to aggression when it is stylized or performed. W. H. Auden pondered choreographed exchanges of insult—from Old English flyting to the modern-day exchanges of truck drivers— and observed that ‘the protagonists are not thinking about each other but about language and their pleasure in employing it inventively … Playful anger is intrinsically comic because, of all emotions, anger is the least compatible with play.’ From this perspective, comedy is the moment at which outrage becomes outrageous. Some kinds of ferocity can be delectable.
‘Playful anger’ sounds like a contradiction in terms, yet in Plato’s Philebus, Socrates notes ‘the curious mixture of pleasure and pain that lies in the malice of amusement’. Descartes suggests in The Passions of the Soul (1649) that ‘Derision or scorn is a sort of joy mingled with hatred.’ This chapter examines such curious mixtures and minglings of feeling by considering modes of comedy that seem to have a target in their sights—versions of satire, mock-heroic, parody, and caricature. We might turn first to the satirist; Walter Benjamin identified him as ‘the figure in whom the cannibal was received into civilization’. So the satirist is at once savage and civilized; he cuts us up after having been granted permission (perhaps even encouraged) to take that liberty. What is it, then, that we need this cannibal to do for us? The satirist, it would initially appear, is the comedian who allows audiences to join him on a mission. Satire is a scourge of vice, a spur to virtue; Horace imagines his ideal listener as ‘baring his teeth in a grin’. So far so good, but the listener may also get bitten from time to time: ‘What are you laughing at?’ the poet asks us, ‘Change the name and you are the subject of the story.’ Indeed, as Hamlet would later quip, ‘use every man after his desert, and who should scape whipping?’
Image credit: Business team laughing, © YanC, via iStock Photo.

August 14, 2014
The terror metanarrative and the Rabaa massacre
Just after dawn prayers on the morning of 14 August 2013, Egyptian security forces raided a large sit-in based at Cairo’s Rabaa al-Adawiyya Square and another at al-Nahda Square. Six weeks earlier, military leader and Minister of Defense Abdel Fattah al-Sisi staged a coup to remove Egypt’s first democratically elected president, the Muslim Brotherhood’s Mohamed Morsi, from office. In response, hundreds of thousands of Egyptians across the country congregated in public spaces to protest the coup and the perceived reversal of the revolutionary moment that began in early 2011 with the overthrow of Hosni Mubarak’s three-decade long authoritarian rule.
As they opened fire on the encampment, security forces killed over one thousand Egyptians. The exact figure has been difficult to ascertain, in part because officials reportedly burned the bodies of those killed during the course of the twelve-hour operation. Graphic images of the charred interior of the Rabaa al-Adawiyya Mosque began making the rounds on social media within hours of the raid. A recently published investigative report by Human Rights Watch contends that “police and army forces systematically and intentionally used excessive lethal force in their policing, resulting in killings of protesters on a scale unprecedented in Egypt.” The report also asserts that no Egyptian officials have been held accountable for the Rabaa massacre, while all state inquiries have essentially justified the army’s actions.
Just as shocking as the new military regime’s repressive clampdown on the Islamist opposition has been the widespread support for such measures across broad swaths of Egyptian society. In addition to the hundreds of thousands who supported Morsi’s overthrow by taking to the streets on 30 June, a month later Sisi called upon Egyptians to rally in Tahrir Square in support of the military’s aim to “fight terrorism”—code for the continued clampdown on Morsi’s supporters. It is under the shroud of this popular support that the state could commit the horrors at Rabaa without batting an eye.

One year later, there is little moral outrage in Egypt over the appalling course of events at Rabaa. Rather than offer up a moment of collective introspection, the passage of time and the newfound political stability under Sisi have only more deeply entrenched the dominant narrative that the protesters got what they deserved. In Egypt’s “new normal,” popular culture has internalized the necessity of extreme state violence against a perceived minority of violent political agitators.
To be sure, the critiques of the Muslim Brotherhood spanned a wide array of issue areas, from the group’s vision for an Islamic government to its contentious interactions with state institutions and revolutionary forces. However, the emphasis on the group’s supposed inclinations toward organized violence is singled out here for its propensity to validate egregious human rights violations by state authorities in the name of security.
The dehumanization of thousands of ordinary men, women, and children, many of whom are not even members of the Muslim Brotherhood, occurred as state officials and media personalities continually utilized the imagery of terrorism and violent extremism to depict the protesters. Footage of police raids was set to the soundtracks of Hollywood action films and televised with large captions reading “Egypt Fights Terrorism” in Arabic as well as English.
Given its enduring quality, however, it would be a mistake to assume that this incitement campaign against the Muslim Brotherhood is a recent incarnation. Far from being a makeshift construct that aided in Sisi’s alarmingly rapid political ascent, the recent application of the “war on terror” motif stems from a historic struggle over the Egyptian national narrative that pits the state against one of the country’s oldest social movement organizations.
In their attempt to overturn a popular mass movement that had made limited revolutionary gains, counter-revolutionary forces constructed a broad narrative that placed the historical trajectory of the Muslim Brotherhood within the state’s struggle to combat terrorism that dated back to the mid-twentieth century. To press its case to a public that is largely ignorant of the historical nuances involved, the anti-Muslim Brotherhood movement made exceptionally anachronistic use of various flashpoints in modern Egyptian history.
Shortly after Morsi’s election in 2012, during a commemorative event for the sixtieth anniversary of the 23 July 1952 revolution, self-declared Nasserists lamented that Egyptians had not learned the lessons of Gamal Abdel Nasser’s experience with the Muslim Brotherhood. “They were never to be trusted,” said one prominent spokesperson for the group. In successive weeks, other writers and commentators referred to the campaign of political violence that dated back to the 1940s, placing the blame squarely on the Muslim Brotherhood and its brand of Islamic activism.
Elsewhere, the chorus of critics recalled the turbulent 1970s and the rise of underground militant groups that they attributed to the Muslim Brotherhood and in particular the writings of Sayyid Qutb, the organization’s leading ideologue until his execution by the Nasser regime in 1966. The rise of an Islamic insurgency culminated with the assassination of Anwar al-Sadat in 1981. The chronology continues well into the Mubarak era, as prominent media personalities impugned the Muslim Brotherhood for its supposed role in the outbreak of anti-state violence in the mid-1990s.
If one follows this chronology to its logical conclusion, one could reasonably believe that the Muslim Brotherhood was founded with an ideological bent toward violent, anti-state contention, which it pursued through the active development of a military wing and then sustained through successive waves of terrorist acts over the course of eighty-six years.
The problem with the terror metanarrative is that it represents a gross misreading of history and a transparent effort by the state to paint its opposition with the broad brush of extremism. In reality, the Muslim Brotherhood confronted the question of political violence at various stages in the development of its activist mission. The appearance of its militia during the 1940s is well documented and has been examined at length by numerous scholars. Many of the recent references to this research, however, fail to mention that the Muslim Brotherhood’s armed wing existed within the chaotic field of post-war Egyptian politics in which every major political party and social actor was as likely to fight its battles in the streets as much as in the parliament or the newspapers.
The Secret Apparatus, responsible for covert attacks against public officials in the late 1940s, was dismantled following Nasser’s repression of the Muslim Brotherhood in 1954. As it reorganized itself in later years, the remnants of the Muslim Brotherhood’s core leadership internalized many of the elements of this nebulous section of the organization—its strict hierarchical structure, discipline across the ranks, emphasis on secrecy and indoctrination—but notably not its inclinations toward violence. In other words, the proponents of the Secret Apparatus, figures like Mustafa Mashhur and Kamal al-Sananiri, believed in its tenets as a means of enduring state repression, not actively resisting it.
When the Muslim Brotherhood resumed its activism in the mid-1970s after a two-decade absence, it was in the shadow of major developments within the Islamic movement that covered both the ideological and the organizational realms. The pressures of a repressive political climate and the widespread use of torture in Nasser’s prisons threatened to fracture the Islamic movement, leading a small minority of former Muslim Brotherhood members and impressionable young Islamic activists to adopt a militant outlook that found inspiration in Qutb’s impassioned and uncompromising view of the Nasserist state. Qutb’s most fervent supporters believed Egyptian society to have become so corrupted by a secular dictatorship that the gradual reformist mission of the Muslim Brotherhood would simply not suffice. Instead, they argued for the path of violent revolution led by a vanguard of true believers.
For all the attention it has received in recent years, this view never prevailed among the mainstream Muslim Brotherhood leaders, most of whom worked actively to discredit it. In 1969, the group’s imprisoned leader, Hasan al-Hudaybi, authored a tract entitled Preachers, Not Judges, which argued forcefully in favor of a reformist approach to political empowerment that hinged upon popular preaching and mobilization across all segments of Egyptian society. Hudaybi directly repudiated the practice of “takfir,” or declaring fellow Muslims to be unbelievers, limiting the role of Islamic activists to one of “du‘a” or callers to the faith.
In spite of the alarming rise of a number of Islamic militant groups that committed notorious crimes throughout the late 1970s, the more important (and certainly more enduring) story of the decade was the ability of the Muslim Brotherhood to reconstitute itself as the chief representative of the mainstream Islamic movement. Hudaybi’s successor, a lawyer named ‘Umar al-Tilmisani, oversaw the group’s reemergence by constructing an Islamic call, or “da‘wa” that found widespread appeal within a new generation of Islamic activists across Egypt’s colleges and universities. By the end of the Sadat era, hundreds of thousands of Egyptians had found in the Muslim Brotherhood a forum for oppositional politics premised on building a strong social base and gradual engagement with state institutions. In fact, as several student leaders from the era have since argued, were it not for the moderate and gradualist Islamism packaged and distributed by Tilmisani’s Muslim Brotherhood, the spread of militancy among the nation’s disaffected youth would have been far more pervasive.
That sentiment is worth recalling as one unpacks the implications of the coup government’s efforts to eradicate one of the country’s oldest social movements from Egyptian society. In the past year, the organization was declared illegal by judicial decree as well as a cabinet decision. As the state’s campaign of intimidation, indefinite detentions, torture, and mass executions continues to descend upon the nation’s independent activists, Sisi’s pledge to destroy the opposition presents a haunting prospect. “There will be nothing called the Muslim Brotherhood during my tenure,” he told an interviewer last May. Sisi’s aggressive social engineering project is bound to hold grave consequences for a country that is already reeling from several years of social and economic volatility and a regional insurgency that became more potent after the military’s takeover.
Despite its desperate attempts to do so, the Sisi regime has yet to demonstrate that the Muslim Brotherhood has had a hand in any of the militant bombings that have occurred since Morsi’s overthrow. For all of its faults—and they are many—the organization has maintained a consistent record of non-violent contention against successive authoritarian rulers, having reasserted its ideological as well as institutional mission in the 1970s.
As recent events in neighboring states have demonstrated, when the avenues for the legitimate expression of an Islamically oriented political program are closed, extremism prevails. The alarming rise of the Islamic State of Iraq and Syria is just one such example. In a recent online video, an ISIS spokesman commenting on events in Egypt reserved the bulk of his condemnation for Morsi, not Sisi. He declared the imprisoned Muslim Brotherhood leader “an apostate” and relished the prospect of serving as his executioner. The greatest threat to religious militancy is not an equally violent state-sponsored secularism, but rather an open political climate that accommodates competing modes of activism irrespective of their religious, sectarian, or ideological leanings.
By conflating the Muslim Brotherhood’s legacy of oppositional politics with violent incarnations of anti-state contention, the terror metanarrative seeks to establish, on a false basis, the state’s license to respond to perceived threats with all means at its disposal. The memory of the massacre at Rabaa will live on as a reminder of the painfully high cost of the abuse of history.
Headline image credit: AFP PHOTO / MOSAAB EL-SHAMY, CC BY-NC 2.0 via Flickr.

Ethical issues in managing the current Ebola crisis
Until the current epidemic, Ebola was largely regarded as not a Western problem. Although fearsome, Ebola seemed contained to remote corners of Africa, far from major international airports. We are now learning the hard way that Ebola is not—and indeed was never—just someone else’s problem. Yes, this outbreak is different: it originated in West Africa, at the border of three countries, where the transportation infrastructure was better developed, and was well under way before it was recognized. But we should have understood that we are “all in this together” for Ebola, as for any infectious disease.
Understanding that we were profoundly wrong about Ebola can help us to see ethical considerations that should shape how we go forward. Here, I have space just to outline two: reciprocity and fairness.
In the aftermath of the global SARS epidemic that spread to Canada, the Joint Centre for Bioethics at the University of Toronto produced a touchstone document for pandemic planning, Stand on Guard for Thee, which highlights reciprocity as a value. When health care workers take risks to protect us all, we owe them special concern if they are harmed. Dr. Bruce Ribner, speaking on ABC, described Emory University Hospital as willing to take two US health care workers who became infected abroad because they believed these workers deserved the best available treatment for the risks they took for humanitarian ends. Calls to ban the return of US workers—or treatment in the United States of other infected front-line workers—forget that contagious diseases do not occur in a vacuum. Even Ann Coulter recognized, in her own unwitting way, that we owe support to first responders for the burdens they undertake for us all when she excoriated Dr. Kent Brantly for humanitarian work abroad rather than in the United States.
We too often fail to recognize that all the health care and public health workers at risk in the Ebola epidemic—and many have died—are owed duties of special concern. Yet unlike health care workers at Emory, health care workers on the front lines in Africa must make do with limited equipment under circumstances in which it is very difficult for them to be safe, according to a recent Wall Street Journal article. As we go forward we must remember the importance of providing adequately for these workers and for workers in the next predictable epidemics — not just for Americans who are able to return to the US for care. Supporting these workers means providing immediate care for those who fall ill, as well as ongoing care for them and their families if they die or are no longer able to work. But this is not all; health care workers on the front lines can be supported by efforts to minimize disease spread—for example, conducting burials so as to minimize risks of infection from the dead—as well as by unceasing attention to the development of public health infrastructures, so that risks can be swiftly identified and contained and care can be delivered as safely as possible.

Fairness requires treating others as we would like to be treated ourselves. A way of thinking about what is fair is to ask what we would want done if we did not know our position under the circumstances at hand. In a classic of political philosophy, A Theory of Justice, John Rawls suggested the thought experiment of asking what principles of justice we would be willing to accept for a society in which we were to live, if we didn’t know anything about ourselves except that we would be somewhere in that society. Infectious disease confronts us all with an actual possibility of the Rawlsian thought experiment. We are all enmeshed in a web of infectious organisms, potential vectors to one another and hence potential victims, too. We never know at any given point in time whether we will be victim, vector, or both. It’s as though we were all on a giant airplane, not knowing who might cough, or spit, or bleed, on whom, and when. So we need to ask what would be fair under these brute facts of human interconnectedness.
At a minimum, we need to ask what would be fair about the allocation of Ebola treatments, both before and if they become validated and more widely available. Ethical issues such as informed consent and exploitation of vulnerable populations in testing of experimental medicines certainly matter but should not obscure that fairness does, too, whether we view the medications as experimental or last-ditch treatment. Should limited supplies be administered to the worst off? Are these the sickest, most impoverished, or those subjected to the greatest risks, especially risks of injustice? Or, should limited supplies be directed where they might do the most good—where health care workers are deeply fearful and abandoning patients, or where we need to encourage people who have been exposed to be monitored and isolated if needed?
These questions of fairness occur in the broader context of medicine development and distribution. ZMapp (the experimental monoclonal antibody administered on a compassionate-use basis to the two Americans) was jointly developed by the US government, the Public Health Agency of Canada, and a few very small companies. Ebola has not drawn a great deal of drug development attention; indeed, infectious diseases more generally have not drawn their fair share of attention from Big Pharma, at least as measured by the global burden of disease.
WHO has declared the Ebola epidemic an international emergency and is convening ethics experts to consider such questions as whether and how the experimental treatment administered to the two Americans should be made available to others. I expect that the values of reciprocity and fairness will surface in these discussions. Let us hope they do, and that their import is remembered beyond the immediate emergency.
Headline Image credit: Ebola virus virion. Created by CDC microbiologist Cynthia Goldsmith, this colorized transmission electron micrograph (TEM) revealed some of the ultrastructural morphology displayed by an Ebola virus virion. Centers for Disease Control and Prevention’s Public Health Image Library, #10816. Public domain via Wikimedia Commons.

Living in the dark
It is well known that many of the permanent inhabitants of caves have evolved a bizarre, convergent morphology, including loss of eyes and pigment, elongation and thinning of appendages, and other adaptations to conditions of complete darkness and scarce food. These species include the European cave salamander, or olm, studied since the time of Lamarck.

Sometimes the extremes of morphology of cave animals strain credibility, as in the case of a springtail from a Cambodian cave with antennae several times the length of its body.

The adaptations shown by the olm and the springtail make sense in an environment of constant darkness and scarce food.
Yet species with morphologies like the olm and the Cambodian cave springtail occur in, and have evolved in, habitats that share only the physical feature of darkness with caves. There are seven different kinds of dark habitats that occur close to the boundary of lighted and dark habitats:
Extremely shallow ground water only a few centimeters underground that emerges in very small seepage springs
The underflow of rivers
The cracks and tiny solution tubes at the top of limestone deposits
The cracks and crevices in rocks
Shallow aquifers created by the precipitation of calcium carbonate in arid conditions
The soil
Lava tubes, which, unlike limestone caves, always form a few meters from the surface.
All of these habitats harbor de-pigmented and eyeless species, even though there is often abundant organic matter present, and there are strong seasonal and sometimes daily fluctuations in temperature and other environmental conditions. Except for lava tubes, none provide the allure and adventure of caves.
The first of these categories, the fauna of seepage springs and the associated groundwater, epitomizes the ecological and evolutionary conundrums these shallow subterranean habitats pose. The habitat itself consists of a mixture of rocks and leaf litter underlain by a clay layer, and it is relatively rich in organic matter (both dissolved and particulate) and nutrients. Essentially, these are miniature drainage basins that typically cover a few thousand square meters and appear to be little more than wet spots in the woods.

These seepage springs and their fauna were first described from sites on Medvednica Mountain in Croatia in 1963 by Milan Meštrov, in several papers that are largely forgotten.
What he did leave behind is a tongue-twisting name for the habitat—hypotelminorheic—perhaps not surprising for a French word with Greek roots first coined by a Croatian. Unlike deep caves, the hypotelminorheic is highly variable, and in many places the seepage spring dries up during the summer months, most of the water being retained in the colloidal clay. The habitat is so shallow that there are daily temperature fluctuations. In spite of all this, these seeps harbor a number of amphipod, isopod, and snail species with the long antennae and absence of eyes and pigment characteristic of the deep cave fauna.
In one case, there are enough species of one amphipod genus (Stygobromus) that the relative size of antennae can be compared, and no differences between cave and hypotelminorheic species were found. What did differ among subterranean habitats was body size: a repeated pattern of small animals in habitats with small dimensions (soil and the upper layer of limestone) and large animals in habitats with large dimensions (lava tubes and deep caves). The conclusion is that absence of light and habitat size, not availability of organic matter or environmental variability, drive the evolution of the convergent morphology of subterranean animals. In fact, divergence as well as convergence occurs in subterranean habitats. Cene Fišer and his colleagues from the University of Ljubljana have shown that when three or more species of the amphipod genus Niphargus are present in a subterranean site, their morphological divergence is greater than expected by chance. The task for biologists studying the subterranean fauna is to tease out the convergent and divergent aspects of adaptation.
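How does one decide that divergence is "greater than expected by chance"? The usual logic is a null model: compare the observed trait spread of the co-occurring species against many random assemblages of the same size drawn from a wider species pool. Below is a minimal sketch of that logic in Python; the trait values, pool size, and mean-pairwise-distance metric are illustrative assumptions, not the data or the exact method of the Fišer study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: rows = species, columns = standardized morphological
# traits. Three co-occurring Niphargus species vs. a regional pool of 30.
site_traits = rng.normal(size=(3, 5))
pool_traits = rng.normal(size=(30, 5))

def mean_pairwise_distance(traits):
    """Average Euclidean distance between all pairs of species."""
    n = len(traits)
    dists = [np.linalg.norm(traits[i] - traits[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

observed = mean_pairwise_distance(site_traits)

# Null model: 10,000 random three-species assemblages drawn from the pool.
null = [mean_pairwise_distance(pool_traits[rng.choice(len(pool_traits), 3, replace=False)])
        for _ in range(10_000)]

# One-sided p-value: how often chance assembly matches the observed divergence.
p = float(np.mean(np.array(null) >= observed))
print(f"observed divergence = {observed:.2f}, p = {p:.4f}")
```

If the observed divergence sits in the upper tail of the null distribution, the co-occurring species are more morphologically spread out than chance assembly would predict.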

The rise of choral jazz

The genre of ‘choral jazz’ has become increasingly prevalent among choirs, with the jazz mass as its ultimate form. Settings of the Latin mass by Lalo Schifrin and Scott Stroman have enjoyed a popular following, while more recently Bob Chilcott’s A Little Jazz Mass and Nidaros Jazz Mass have established the genre in the wider choral tradition, reaching choirs from across the choral spectrum and audiences young and old.
Composed in 2001, Will Todd‘s Mass in Blue is a further example of the genre, presenting an innate fusion of jazz elements within choral writing. The composer describes the piece as ‘a real watershed work’, combining his passion for jazz with his previous experience of church and choral music, including as a boy treble.
As a choral composer Will Todd rose to prominence with pieces such as My Lord has come and The Call of Wisdom (commissioned as part of the celebrations to mark Her Majesty’s Diamond Jubilee), while as a jazz pianist he records and performs extensively with his own trio. His playing can be heard, for example, on the Vasari Singers’ recording of Mass in Blue and on OUP’s Mass in Blue backing CD.
2014 sees the publication of a new edition of Todd’s Mass in Blue, in which the composer has sought to enhance the flexibility and accessibility of the work while retaining its essence and drive. For instance, the choral parts and textures have been simplified in places, while the piano part increases support to the choir and has been revised to accommodate players of more modest ability. Optional exemplar solos are provided in the instrumental parts (piano, bass, and drum-kit, with optional saxophone) and additional cues have been added to the piano part to aid rehearsal.
Why? Will Todd observes that he has ‘experienced the work in a wide variety of guises and venues’, and the revised edition should allow the piece to travel still further. For a composer who says that his music is ‘about bringing people together’, the jazz mass is the perfect vehicle. The form lends itself to universality, with its synthesis of the sacred and the secular, of a traditional text with contemporary jazz styles, and an ability to unite musicians from diverse musical backgrounds.
Image credit: Choir Sing Cheer Joyfull Voices Vocals A Capella. Public domain via Pixabay

What goes up must come down
Biomechanics is the study of how animals move. It’s a very broad field, covering concepts from how muscles are used to how the timing of respiration is coordinated with movement. Biomechanics can date its beginnings back to the 1600s, when Giovanni Alfonso Borelli first began investigating animal movements. More detailed analyses by pioneers such as Etienne Jules Marey and Eadweard Muybridge, starting in the late 1800s, examined the individual frames of film sequences of moving animals. These initial attempts gave rise to kinematics – the description of movement itself – but this is only one side of the coin. Kinetics – the study of the forces that cause motion – together with kinematics provides a very strong toolkit for fully understanding both the strategies animals use to move and why they move the way they do.
One factor that really changes the way an animal moves is its body size. Small animals tend to have a much more z-shaped leg posture (when viewed from the side), and so are considered more crouched, as their joints are more flexed. Larger animals, on the other hand, have straighter legs; at the extreme (e.g. the elephant), the legs are very columnar. Just this one change in morphology has a significant effect on the way an animal can move.
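In modern practice, the Marey–Muybridge approach survives as marker digitizing: joint positions are extracted frame by frame and converted into the joint angles that distinguish crouched from columnar postures. Here is a minimal Python sketch of that basic kinematic calculation; the marker coordinates and the function name are invented for illustration.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by segments b->a and b->c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical digitized marker positions (x, y in pixels) for one frame.
hip, knee, ankle = (120, 200), (150, 150), (130, 95)
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} degrees")
# Smaller angles indicate a more flexed, crouched joint; values near 180
# correspond to the straight, columnar limbs of large animals.
```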
We know that the environment animals live in is not uniform but is cluttered with many different obstacles that must be overcome to move successfully and survive. One type of terrain that animals frequently encounter is slopes: inclines and declines. The two types of slope impose different mechanical challenges on the locomotor system. Inclines require much greater work from the muscles to move uphill against gravity! On declines, an animal is moving with gravity, and so the limbs need to brake to prevent a headlong rush down the slope. Theoretically, there are many ways an animal can achieve successful locomotion on slopes, but, to date, there has been no consensus across species or body sizes as to whether animals use similar strategies on slopes.
From the published literature we generated an overview of how animals ranging in size from ants to horses move on slopes. We also investigated and analysed how strategies for moving uphill and downhill change with body size, using a traditional method for scaling analyses. What really took us by surprise was the lack of information on how animals move down slopes: there were nearly twice as many studies on inclines as on declines. This is remarkable given that, if an animal climbs up something, inevitably it has to find a way to come back down, either on its own or by having its owner call out the fire department to help!
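For readers unfamiliar with such scaling analyses, the traditional method is allometry: fit a power law y = a * M^b by least-squares regression on log-transformed data, so the slope of the log-log line is the scaling exponent b. Below is a minimal sketch; the body masses and stride lengths are invented placeholders, not data from the study.

```python
import numpy as np

# Hypothetical body masses (kg) and a hypothetical locomotor variable
# (e.g. stride length in m) for five species.
mass = np.array([0.02, 0.5, 4.0, 70.0, 450.0])
stride = np.array([0.05, 0.18, 0.45, 1.30, 2.60])

# Fit log10(stride) = log10(a) + b * log10(mass); the slope b is the exponent.
b, log_a = np.polyfit(np.log10(mass), np.log10(stride), 1)
print(f"stride ~ {10**log_a:.3f} * mass^{b:.2f}")
# Comparing the fitted exponent against a theoretical prediction (e.g.
# geometric similarity, b = 1/3) is how changes with body size are tested.
```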
Most animals tend to move more slowly up inclines and keep their limbs in contact with the ground longer; this allows more time for the muscles to generate the work needed to fight against gravity. Although larger animals have to do more absolute work than smaller animals to move up inclines, relative stride length did not change with body size or on inclines. Even though there is much less data in the literature on how animals move downhill, we did notice that smaller animals (
Our study highlights the lack of information we have about how size affects non-level locomotion and emphasises what future work should focus on. We really do not have any idea of how animals deal with stability issues going downhill, nor whether both small and large animals are capable of moving downhill without injuring themselves. It is clear that body size is important in determining the strategies an animal will use as it moves on inclines and declines. Gaining a better understanding of this relationship will be crucial for demonstrating how these mechanical challenges have affected the evolution of the locomotor system and the diversification of animals into various ecological niches.
Image credit: Mountain goat, near Masada, by mogos gazhai. CC-BY-2.5 via Wikimedia Commons.
