Oxford University Press's Blog
October 18, 2016
Why is the world changing so fast?
Over the past 30 years, I have worked on many reference books, and so am no stranger to recording change. However, the pace of change seems to have become more frantic in the second decade of this century. Why might this be? One reason, of course, is that, with 24-hour news and the internet, information is transmitted at great speed. Nearly every country has online news sites (some clearly more partial than others) which give an indication of the issues of political importance, while government websites present a more official view (for example, the North Korean government’s English-language website provides a good case of totalitarianism in action). However, increased knowledge is but a tool. Underlying the change are a number of major themes.
While the first decade of the twenty-first century began with optimism, this quickly faded – perhaps triggered by the consequences of the invasion of Iraq in 2003 and the global financial crisis of 2008 – with regional, economic, and political instability spreading to many parts of the world. From financial meltdown in Iceland to political crisis in Brazil, from the destruction of Yemen to the dysfunctional birth of South Sudan, from the rise of nationalism and of religious fundamentalism, to the weakness of international organizations, the last few years have been riven with crises.
Always a destabilizing influence on the established status of countries, nationalism remains a potent issue. The world’s newest independent country, South Sudan, separated from the rest of Sudan in 2011. But, sadly, the enthusiasm for self-determination as a way of improving the prospects of its citizens quickly descended into a violent and destructive civil war. There was a real chance that it would be joined by another new country: Scotland. If Scotland had voted in 2014 to become independent (and it nearly did), it would by now be a sovereign nation. Catalonia’s provincial government is also doing all it can to break away from Spain, not to mention what the consequences of Britain leaving the EU will be (a decision in which nationalistic rhetoric played an important part). There are also some remaining anti-colonial movements, such as in New Caledonia, where a territory seeks independence from its colonial ruler, in this case France. Few regions are in a position to take self-declared action, but Somaliland has done so, and though not recognized internationally, it is much more stable than Somalia, the country from which it has seceded.

If nationalism has been one force changing the identity of countries, another causing shock waves through the world has been the growing strength of religious fundamentalism, most usually Islamic, but not solely – Hindu fundamentalism is playing an ever-stronger role in multicultural India, for example. However, it is the virtual collapse of a number of Islamic countries (Syria, Libya, Yemen, and Somalia) amidst civil and religious war that poses the greatest challenge. How can one describe the social, economic, and political position of countries where all forms of normal government have collapsed? The consequences of these ‘failed’ states have spread beyond their borders – the flow of refugees from Syria into surrounding countries and then on to Europe, Islamic State attacks in Western Europe, and al-Shabab attacks in Kenya. The growing dislocation of inter-racial and inter-religious relations across the world has manifested itself in many uncomfortable ways, from burkini bans on French beaches to a US presidential candidate proposing that Muslims should be banned from travelling to the USA.
As always, economic changes continue to challenge governments. Immediately following the 2008 global financial crisis, it was developed countries and their financial systems that felt the pressure, while the BRIC countries thrived and their demand for raw materials transformed the economies of many small nations – Mauritania for its iron ore, Equatorial Guinea for its oil, Mozambique for its gas and coal. China’s investment in African countries, in order to secure the raw materials it needs, has fundamentally altered the geopolitics of the region. However, as China’s speed of development slowed, so too did its demand for resources, and the prices of raw materials tumbled. Few countries had realized they were riding a natural resources rollercoaster until their revenues collapsed and their plans and dreams of a secure future imploded before their eyes.

To complete this listing of global woes, the need to contain global warming has brought into sharp focus the difficulties of implementing change that needs to work at a global level through national policies, where domestic politics will tend to promote the self-interest of each individual nation. The Paris talks on combatting global climate change held at the end of 2015 provided new targets for countries to aim at, and change is coming – for example, in just a few years China has become the world’s largest generator of solar power, and its solar panel industry is bringing change to Africa and the Middle East. However, it still remains necessary for the government of Kiribati to buy land in Fiji, to resettle its inhabitants as rising sea levels begin to make their islands uninhabitable.
At least as a partial counter to this litany of problems, significant progress was made towards meeting the Millennium Development Goals – there are fewer people living in absolute poverty and more children are receiving education than in 2000. Maintaining progress amid more turbulent economic times is going to be much harder, with world leaders hoping that new Sustainable Development Goals, agreed in 2015, will assist over the next fifteen years. But progress there has been, and this is something to celebrate amidst the global challenges of nationalism, religious fundamentalism, economic uncertainty, and climate change.
There is certainly no chance of developments slowing down. In just two weeks in September, there were parliamentary elections in the Seychelles, Belarus, Croatia, and Russia, while a new leader emerged in Uzbekistan and a peace agreement was signed between the Colombian government and the FARC rebels after half a century of conflict. As for unpredictable events, well, we will just have to see what happens.
Featured image credit: The Urban Landscape Kaohsiung by Tingyaoh. Public domain via Pixabay.

October 17, 2016
You have to read Henry Green
Henry Green is renowned for being a “writer’s-writer’s writer” and a “neglected” author. The two, it would seem, go hand in hand, but neither is quite true. This list of reasons to read Henry Green sets out to loosen the inscrutability of the man and his work.
Henry Green was born Henry Yorke in 1905; his maternal grandfather, Baron Leconfield, was among the richest of the British aristocracy. However, although educated at Eton and Magdalen College, Oxford, Henry chose to leave Oxford early, before graduating, in order to work in a Birmingham factory – later the subject matter of Living. The man and his writing resonate with similarly alluring, subtle paradoxes. He allowed, for example, Cecil Beaton to take photographs of him, but only with his back turned. Rosamond Lehmann, in a list of aptly shape-shifting descriptors, saw him as “an eccentric, fire-fighting, efficient, pub-and-nightclub-haunting monk, voluble, frivolous, ironic, worldly, austerely vowed to the invisible cell he inhabited”. To get better acquainted yourself there’s only one thing to do: you have to read Henry Green. And, as you will find below, now is as good a time as any.
Be a Bright Young Thing.
Henry Yorke (pseudonym Henry Green) and his wife, Dig, were the exemplary ‘It’ couple of the 1920s and 30s. Nancy Mitford and Evelyn Waugh referred to them as the “Bright Young Yorkes” in their letters. They were indeed well connected – Dig’s friend, the Duchess of York (later Queen Elizabeth the Queen Mother), became godmother to their son, Sebastian, in 1934. But to read Green’s novels of class, Living (1929) – “the best proletarian novel ever written” (Isherwood) – Party Going (1939) and Loving (1945), alongside Waugh’s evocations of class privilege in Vile Bodies (1930), A Handful of Dust (1934), and Brideshead Revisited (1945), is to enter a much more nuanced, unsentimental interwar landscape.
You’ll be in good company.
Henry Green is, paradoxically, one of the most highly praised writers of the 20th century, whilst remaining one of the least read. The list of writers praising his work is exhilaratingly extensive: T.S. Eliot singled out “the novels of Henry Green” as evidence that “creative advance in our age is in prose fiction”; John Updike speaks of aspiring to “Green’s tone, his touch of truth, his air of peddling nothing and knowing everything … for sheer transparence of eye and ear, he seems to me unmatched among living writers;” and Sebastian Faulks has a quotation from Green pinned above his writing desk as inspiration.
Other writers and contemporaries who have raved about Green’s prose include: John Ashbery, W.H. Auden, Elizabeth Bowen, A.S. Byatt, Christopher Isherwood, Frank Kermode, Rosamond Lehmann, David Lodge, Katherine Mansfield, Nancy Mitford, Lady Ottoline Morrell, Tim Parks, Anthony Powell, V.S. Pritchett, Evelyn Waugh, Eudora Welty, Angus Wilson, James Wood, and Virginia Woolf.
The prose is mesmerizing. Have a taste for yourself.
Non-Fiction:
“Prose is not to be read aloud but to oneself alone at night, and it is not quick as poetry but rather a gathering web of insinuations which go further than names however shared can go. Prose should be a direct intimacy between strangers with no appeal to what both may have known. It should slowly appeal to fears unexpressed, it should in the end draw tears out of the stone.” Pack My Bag (1940)
Fiction:
“Jim Dale had bitterness inside him like girders and when Arthur began singing his music was like acid to that man and it was like that girder was being melted and bitterness and anger decrystallized, up rising in him till he was full and would have broken out.” Living (1929)
There’s no excuse. It’s all in print.
In the next few weeks, all the novels of Henry Green, from Blindness (1926) to Doting (1952), will be reprinted in the US by NYRB Classics. These editions are stunningly rendered, with fresh, exhilarating introductions to each work: Adam Thirlwell introduces Living (1929), James Wood Caught (1943), Roxana Robinson Loving (1945), and Deborah Eisenberg Back (1946).
In the UK, Caught (1943), Back (1946), and Concluding (1948) were published together by Penguin for the first time.
Read more on Green
This week’s New Yorker features an article entitled “Doings and Undoings: How great was the novelist Henry Green?”
Featured image credit: “Literature” by MabelAmber. CC0 Public Domain via Pixabay.

How university students infantilise themselves
In June 1962, 59 student activists met in Port Huron, Michigan to draft a manifesto of their core principles. They condemned racism in the United States and the nuclear arms race with the Soviet Union. Most of all, though, they indicted their own institution, the modern US university, for ignoring and suppressing their voice. Students needed to ‘wrest control of the educational process from the administrative bureaucracy’, The Port Huron Statement declared. Otherwise, the empty suits who ran the university would drown its emancipatory potential in a sea of bland rituals and senseless rules.
We heard echoes of these sentiments in the protests that seized US campuses last November. Like their forebears in the 1960s, today’s students blasted university leaders as slick mouthpieces who cared more about their reputations than about the people in their charge. But unlike their predecessors, these protesters demanded more administrative control over university affairs, not less. That’s a childlike position. It’s time for them to take control of their future, instead of waiting for administrators to shape it.
Most of the protests last fall focused on racism at the universities themselves, rather than in US society generally. Nearly every formal demand issued by the students included a request for a new university office, rule, or regulation. Administrators typically complied, too, marking yet another departure from the 1960s. Back then, university officials regarded student protesters as an existential threat to the university itself. Today’s administrators embraced the protesters, promising to ‘do better’ – and, not incidentally, to provide more administrators. Some schools pledged to hire ‘Chief Diversity Officers’; others agreed to institute new diversity training and programming; still others announced new multicultural and counselling centres, aimed especially at assisting minority students.

What’s going on here? At the simplest level, the protests over race and diversity reflected the increase in diversity across US higher education. The number of black college students tripled between 1976 and 2012, when African Americans went from ten per cent to 14 per cent of the undergraduate population. Hispanic enrollment rose even more sharply, from just three per cent in 1976 to 14 per cent – the same fraction as African Americans – in 2012. To be sure, vast racial inequalities remain: although minorities represent a third of all college students, for example, they make up about one-seventh of students at selective private and public colleges. But these schools also compete fiercely for the most qualified minority students, which reflects another huge difference from the past. Indeed, among elite institutions, diversity has become a key emblem of elite status. So in one sense, universities were a victim of their own success. Attracting a critical mass of minority students, they faced new demands from that same clientele.
These years also witnessed a dramatic shift in patterns of university employment, away from faculty and towards administrators. In 1975, universities had almost twice as many professors as administrators; 40 years later, the administrators outnumber the faculty. Over this span, the number of ‘executive, administrative, and managerial employees’ at universities rose by 85 per cent; meanwhile, so-called ‘professional staff’ – accountants, counsellors, and so on – ballooned by an astonishing 240 per cent. Part of the reason lay in the perverse economic competition between different schools, which offered a host of new student services and amenities in order to attract more paying customers. There was also a growing maze of federal and state regulations, which required new teams of officers to ensure compliance. Consider the recently promulgated federal instructions under Title IX of the US education code, which requires universities to establish systems for preventing and punishing sexual assault. That in turn forces them to hire dozens of counsellors and investigators, lest the universities run afoul of the new rules.
As universities layered on more and more bureaucracy, students came to believe that every campus problem had a bureaucratic solution. Officials are expected to remove every trace of racism, ranging from outright bigotry to smaller ‘microaggressions’; examples include asking a minority student if she is from ‘the ghetto’, or whether she was admitted to school under affirmative action. Protesters in November demanded that universities institute penalties for these types of comments, like mandatory diversity training for miscreants. Never mind that regulations of campus speech have been found unconstitutional by every court that has addressed them, or that most studies of diversity training have failed to show that it improves race relations. Symbolically, at least, adding a new rule or requirement will show that the university is ‘doing something’. And when it falls short, as it inevitably must, it will be asked to do more.
How can we break this cycle of administrative demand and dashed expectation? We might start by encouraging students to revive older traditions of direct action, which would leverage their own power instead of ceding yet more of it to university officials. If you’re the target of racial slights or insults, don’t wait for your school to institute yet another speech code or diversity training: organise your own teach-ins and rallies, where students can enlighten each other. If sexual assault is rife on campus, don’t rely on administrators to eliminate it: stage protests outside dormitories and fraternity houses, reminding everyone what goes on inside of them. If the school newspaper prints an article that offends you, don’t tell the university to de-fund the paper: publish your own online blogs and journals, and circulate them far and wide.
Asking administrators to solve every problem infantilises students, even as it contributes to the top-heavy bloat of our universities. Our students need to grow up, in the most political way, by wresting control of the educational process from an administrative bureaucracy that wields way too much authority already.
This article was originally published at Aeon and has been republished under Creative Commons.
Featured image credit: March through Minneapolis against the Washington football team name by Fibonacci Blue. CC BY 2.0 via Flickr.

Big data in the nineteenth century
Initially, they had envisaged dozens of them: slim booklets that would handily summarize all of the important aspects of every parish in Ireland. It was the 1830s, and such a fantasy of comprehensive knowledge seemed within the grasp of the employees of the Ordnance Survey in Ireland. These fantasies were, in fact, to be found all over Europe and its colonies – it was a time of confidence in the availability of the world to be turned into facts (to be gathered in maps, censuses, encyclopedias, and statistical reports), and in the capacity of humankind to describe that whole world. Nineteenth-century Europe saw the invention of big data as a tool of effective government. But the fantasy of comprehensive knowledge found its limit in the colonies, and nowhere more clearly so than in Ireland.
As part of the improving zeal of the British government in the 1820s, a parliamentary commission was set up to investigate the necessity of re-surveying the island of Ireland, with a view to establishing new and more accurate land values. (The classic study of the Survey in Ireland is J.H. Andrews, A Paper Landscape: The Ordnance Survey in Nineteenth Century Ireland, OUP 1975. Much of the historical detail below comes from this book.) The commission tasked the Ordnance Survey, a branch of the British Army, with making an accurate and comprehensive map of Ireland at a scale of six inches to one mile, or 1:10,560. The scale might seem unexceptional to anyone alive now who grew up with the Ordnance Survey’s maps of Ireland and Britain, but at the time it was nothing short of revolutionary – it called for enormous maps of frequently sparsely inhabited areas, and at a level of detail never before seen across such a vast expanse of land. How was the Survey to gather the information to fill in such detailed maps? The answer was to task a crew of fieldworkers, not only to map the physical features of the landscape, but to record every possible aspect of the landscape, from its placenames to its productive economy.
With all of these data being gathered, there was no room and no protocol for putting them onto a map. There was simply too much information – a placename might be included, but not its etymology; a mill might find its way onto the map, but not its history and ownership. The solution was to publish a series of printed gazetteers to accompany the map, which would record all of the extra information that the Survey officers would be able to gather in the course of their fieldwork. Col. Thomas Larcom, in laying out instructions to the officers about what kind of information to collect for just one of the many sections of their reports, asked them to note:
Habits of the people. Note the general style of the cottages, as stone, mud, slated, glass windows, one story or two, number of rooms, comfort and cleanliness. Food; fuel; dress; longevity; usual number in a family; early marriages; any remarkable instance on either of these heads? What are their amusements and recreations?
While not all officers filled out these reports, and some were more thorough than others, it is easy to see how the anticipated slim volumes generated a mountain of information, at a scale that was practically impossible to manage. Reports on the geology, meteorology, history, archaeology, literature, and culture of parishes created scenes straight out of the fiction of Jorge Luis Borges, as the available information rapidly eclipsed the Survey’s capacity to edit and publish it. One Survey officer wrote that to study the placenames of Ireland alone was “to pass in review the local history of an entire country” – a paradox of scale, and almost a parody of the fantasy of comprehensive knowledge that drove the British colonial administration in Ireland. How could a Survey be both intensive and extensive at once?
In the end, only one publication emerged from the whole project, covering just one parish (Templemore) in Co. Derry. At 350 pages in length, the ‘memoir’, as it was now called, had ballooned in size, and cost more than three times the estimated budget for the entire county. Begun in about 1834, it wasn’t finished until November 1837. Though praised by many as a volume of true scholarship, the memoir was so large, so expensive, and so slow to emerge that it sank the whole project – shortly after it was published the entire scheme was cancelled, and despite many years of attempts to have it restarted, the money was never found. Among the reasons for its cancellation was one that is rather ironic, given the Ordnance Survey’s later reputation in Ireland as a force destructive of Irish history and culture – it was objected that the Templemore memoir stoked nationalist pride, and that the topographical department of the Survey (which had charge of the historical and archaeological reports) was a hotbed of nationalist feeling and agitation. This prime example of scientific rationalism and colonial governmental efficiency was seen as having stoked up anti-imperial feeling. Big data, it seems, had some very serious unintended consequences.
It wasn’t the end of big data in Ireland – the first ever complete national census took place there in 1841. But the Survey’s woes highlighted the tensions between noise and speech, between information and knowledge, giving pause to those who presumed that it was within the reach of human capacity to capture and record “the local history of an entire nation.” The assumption of many proponents of big data is that we can have it all, but sometimes it’s all just too much.
Featured image credit: The Long Room of Trinity College Old Library by Diliff. CC BY-SA 4.0 via Wikimedia Commons.

The transition of China into an innovation nation
The writing is on the wall: China is the world’s second-largest economy and its growth rate has slowed sharply. Wages are rising, so that the fabled army of cheap Chinese labor is now among the most costly in Asia’s emerging economies. In the last thirty years China has brought hundreds of millions of people out of poverty, but this miracle will stall unless China can undertake another transformation and become an innovation nation. Historically, leading national economies have almost inevitably been the global leaders in technology. Can China accomplish this new transition?
The Chinese government thinks so. In 2006, it issued a report called the Medium and Long-term Plan for Science and Technology, which envisions China joining the top rank of “innovative nations” by 2020 and becoming a world leader by the mid-21st century. Central to this vision is indigenous innovation, stressing autonomous and strategic control of innovation through promoting domestic intellectual property rights (IPR) and brands. Under this vision, China’s R&D budget has skyrocketed since 2007, drawing on both government and non-government sources.
The results are now visible. China is home to some of the world’s most innovative companies, such as Alibaba in e-commerce, Tencent in social media, and Huawei in telecommunications. Measured by scientific publications and patents, China has also risen into the world’s top rank. However, critics charge that bureaucratic control of R&D, rampant IPR violations, inefficient state-owned enterprises, and a rigid education system will ultimately doom China’s innovative drive.
I argue that the most solid evidence of China’s innovative capacity can be found through an analysis of China’s diverse and dynamic industries. Innovation in each industry involves the transformation of its organization, the upgrading of its technology, and its access to, and competitiveness in, the domestic and global markets. Integrating findings from each industry allows us to put this ‘elephant’ of Chinese innovation together, through the acronym DYNAMIC:

D: Dual tracks
Differing from the image of a state-capitalist system directed from the commanding heights, Chinese innovation is powered by both top-down and bottom-up forces, depending on the nature of the industry. The extreme top-down case is China’s railroads. Through massive state investment and debt financing, China Railway Corporation has installed and put into operation the world’s largest high-speed rail system within one decade. Yet similar top-down attempts to create national champions in automobiles and semiconductor foundries during the 1980s-90s ended in disappointment, as such enterprises failed to keep up with global leaders. The extreme bottom-up case is represented by mobile phone manufacturing, where grass-roots players mobilized by intense market competition have powered innovation, moving companies from producing knockoffs to leading brands.
Y: Young entrepreneurs and enterprises
The first group of competitive Chinese technology companies was created from scratch only from the 1980s onwards, and mostly after 2000. The founding entrepreneurs behind these companies are in their 40s and 50s. They are still at the prime age for accumulating expertise, and thus are open to new ideas and experimentation.
N: Networked Eco-system
A: Adaptive incremental changes
C: Clustering
These three are related, and represent a collective and incremental approach to Chinese innovation through geographical clustering. Innovation is not usually an individual feat or a single novel breakthrough. Even Steve Jobs’ accomplishments were built upon the collective efforts of many others. This approach of incremental adaptation of foreign products is even more evident in China. Geographical clusters such as the cell phone cluster around Shenzhen, the IC design cluster in Shanghai, and the internet cluster in Beijing are centers of innovation, as clusters facilitate specialization and exchange among networked enterprises, improving flexibility and aiding technological upgrading.
M: Middle of the market
I: Integration of technology
These two represent the key strategies for attaining strategic control in indigenous innovation. Foreign firms occupy the premium market segments in China, so domestic enterprises target the middle market by providing similar but affordable products – so-called ‘good enough’ innovation. The middle market has seen intensified competition on price and quality, forcing Chinese industrial leaders to improve on both measures.
Most Chinese companies rely on others for sophisticated core technology, but they are able to gain strategic control through the system integration of technology from multiple sources. They learn from working with customers and global suppliers, and benefit from comparing and integrating global advances. If companies also invest in internal R&D, they can gradually catch up with the global leaders. Examples are found across the board, and are particularly prominent in high-speed rail and alternative energy. This suggests that so-called Chinese indigenous innovation is a result of China’s integration with, rather than separation from, global technological development.
In sum, the common caricatures of China as a copycat with no originality, or as a Leviathan state intent on global domination, are misleading. Institutional deficits in the Chinese innovation system are real, but they can be overcome. While paths of innovation are long and inherently uncertain, a close reading of the trajectories of China’s industries reveals that the prospect is bright.
Featured image credit: China Railway Highspeed (CRH) CRH-2 380A high speed train by Alancrh. CC BY-SA 3.0 via Wikimedia Commons.

Brexit: environmental accountability and EU governance
Civil society will be preoccupied in the years to come with ensuring the maintenance of environmental standards formerly set by EU environmental law. Keeping the same headline environmental standards post-Brexit will be a victory worth celebrating. Day-to-day questions of governance are, however, fundamental if we are to ensure that we create not just the appearance of strong environmental law, but environmental law that means something in practice. This blog provides some thoughts on the less visible aspects of EU environmental governance, aspects that must be held up to scrutiny as we develop an accountability framework ‘independent’ of the rules and institutions of the European Union.
Certain aspects of environmental law, such as quality standards for water and air, rights to participate in environmental decision making, substantive protection of sites hosting valued species and habitats, are highly visible. Important as they are, however, they are just the beginning. EU law has also shaped UK environmental protection in less direct, less easily visible ways. It routinely imposes a framework of governance obligations on Member States: to plan publicly for implementation; to report publicly and to the Commission and other Member States on how they’re doing; to explain failures to comply, or the lawful use of derogations and exceptions; and to explain how compliance will be achieved in the future. Even if the reporting sometimes falls short of its potential, these obligations require the generation and publication of powerful environmental information, and constitute an important part of environmental accountability, enabling political and legal, formal and informal, peer and citizen, scrutiny of government action.
The long reach of EU governance mechanisms can be seen throughout environmental law. The hard substantive protections provided by the Habitats Directive are famously vulnerable post-Brexit. There will be a battle over standards, but we should not forget that these standards are bound up in a typical multi-level governance framework, which is crucial to their application. The Directive contains the usual obligations to report periodically to the Commission on the implementation of the Directive. Any exception to the prima facie position that a project that will ‘adversely affect the integrity’ of a protected site cannot go ahead, involves the Commission, and a certain amount of publicity. A harmful project can go ahead only for ‘imperative reasons of overriding public interest’, provided also that there are no alternative solutions, and that the Member State takes appropriate ‘compensatory measures’; the Commission must be informed. Even more strikingly, when the site hosts a priority habitat type or species, imperative reasons of overriding public interest other than human health, public safety or ‘beneficial consequences of primary importance for the environment’, may only be considered ‘further to an opinion from the Commission’.
The Habitats Directive is certainly a special piece of environmental law. But the multi-level, dense relationship it illustrates is far from unique. The River Basin Management plans required by the Water Framework Directive, for example, must contain a wealth of material, including: a summary of the measures put in place to achieve ‘good water status’ and ‘no-deterioration’; an explanation of any failure to meet those objectives, or any risk of failure; an explanation of the extra monitoring and remedial obligations that kick in when basic aims are not met; an explanation of the use of alternatives or exceptions to the good water status or no-deterioration norms. As under the Habitats Directive, derogations and exceptions cannot be used quietly – they must be explicitly acknowledged and explained, creating an opportunity for the application of political or legal pressure.
The first question raised by Brexit is whether we want continued reporting and planning in UK (English, Northern Irish, Scottish, Welsh) environmental law. Are the obligations to report, plan and disclose an important form of accountability and an opportunity for iterative environmental improvement, or are they ‘red tape’? The way I pose the question probably makes my own views clear. If we decide that planning, reporting, and explaining have the potential to improve process and substance, the next question is who the plans, reports and explanations are for. Simply to require publication, so that anyone can see the material, scrutinise and respond, would be relatively straightforward. Such publicity is important, but the demands it makes on civil society should not be underestimated – and if these reports go into a void, they become red tape, in a self-fulfilling prophecy. An obligation on a specific public body to respond to reporting and planning, as the Commission currently does, creates the beginnings of an accountability loop.
And when an external assent is necessary, who gives it? It is far from obvious who should take the role of the Commission under the Habitats Directive, and Brexit could mean the loss of a whole level of institutional checks. Moreover, even if a formal opinion isn’t demanded, the scrutiny goes in part to a check on the legality of government actions. The special space for the enforcement of EU environmental law, through Commission action and in the national courts, will be much missed post-Brexit – not just by those who litigate, but by those who use the authority of law to shape political change.
And finally for now, how will the substantive, but often sparse, norms set out in legislation be developed and interpreted? So far, this has largely been dealt with as a fairly straightforward technical question – we shall provide in well-drafted legislation for the role of existing and future ECJ judgments. But that avoids the sensitivity of these questions. What constitutes an ‘imperative reason of overriding public interest’, and who decides? Is housing such a reason? Do we turn to government agency, courts alone, committees, lists altered at ministerial discretion, or statutory instrument?
As we try to work these questions through, we may find at least the beginnings of a national model in the Climate Change Act 2008: a highly structured system of mandatory (probably justiciable) planning and reporting, depending on independent expertise, ministerial responsibility and parliamentary (as well as public) scrutiny. Importantly, this isn’t a closed national approach, but is explicitly open to developments at international and EU level.
Understanding the ways in which the EU’s institutional and legal machinery enhances accountability is less headline-grabbing than standards for the protection of bats and birds, just as tricky as standards for toxics or waste, and as important as either. How we fill the gaps post-Brexit is not a small technical detail, which can comfortably be left to the bureaucrats while civil society concentrates on the meaty substance. It is a profound political question. The location of authority in decisions on our environment, and the mechanisms through which that authority may be held to account, must be made visible in the debate around Brexit, and subjected to careful scrutiny and noisy debate.
Featured image credit: Europe-England. CC0 Public Domain via Pixabay.

October 16, 2016
The first 1000 days
Nowadays we use the term ‘first 1000 days’ to mean the time between conception and a child’s second birthday. We know that providing good nutrition and care during this period is key to child development and to giving a baby the optimum start in life.
Mina and Apollo are gazing at their newborn baby, Tunu, with bittersweet smiles. They are thrilled with the safe arrival of a healthy baby – but they remember the child they have lost. Their last baby, Suzy, was born so small and so early that she had no strength to suckle.
It was while they, and the rest of the family, were grieving for Suzy that a neighbour told Mina about a new community group, called a Care Group. In a Care Group, trained ‘Leader Mothers’ teach young mothers, through monthly discussions and home visits, how to care for and feed young children and the rest of the family. Mina was curious, so one afternoon she went along to a meeting and found she enjoyed the friendly discussions and cooking demonstrations. One of the things she learnt was the importance of the ‘first 1000 days’.
By attending the Care Group meetings, Mina learnt that her future babies would have the greatest chance of being healthy if:
She herself is well nourished before she becomes pregnant so her body is ready for the stress of pregnancy;

She eats well and avoids malaria infection (e.g. by sleeping under a bed net) during pregnancy;
She breastfeeds the newborn baby within one hour of birth and does not give other liquids or ritual foods, and continues exclusive breastfeeding for 6 months;
At the age of six months she introduces clean nutrient-rich foods (such as porridges enriched with eggs and fish, and fruits and vegetables) in addition to breast milk, and continues breastfeeding until the child is at least two years old.
These were a lot of new ideas for Mina, and she wondered if they were practical – what would her mother and mother-in-law think about them? What would Apollo think? Could they obtain the nutritious foods that she and her child would need?
Those organizing Care Groups recognize the importance of reaching out to other household members to build support for new practices – particularly fathers and grandmothers, and sometimes village elders or religious leaders. Some include demonstration gardens where members learn to produce vegetables and fruit, and sometimes raise poultry, to improve access to healthy foods, especially where they are expensive in local markets.
Mina was lucky – her mother, mother-in-law, and Apollo were supportive and helped Mina eat well before and after she became pregnant again; they too were eager for the next baby to be healthy. So everyone was thrilled when little Tunu was born with a good birthweight, and eager to suckle Mina’s precious colostrum, and Mina, because she had eaten well and taken her prescribed iron/folic acid pills, was strong and not anaemic.
Mina and Apollo have plans for Tunu – that she will grow into a beautiful girl; that they will give her a good education, and that they will make sure she is fully mature before thinking about marriage and babies. They know now that by breastfeeding Tunu for the remainder of her ‘first 1000 days’, and introducing healthy family foods when she is six months old, they are giving her a priceless gift – the best start in life. In three years’ time Mina and Apollo plan to have another baby, confident that this one too will be healthy, and a good companion for Tunu.
Featured image credit: Baby by isaiasbartolomeu. CC0 public domain via Pixabay.

Engendering debate and collaboration in African universities
A quick scan of issues of the most highly ranked African studies journals published within the past year will reveal only a handful of articles published by Africa-based authors. The results would not be any better in other fields of study. This under-representation of scholars from the continent has led to calls for changes in African universities, with a focus on capacity building. It is, however, important that these initiatives to build the capacity of Africa-based academics are accompanied by micro-level efforts to encourage and strengthen academic debate and collaboration within institutions.
The barriers to research and publication in most public universities in Africa are many and have been extensively documented. They include heavy teaching loads, poor infrastructure, political interference, low salaries, and the lack of research funding. These problems are compounded by the fact that many universities do not reward research or publication in high-impact journals, and continue to value quantity over quality when making decisions about promotion.
These barriers have contributed to the creation of spaces in which academic debate and the thoughtful critique of scholarship do not always thrive. Many scholars have few opportunities to discuss their ideas and to receive essential feedback from members of their immediate academic community. Consequently, the development of ideas and the subsequent writing-up of these ideas for submission to journals sometimes happen with little or no input from colleagues with shared interests. Working in this manner not only causes scholars to miss opportunities to improve their work, but also makes it difficult for them to develop their departments and research centres into spaces of research excellence.
Various organizations and universities have introduced initiatives to address some of these problems. The recently launched African Research Universities Alliance (ARUA) seeks to build research excellence in universities across the continent. The University of Ghana, where I am a research fellow, offers research and conference travel grants. The university’s Office of Research Innovation and Development is staffed with dedicated people who support researchers to apply for external grants. The Council for the Development of Social Science Research in Africa (CODESRIA) also seeks to support scholars, by organizing research and academic writing workshops.
These initiatives are a much-needed step toward dealing with some of the challenges facing many academics. Scholars who seek to conduct original research and to publish in high-impact journals should complement these macro-level programs with micro-level efforts, such as departmental seminars and reading groups, to foster discussion and collaboration within their units. While these forums exist and are vibrant in many African universities, my conversations with colleagues in Ghana and elsewhere indicate that not all units have them. Furthermore, they are not always geared toward improving the work that is being done by members of faculty.
The objective of these forums, whether they are limited to members of a department or open to faculty and students from across the university, should be to help each other produce theoretically-grounded, methodologically-rigorous, and interesting work that answers important research questions. They should occupy a central place in the activity of all departments and research centres, and in addition to teaching and mentoring, should make up the institution’s lifeblood.
Scholars should develop these forums alongside their personal networks. One of the first things I did when I joined the University of Ghana was to begin connecting with scholars – some of them senior – who shared some of my research interests, and were producing interesting work in high-impact journals. In a short time, they have become a small community of people whom I rely on in various ways. Among other things, we share information such as calls for papers, discuss our research designs, and give and receive feedback on work-in-progress. I am also collaborating on several projects, including a multi-country study, with members of this community. I have also connected some of them to members of my international networks in order to facilitate information exchange and collaboration, and they have done the same for me.
A community in which informed and vibrant debate and constructive criticism thrives is important for all scholars. While some scholars are able to get feedback from conferences and from personal networks they have built over time, others do not have access to these resources. Furthermore, spaces in which rich conversations and exchange occur are critical for the training of doctoral students and for the development of ideas that will solve policy problems on the continent, shift paradigms, and challenge the status quo.
Academics in many African universities confront multiple barriers to research and publication. These are problems that cannot be addressed without major structural and managerial reforms. As universities begin to introduce some of these reforms with the support of various organizations, academics in these universities can also try to foster discussion and exchange within their individual units. Although a collaborative environment is not a panacea, it can provide much-needed support to scholars, who work under difficult conditions, but want to conduct research and publish in international peer-reviewed journals. It is, therefore, up to such scholars to begin to strengthen their communities.
Featured image credit: sunset birds cloud sun sky by AdinaVoicu. Public Domain via Pixabay.

The University: past, present, … and future?
By nearly all accounts, higher education has in recent years been lurching toward a period of creative destruction. Presumed job prospects and state budgetary battles pit the STEM disciplines against the humanities in much of our popular and political discourse. On many fronts, the future of the university, at least in its recognizable form as a veritable institution of knowledge, has been cast into doubt. Has the university, whose origins trace back to 12th and 13th century Paris, Bologna, Oxford, and Padua, now outlived its sell-by date? Sages of Silicon Valley, for starters, would offer a resounding yes.
But the anxieties of the present invite reflection on higher education’s past. If one digs deeper, a curious point emerges. In our current cultural moment, it may come as a surprise that the modern scientific research university, born in early 19th century Berlin in the context of war, revolution, and swelling national interest—circumstances not entirely unlike our own—was founded by … a theologian.
In the late 18th century, universities as institutions appeared on the brink of collapse. The Enlightenment, the French Revolution, and the Napoleonic era subjected universities—and theological faculties in particular—to an unrelenting onslaught of hostility. As the armies of the French Revolution spread across Europe, they seized university endowments for the state and suppressed theological and other faculties in favor of specialized professional and technical academies. In 1789, Europe counted 143 universities; by 1815 there were only 83. France had abolished its 24 universities; Spain lost 15 out of 25; and in Germany, 7 Protestant and 9 Catholic universities folded. From the 1820s to the 1840s, Swiss reformers proposed collapsing all of Switzerland’s universities into one remaining national institution.
It was in this context that the modern university system found its legs in Berlin in 1810, when Humboldt University, initially called simply the University of Berlin, first opened its doors. Its principal intellectual architect was Friedrich Schleiermacher (1768–1834), pioneer in religion, hermeneutics, and Plato scholarship, among other domains, and the soon-to-be dean of Berlin’s first theological faculty.
Schleiermacher’s intellectual blueprint, laid out in a few short, fascinating treatises, belonged to a remarkable Prussian political initiative in response to Prussia’s humiliating loss of the University of Halle to Napoleon in 1806. The initiative also attracted proposals from such illustrious names as Wilhelm von Humboldt, F. W. J. Schelling, and J. G. Fichte, each committed to the unity of knowledge understood in organic, idealist terms, and addressing the structure and ethos of a new university and the proper balance between the free pursuit of knowledge and the interests of the state. Never before in Western history—nor, arguably, since—has the founding of a university attracted such excitement, promise, and self-conscious reflection.
Though drawing from earlier Enlightenment precedents, especially at Halle and Göttingen, and reinterpreting practices traceable to the Renaissance and even Aristotle and the ancient world, Berlin marked, in the eyes of many, a new creation: an institution that embraced the ideal of academic freedom, prioritized cross-disciplinary teaching, exhibited a novel research imperative, nurtured ethical character formation, and above all promoted critical and rigorous scholarship—the latter summed up in the German word Wissenschaft. By the first decades of the 20th century, the so-called German model or Prussian model of the university would rise to become the global standard of higher education. Indeed, quipped G. W. F. Hegel, Germany’s star philosopher of the 19th century, “our universities and schools are our churches.” With missionary zeal, students and scholars alike spread the idealized institution’s fame across Europe and across the Atlantic to the New World, carrying the tools of professionalization and specialization. For them, universities functioned as communities that cultivate practices and virtues in which knowledge is a legitimate good. Knowledge or scholarship depended on the forging of a scientific character marked by rigor, a critical disposition, clear communication, and sustained exchange.
In an important, unexpected feature of Schleiermacher’s vision, professors of seemingly “practical” disciplines, such as law, medicine, or theology, who did not make an effort to contribute to “philosophy”—understood in the widest sense to include fields like history and philology alongside metaphysics and ethics—should be excluded from the university. One could not do without the other. In short, the onset of “modernity” meant rethinking prior classifications of knowledge. The process of cross-fertilization, in fact, would characterize some of the greatest intellectual achievements of the age.
Seen in such a light, present polarizations appear myopic at best. We are once again in a time of chaos and turmoil—different in important respects, to be sure, though similarly full of perils and possibilities. As we contemplate the future of the university in our own time, there is much to gain by thinking carefully about its past. As Chad Wellmon notes, from the medieval university, epistemic humility and a respect for traditions stand out; the Enlightenment university offers a rich commitment to a broader social good; and the modern university, launched by Schleiermacher, exhibits the virtues of a research community, including open debate, demand for evidence, and attention to detail.
History may not repeat itself, but it certainly warrants pondering: the past should inform our prognostications about the future and chasten our assertions about disciplines scholarly and practical. Or, as Lorraine Daston recently put it, history is not psychoanalysis (for good reason), but it, too, can profoundly unsettle current academic and institutional assumptions that we take for granted, and do so for our benefit.
Featured image credit: University of Berlin, Germany, ca. 1900, from the Detroit Publishing Company, Detroit, Michigan. Public domain via Wikimedia Commons.

How much do you know about al‐Kindī? [quiz]
This October, the OUP Philosophy team honors al-Kindī (c. 800-870) as their Philosopher of the Month. Known as the “first philosopher of the Arabs,” al-Kindī was one of the most important mathematicians, physicians, astronomers and philosophers of his time.
How much do you know about al-Kindī? Test your knowledge of this celebrated Arab philosopher with our quiz!
Featured image: photo by معتز توفيق اغبارية. CC-BY-4.0 via Wikimedia Commons.
Quiz image: Iranian glazed ceramic tile work, from the ceiling of the Tomb of Hafez in Shiraz, Iran. Province of Fars. Photo by Pentocelo. CC BY 3.0 via Wikimedia Commons.

