Oxford University Press's Blog, page 253

May 23, 2018

The New England Watch and Ward Society

Ninety-two years ago this month, a confrontation took place on the Boston Common between New England’s Protestant establishment and a coalition of secular activists. Representing these two positions were J. Frank Chase, chief agent for the New England Watch and Ward Society, and H.L. Mencken, the well-known Baltimore journalist and editor of the avant-garde American Mercury. Chase had banned the magazine from sale in Boston because it contained a story about a prostitute, and Mencken came to town to challenge the ban in court. At stake that afternoon was the role of religion in the public square.


Historians often cast the outcome of censorship debates in the 1920s as the inevitable triumph of progressive enlightenment over Puritan prudery. This narrative, however, masks a critical factor present in the debates. Standing in the Whig-Republican tradition, which dominated New England for most of the nineteenth century, the Protestant moral reformers of the Watch and Ward Society insisted that the state had a responsibility to protect people from immorality.


We typically think of New England’s leading liberals—Congregationalists, Episcopalians, and Unitarians—as progressive, refined, and tolerant. But it was liberal Protestants, not fundamentalists, who formed a vice squad in 1878 because they embraced the Whig-Republican tradition’s communal view of society. This tradition stressed self-discipline and social responsibility and rejected the privatization of religious morality. To Protestant activists, the free market should not determine what books were available to the public. Nor should what people read be only a matter of individual choice. Instead, they insisted, the state must safeguard public morality. These Protestants saw themselves as their brothers’ moral keepers.


Drawing upon the Jeffersonian tradition, the libertarian Mencken contended that the state’s central task was to defend individual liberty. A longtime critic of censorship, Mencken sought to subvert the “organized terrorism” of the Watch and Ward Society. To this end, he printed several articles in the Mercury ridiculing the society. After the Mercury published the story about the prostitute in April 1926, an outraged Chase banned its sale.


Chase agreed to meet Mencken at the famed “Brimstone Corner” on the Boston Common by Park Street Church. It was a symbolically significant location. “Brimstone Corner” had acquired its colorful name because seventeenth-century Puritans reportedly spread hot ashes over the area to illustrate the nature of the hell that awaited unbelievers. There, the editor would sell Chase a copy of the Mercury. Chase would then order Mencken’s arrest, enabling Mencken to test the ban in court.


“I’m Mencken, please nab me,” the writer declared as the onlooking crowd roared. While hundreds of Mencken’s supporters followed the paddy wagon, it was the Protestant establishment that was about to be swept away.


At Mencken’s trial, famed ACLU attorney Arthur Garfield Hays accused the Watch and Ward Society of “attempting to take away the liberty of the majority.” Every effort to make people “moral by law,” he added, had failed. The state’s duty was to defend individual freedom, not promote private morality. The judge found Mencken not guilty.


Following subsequent controversies in Boston over the sale of popular works such as Sinclair Lewis’s Elmer Gantry and D.H. Lawrence’s Lady Chatterley’s Lover, a coalition of secular activists, inspired by Mencken, revised the Massachusetts anti-obscenity law in 1930 to permit their sale.


This revision signaled more than just the decline of mainline Protestantism’s ability to regulate public morality. The debate over censorship brought the conflict between the Whig-Republican and Jeffersonian visions of civil society into public view. These alternative visions of the nature of civil society continue to inform contemporary debates about same-sex marriage, gun control, and other contentious issues dividing American culture.


The Whig-Republican and Jeffersonian traditions, however, do not map neatly onto the conservative-liberal divide in today’s so-called culture war. On certain issues, such as abortion, contemporary conservatives stand within the Whig-Republican tradition because they want the state to regulate personal morality, while liberals favor a more libertarian position. But on gun control, conservatives take a more libertarian position while liberals generally espouse the Whig-Republican tradition. Even if today’s cultural warriors on both the left and right are unaware of this history, the Whig-Republican and Jeffersonian traditions continue to shape their thinking.


Featured Image: Park Street Church in Boston by Detroit Publishing Company. Public Domain via Wikimedia Commons.


The post The New England Watch and Ward Society appeared first on OUPblog.



Spick and span: a suspicious hybrid

Etymology is a peaceful area of study. But read the following: “Spick and Span”—these words have been sadly tortured by our etymologists—we shall, therefore, do our best to deliver them from further persecution. Tooke is here more than usually abusive of his predecessors; however, Nemesis, always on the watch, has permitted him to give a lumbering, half Dutch, half German, etymology; of ‘shining new from the warehouse’—as if such simple colloquial terms were formed in this clumsy round-about way. Spick-new is simply nail-new, and span-new, chip-new. Many similar expressions are current in the north of Europe; fire-new, spark-new, splinter-new, also used in Cumberland; High German, nagelneu, equivalent to the Lower Saxon spiker-new, and various others. The leading idea is that of something quickly produced or used only once.” [Note: Dutch spyker, that is, spijker, means "nail," but a homonym spijker also exists: a dialectal word from the south of the country for "granary in the loft of a house."]

That was an extract from an article published in The Quarterly Review for September, 1835. All contributions to such periodicals were anonymous. Whoever wrote the piece enjoyed the vitriolic style typical of nineteenth-century British journalism. The remarks, quoted above, are apt, but, curiously, only one “torturer,” Horne Tooke, is mentioned by name. Tooke, whose two-volume work on etymology has the English title The Diversions of Purley, has often appeared in this blog (16 December 2015; 10 May 2017; and August 2017), invariably in a negative context. In the August post, you can see his portrait and Stephen Goranson’s curious comment.


An ideal place for diversions. This is where the bellicose Horne Tooke worked on his contribution to etymology. Image credit: The ‘Brass Monkey’, Russell Hill Road, Purley by Dr Neil Clifton. CC BY-SA 2.0 via Wikimedia Commons.

As I keep repeating, English etymology is a branch of linguistics without history. Thousands of lines about the origin of English words have disappeared in a huge black hole. The belligerent contributor to The Quarterly Review may have missed a few reasonable suggestions about the origin of spick and span. Yet some guesses were indeed wild. In The Gentleman’s Magazine for 1755 (vol. 25, p. 115), an equally learned and equally anonymous correspondent wrote: “Spick and span new… the words want explanation; …which, I presume, are a corruption of the Italian Spiccata da la Spanna, snatched from the hand…. it is well known that our language abounds with Italicisms, and it is probable the expression before us was coined when the English were as much bigoted to Italian fashions, as they now are to those of the French.”


According to Samuel Johnson, spanna meant “to stretch” in Old English (to be sure, such a word could not be an Old English verb!), with span-new emerging as “fresh from the stretchers or frames, alluding to cloth, a very old manufacture of the country; and spick and span is fresh from the spike, or tenter, and frames.” This explanation made its way into the once immensely popular Dictionary of Phrase and Fable by E. Cobham Brewer (1870). Brewer referred to stretchers and hooks and then added Italian spicco “brightness” for good measure and even Dutch spyker.



Image credits: (top) CPRR Flat Head Iron Spike 1868 by Centpacrr. CC BY-SA 3.0 via Wikimedia Commons. (bottom) A Russian Khokhloma wooden spoon by Winstonza. CC BY-SA 4.0 via Wikimedia Commons.

Span-new already occurred in Middle English and looks like a calque (translation loan) of Old Icelandic spánnýr, literally “new like a chip” (thus, no connection with stretchers!). Several other guesses may be ignored. In any case, span-new does not mean “newly spun,” as was once suggested. John Jamieson (1750-1828), the author of a great Scottish dictionary, mentioned split and span, both of which denote a splinter or chip. Before him, Johan Ihre (1707-1780), a distinguished Swedish philologist, translated Swedish sping-spang as “quite new.” Jamieson knew Ihre’s works and in the Supplement cited spang-new. He pointed out the connection between spingla “chip, splinter” and spangla “thin metal plate.” The English phrase would then mean “fire-new.” In Cornwall, they said (and perhaps still say) spack and spang new. By the way, the contributor to The Gentleman’s Magazine also referred to Engl. “fresh from the mint”; brand-new springs to mind too.


We have seen an attempt to trace spick and span to Italian. A fanciful derivation from Latin turned up as late as 1900: spick from spica “an ear of corn” and span from spatium “space, a measure of length” and figuratively “hand.” But the phrase, whatever its ultimate origin, must be Germanic. There is nothing similar in Italian, French, or Spanish; only Germanic analogs are numerous: German splitter-neu and spannagelneu, Dutch (spik)-spinter-nieuw, Swedish spik och spänn, and Norwegian spik og spenning. Only the Swedish and Norwegian versions are close to English, but, unexpectedly, they do not reproduce the Old Icelandic “archetype.” Dutch spijk– is undoubtedly native; hence the hypothesis that Engl. spick– experienced the influence of Dutch. By the same token, Swedish and Norwegian might have taken their spik from Low German.


Spick and span. Image credit: Good suit by Orbitburco12. CC BY-SA 4.0 via Wikimedia Commons.

But what was so attractive in the Dutch word, and how could it be added to a phrase of rather obviously Scandinavian extraction? We risk returning to Horne Tooke’s warehouse. Span-new causes no trouble. Engl. spoon is a cognate of Icelandic spán “chip,” because the earliest spoons were of course made of wood. Spick is the older form of spike. Something or somebody can be sharp and shining as a new nail. There seem to have been two common North-European idioms, perhaps part of the lingua franca of itinerant workmen: things could be “nail-new” and “spike/spick-new.” Later, a hybrid was formed. Let us also remember spack from Cornwall. If spack and span ever existed, it would have become spick and span, because in words of the ticktack and pit-a-pat type, the first vowel is usually closed and the second open. But a bulky phrase like spick-span new had no chance of survival, and new was dropped. An excellent Swedish dialectal dictionary mentions spik spangande ny, and Skeat cited it. Though the history of the English word is partly obscure, alliteration (sp- ~ sp-) must have played a decisive role in it.


Toward International Spelling Congress: London, May 30, 2018


Since the congress is approaching, I decided not to wait for the “May gleanings” (next week) and answer the letters and comments I received. There is nothing in the discussion of Spelling Reform that has not been said many times, so that what follows is not new either. A correspondent from New York objects to reforming the present system because language, as she believes, should develop naturally, rather than being imposed upon by Big Brother. This statement is suspicious, even if by language we agree to understand only grammar and usage. The invention of printing made it imperative to follow a certain “standard” norm at the expense of many other “norms.” For instance, though the double negative has not gone anywhere, I don’t know nothin’ is not everybody’s favorite variant. In other cases, the Standard has to bow to popular tastes. As I said is more genteel, but “everybody’s” preference (in the US) is for like I said. So be it. The same is true of who versus whom in American English. Also, when I read in an article by the Associated Press “The crash left the bus laying on its side…,” I realize that by this time one can lie under oath but only lay on one’s back. Too bad, but one cannot always fight to win.


Spelling, unlike oral speech, is not a natural phenomenon. It was invented to reflect people’s pronunciation and can be changed by decree or by consensus, as has been done more than once, also in the English-speaking world. We are told that, if we simplify English spelling, we will destroy our past. Whence this touching dedication to everything that’s old?  Knife, with its k-, I hear, looks so attractive, because it reminds us of the word’s ancient pronunciation. Alas and alack! What about listen from Old Engl. hlystan? Are we much the worse for the absence of initial h and final n? And should we restore y in the middle? Or take acquiesce. Even in Latin, the spelling acquiescere made little sense, because ac– was the remnant of the prefix ad-, but who needs this c and another c before e in English acquiesce?


English is overdue for Spelling Reform. Image credit: Cicero Denounces Catiline by Cesare Maccari. Public Domain via Wikimedia Commons.

The most fervent defenders of the etymological principle often have a dim idea of the history of English. Scissors is a monster not only because of its ss but because of sc-: the late Middle English form was sisoures. Unfortunately, some learned people thought that the root of the word was the same as in Latin scindere “to cut” and added an extra letter. Such mangled words are numerous. Yet more important is another consideration. Does anybody believe that the complexity of English spelling makes our students feel at home in Latin or Greek? Once they learn how to spell symbol and cyst, will they be closer to Homer or Thucydides? Thousands of our graduates have not even been able to master the difference between its and it’s, who’s and whose. Let Thucydides and Caesar rest (lie) in peace. Or have the Italians who spell simbolo “symbol” and cisto “cyst” forfeited an important part of ancient culture? And if we are so dedicated to etymology, why should the British spell colour the Old French way (today it’s couleur!)? Latin color is closer to the source. Anyway, English spelling cannot be used as a springboard to etymology.


I have also been told that it is the aim of spelling to reflect meaning, rather than be phonic. Really? Is it then admissible or even desirable to write choir and pronounce quire? Are we speaking about letters or hieroglyphs? Many countries have reformed their spelling, and the aim of the move has always been to make orthography more “phonic.” True, English spelling is too chaotic to be exploded all at once, but even a few tiny steps in the right direction would be most beneficial. Rest assured: sissors spelled so will cut as well as before (perhaps even better).


Featured Image: The oldest spelling system of which we are aware. Featured Image Credit: “Antiquity Characters Places Of Interest Temple” by fotoerich. CC0 via Pixabay


The post Spick and span: a suspicious hybrid appeared first on OUPblog.



Do liquid biopsies have potential to outperform tissue diagnostics?

Cancer diagnosis can often be an exhausting, extensive process with endless tests, scans, and screenings. We all know the importance of early detection and successful treatments to potentially save thousands of lives every year, so could liquid biopsies offer the lifeline we’ve been holding out for?


Liquid biopsies are an alternative to surgical methods: they use a blood sample to detect circulating tumour DNA (ctDNA), fragments of tumour DNA not associated with a cell, in the patient’s blood serum or plasma. Conventionally, blood tests are used to detect abnormalities—either in blood cell counts, organ function, or protein and hormone levels. In cancer screening, the presence of certain proteins in the blood is used as an indicator for tumours, e.g. the cancer antigen test for detecting ovarian cancer or the prostate-specific antigen test for prostate cancer. These tests tend to produce false positives because many of the indicator proteins are also present in other conditions. However, advances in molecular profiling have led to the use of blood samples to study tumour genomics in a much more precise and less invasive way compared to standard methods.


Tissue biopsy, either by surgery or needle, has long remained the established method of tumour diagnosis, but liquid biopsies offer a range of advantages. Because some tumour sites lie in hard-to-reach places, tissue biopsies can be risky and difficult to obtain. Liquid biopsies solve this issue, with blood samples being relatively quick and easy to obtain. There is also a degree of heterogeneity both between different tumours and within the same tumour. For this reason, tissue biopsies may not always be representative of the entire tumour, whereas blood samples can give visibility of the tumour as a whole.



Image credit: Colonic adenocarcinoma Endoscopic biopsy. CC BY-SA 3.0 via Wikimedia Commons.

Recently, researchers at the Johns Hopkins Kimmel Cancer Centre in Maryland, US, made headlines with news of a single blood test capable of detecting eight different cancers (esophageal, colorectal, ovarian, hepatic, lung, breast, stomach, and pancreatic) via mutated ctDNA and the presence of certain proteins. Currently, five of these cancers have no screening available—however, sensitivity does vary depending on type. Breast cancer detection was approximately 30% effective, whereas hepatic and ovarian tumours exceeded a 95% detection rate.


The world of medicine is continuously moving towards targeted treatments bespoke to individual genetic profiles, and ultimately precision medicine is associated with better outcomes. Using liquid biopsies, researchers found that the number of RAS mutations detected in patients with metastatic colorectal cancer was higher in plasma specimens than in tumour biopsies for progressing tumours. The ability to consistently obtain samples via liquid biopsy allows for monitoring of tumours and mutations; hence ctDNA has been shown to indicate optimum treatment options and also suggest the likelihood of resistance. Similarly, a small study focused on breast cancers reports the use of liquid biopsies to detect acquired resistance to treatment inhibiting certain proteins known as cyclin-dependent kinases.


ctDNA presence in plasma was also used after surgery in melanoma patients to determine prognosis and predict the probability of relapse. The majority of patients with detectable levels of ctDNA in samples taken 12 weeks after surgery deteriorated within a year, and so it was concluded that ctDNA may be indicative of relapse in Stage II/III melanoma, where radiological imaging falls short.



In the last few years, growth of research into liquid biopsies and diagnostics has been exponential; however, many studies have centred on already-formed tumours. Evaluation of liquid biopsies as a diagnostic tool is still some way off, as researchers have to give tumours enough time to develop before recording results. But in terms of tailoring and predicting the right treatment, ctDNA and liquid biopsies appear to offer a much greater insight than tissue biopsies.


Featured image credit: Blood by Mark Marschalko. CC BY 2.0 via Flickr.


The post Do liquid biopsies have potential to outperform tissue diagnostics? appeared first on OUPblog.



Is the pre-crisis central bank model still viable after the Great Recession?

The global financial crisis, which started a decade ago and led to the Great Recession, caused profound changes in central bank practices. Extraordinary responsibilities were thrust upon central banks, particularly the European Central Bank and the US Federal Reserve. A key issue now is whether there will be a return to the central bank model which dominated advanced economies for over two decades leading up to the crisis, or whether the changes caused by the crisis have irrevocably altered that model.


The pre-crisis central bank model was well-honed, even elegant. Moreover, the Great Moderation, which prevailed for about two decades between the mid 1980s and the beginning of the crisis, seemed to have validated the model beyond any doubt, as it was considered to have fostered stable growth and stable, low inflation. Nearly a century after the abandonment of the gold standard – a robust, but inefficient, monetary technology to maintain price stability – the conviction prevailed that a central banking model had been finally found to reconcile a fiat currency with effective inflation control. An independent central bank with price stability as its dominant objective was the most important component of that model. More specific components, like the Wicksellian approach to monetary policy, inflation targeting, different variants of the Taylor rule, and the corridor approach to control interest rates, beautifully complemented that model.


However, as the 2008 crisis unfolded – initially in the US and later during the sovereign debt crisis in Europe – disproportionate consequences rippled throughout the financial system. Extensive data analyses illustrate these consequences. In response to the dramatic situation, central banks acted swiftly to avoid the economy spiralling out of control. As a result, several deep changes to the traditional central bank model occurred.


First, financial stability, an area neglected by central banks, hit back with a vengeance during the crisis, and the task of achieving and maintaining financial stability fell heavily upon central banks’ shoulders. Second, the need to complement the interest rate tool with the management of the central bank balance sheet, which grew enormously during the crisis, blurred the border between monetary and fiscal policy. Third, the support central banks gave to imprudent financial institutions and, in the euro-area, to imprudent sovereigns risked inducing them to repeat the behaviour which had led them into trouble, in the expectation of being rescued again. Fourth, specifically in the euro-area, the European Central Bank had to step in to salvage the euro from what looked like an existential threat. Fifth, the European Central Bank participated, together with the EU Commission and the IMF, in the so-called troika, which imposed intrusive economic policy measures on the countries in the periphery of the euro-area that had to resort to external help during the crisis. The undesired consequence was that the European Central Bank had to move well beyond its original sphere of responsibility and competence, namely monetary policy. Finally, both the US Federal Reserve and the European Central Bank had to more fully incorporate into their decisions the global spillovers of their actions, which inevitably changed the equilibrium conditions throughout the global financial and economic system.



Frankfurt am Main: Building complex of the European Central Bank as seen from North-West (December 2014) by Epizentrum. CC-BY-SA-3.0 via Wikimedia Commons.

The question now is whether these hits have jeopardized the central banking model that prevailed before the Great Recession. Utmost caution is required before answering this question in the affirmative, lest the progress achieved in reaching price stability be lost.


Two radical approaches can be considered in addressing the future of central banking. The first radical approach would be to return to the situation that prevailed in many countries in the 1970s, when most central banks were really only glorified government departments. The second would be to eliminate all the extensions to central banks’ responsibilities and powers brought about by the crisis and re-establish a “narrow central banking model,” with an independent central bank having the sole task of moving interest rates in the pursuit of price stability.


The first approach is undesirable; the model of dependent central banks was tested for decades and failed: where it prevailed the systematic consequence was price instability.


The second approach is desirable; however, it would be imprudent to assume that it could be implemented, because there is no assurance that all the challenges to the pre-crisis central bank model that appeared during the Great Recession will simply fade away. Three issues in particular are likely to persist: (1) financial stability will continue to vie with price stability as the dominant objective of central banks, (2) the risk of confusion between monetary and fiscal policy could persist, and (3) the potential moral hazard that bank and sovereign rescues can cause could still affect equilibrium conditions.


A more fruitful direction is to find adaptations of the pre-crisis central bank model that would obviate the problems that emerged during the crisis without radically altering that model. These adaptations are mostly to be found in the governance structure of central banks. Specifically, whenever the financial stability objective conflicts with the price stability objective, the central bank, acting as an agent of the collectivity, should receive instructions, say from Parliament, on whether to prioritize one objective or the other.


Still in the governance area, special majorities could be required when the central bank seeks to use its balance sheet to complement the interest rate tool. As regards moral hazard, central banks should leave financial institutions or sovereigns to bear enough of the negative consequences of their imprudent actions to discourage a repeat of this kind of behaviour in the future.


Featured image credit: Coins currency investment by stevepb. Public domain via Pixabay.



The post Is the pre-crisis central bank model still viable after the Great Recession? appeared first on OUPblog.



May 22, 2018

Counting usage: why do we need a new Code of Practice?

The COUNTER (Counting Online Usage of Networked Electronic Resources) Code of Practice is the industry-standard format for usage reporting of electronic resources. COUNTER has published a new Code of Practice, Release 5. We spoke with Lorraine Estelle, COUNTER’s Director and Company Secretary, to gain an insight into COUNTER, the new Code of Practice, and what it means for libraries.


How did you get involved in COUNTER? 


My first involvement in COUNTER came in 2003 when I was the director of Jisc Collections. We were a young library consortium at the time, and this was in the early days of COUNTER. We really supported COUNTER because we could see that having data to support decision-making was very important. We gained the funding to develop JUSP (the Journal Usage Statistics Portal). One thing that came out of JUSP was an improvement in COUNTER usage statistics. Librarians would download their usage statistics and note errors, but were too busy to pursue these. Instead, JUSP on a national level was able to go to publishers and inform them of problems with usage statistics. This feedback helped improve the quality of usage reporting. I joined the Board of Directors at COUNTER in 2014, and a year later moved to my current role of Director.


Why do we need usage statistics?


Libraries are investing a great deal in electronic resources, and they have to make an informed decision on what they buy. Usage is not the sole reason for purchasing, but it is a very important piece of information in making informed and evidence-based decisions. And usage statistics are not just about the resources to which a library already subscribes. Access denial reports (also known as turnaways) show when a user is unable to access a unique content item because their institution does not have a license to the content, or their institution’s cap on the number of simultaneous users has been exceeded. These are other pieces of evidence that can inform the librarian.


And why do we need COUNTER?


If you’re using usage statistics to make informed decisions, you need to know that the statistics that you’re getting from publishers are consistent, credible, and comparable – that publishers are counting the same thing and using the same terms. At COUNTER, we set the Code of Practice – how to process your data, how to report – and we also require publishers to have an annual COUNTER audit, to make sure that they are implementing the Code of Practice correctly, and to ensure that there is comparability between publishers.




What is involved in creating a new Code of Practice?


A lot, a lot of work! For Release 5, we have had a Technical Sub-group create the Code of Practice. Clearly, the Code of Practice has to work for everyone in the COUNTER community – publishers, vendors, library consortia, and librarians. We have a really brilliant team of volunteers from across this community. They come together, collaborate, and work together to create the Code of Practice.


Another key element is a conversation with the wider community. We had two consultation periods. From feedback to the first draft Code of Practice, we found we weren’t explaining it as well as we could have done. This first round of feedback helped us to look at the way we were talking about the new Code of Practice and improve clarity. In the second round, we had a lot less feedback, because we addressed the main concerns in the first round.


Why did COUNTER decide that we needed a new Code of Practice?


It is important that usage statistics are comparable across different publishers and vendors. That was becoming increasingly difficult, because technology has moved on a lot since Release 4. A lot of current publishing platform functionality wasn’t there when Release 4 was designed. We have developed Release 5 so it is more adaptable in the future as functionality evolves.


Also, we needed to ensure that we revised the Code of Practice to be clearer and remove ambiguities. When I became Director at COUNTER in 2015, I carried out a large survey that asked the COUNTER community what they thought of Release 4. The feedback showed strongly that because of ambiguities, publishers were interpreting things rather differently. This feedback was coming from across the COUNTER community – publishers, vendors, library consortia, and librarians.


If we could understand something better about COUNTER Release 5, what would it be?


One of our common questions is about the metrics: “Why have you changed the metrics? What does it mean?” The answer is that we are counting user behaviour, user actions. This is important because when a librarian uses COUNTER statistics for evaluation, what they really want to know is how useful a resource is for their users. We have these new concepts of Investigations and Requests. For example, from a list of search results a user may open three article abstracts; these would be counted as Investigations. After reading the abstracts, the user might download PDFs for two of the articles, both from the same journal; these would be counted as Requests.
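
As a rough illustration of that distinction, the Python sketch below rolls a tiny, invented usage log up into Investigations and Requests. The log format, action names, and journal titles are assumptions made for the example; this captures the spirit of the Release 5 metrics, not the official COUNTER processing rules.

```python
# Simplified sketch (not the official COUNTER rules): every action on a content
# item counts as an Investigation; delivering full text (e.g. a PDF) also
# counts as a Request.
from collections import Counter

# Hypothetical usage log: (journal_title, action)
usage_log = [
    ("Journal of Examples", "abstract_view"),
    ("Journal of Examples", "abstract_view"),
    ("Journal of Examples", "pdf_download"),
    ("Journal of Examples", "pdf_download"),
    ("Annals of Samples", "abstract_view"),
]

FULL_TEXT_ACTIONS = {"pdf_download", "html_full_text"}  # assumed action names

investigations = Counter()
requests = Counter()
for title, action in usage_log:
    investigations[title] += 1          # every action is an Investigation
    if action in FULL_TEXT_ACTIONS:
        requests[title] += 1            # full-text delivery is also a Request

for title in investigations:
    print(f"{title}: {investigations[title]} Investigations, {requests[title]} Requests")
```

Run on this invented log, the journal with two abstract views and two PDF downloads ends up with four Investigations and two Requests, mirroring the example above.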


I think one little misconception is that at first, people said, “Wow, there’s all this data here… we’ll never be able to handle all this.” I think the really clever thing about COUNTER Release 5 is the Master Reports. These can be sliced and diced to suit the needs of the librarian. The Master Reports enable librarians, or indeed publishers, to roll up or drill down through reports with ease.


Master Reports each have several pre-set filtered Standard Views. Often, librarians will want to look at Title Requests and run a cost per download calculation, so that is why we have the Standard Views to address common use cases. They are essentially a set of pre-defined attributes and filters for the corresponding Master Report. The Master Reports provide additional data, such as information about Data Types, Access Types, and Year of Publication.


I think of a Standard View as a templated filter of a Master Report.


Yes, that’s a good way of putting it.
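
To make that “templated filter” idea concrete, here is a minimal Python sketch in which a few invented rows stand in for a Title Master Report, a Standard View is produced by applying pre-set filters, and the cost-per-download calculation mentioned above is run on the result. The field values, prices, and the exact column layout are illustrative assumptions, not the precise Release 5 report headings.

```python
# Minimal sketch: a Standard View as a pre-defined filter over a Master Report.
master_report = [
    {"title": "Journal of Examples", "metric": "Total_Item_Requests",
     "access_type": "Controlled", "year": 2018, "count": 420},
    {"title": "Journal of Examples", "metric": "Total_Item_Investigations",
     "access_type": "Controlled", "year": 2018, "count": 910},
    {"title": "Annals of Samples", "metric": "Total_Item_Requests",
     "access_type": "OA_Gold", "year": 2018, "count": 150},
]

def standard_view(rows, metric, access_type):
    """Apply the pre-set filters that turn a Master Report into a view."""
    return [r for r in rows if r["metric"] == metric and r["access_type"] == access_type]

# Common use case: title-level requests for licensed content, then cost per download.
subscription_cost = {"Journal of Examples": 2100.00}  # hypothetical prices
for row in standard_view(master_report, "Total_Item_Requests", "Controlled"):
    cost = subscription_cost.get(row["title"])
    if cost:
        print(f'{row["title"]}: {cost / row["count"]:.2f} per download')
```

The point of the design is that the underlying Master Report carries the extra attributes (Data Type, Access Type, Year of Publication), and each Standard View is simply one fixed way of filtering it.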


What’s going to be the biggest challenge, and biggest opportunity, for future usage reporting?


I think the biggest improvement will be that with Release 5, it’s going to be much easier to manage and much more comparable because I think we’ve dealt with the issue of different technology on publishers’ platforms. It is designed to be adapted and extended as digital publishing changes over the years.


I think the biggest challenge is learning the new vocabulary in Release 5 – terms such as “Investigation” and “Request” – and getting used to it. We are doing what we can to help, so we have published The Friendly Guide to Release 5 for Librarians. The author, Tasha Mellins-Cohen, is a genius at taking complicated technical things and translating them into things that people like me can understand. We also have a series of webinars planned, to talk librarians through the new Code of Practice. Next year, I’m hoping that we can put together some training modules. Where I’d like to start is training for librarians who have no experience at all of COUNTER, because things aren’t obvious if you’ve never done it before. After that, I’d like to create resources for libraries to train staff, and then building up to a complex level.


In one word, what makes Release 5 better for libraries?


Flexibility.


 


Featured image credit: Home office by Free-Photos. CC0 via Pixabay.



The post Counting usage: why do we need a new Code of Practice? appeared first on OUPblog.



Modernising royal weddings: a historical perspective

Prince Harry and Meghan Markle’s wedding demonstrated on a spectacular scale that there is an enduring interest among sections of the press and public in royal love stories. Amidst all the pomp and circumstance, and alongside all the usual reports on street parties, flowers, presents, and the bridal dress, the media coverage focused on the couple’s desire to “democratise” the celebrations by enabling a greater number of ordinary people to share in their wedding day than ever before. But this is not the first time that the younger brother of a future king has chosen to marry a woman whose modernising agenda has worked to transform the monarchy’s public relations strategy. Nor is it the first time that the media have celebrated a royal wedding for momentarily bringing together a British people who are otherwise deeply divided by recent events. We need to instead look back to 1934 and the little-known royal wedding of Prince George (youngest surviving son of King George V) and the famously stylish Princess Marina of Greece as a key moment when royal romance was “reinvented” for mass consumption with the explicit aim of generating British national unity at a time marked by social and political turbulence.


Image credit: The wedding portrait of Princess Marina and Prince George, 1934. © National Portrait Gallery, London NPGx158915

Royal weddings were first staged as mass public events in the years immediately after the First World War. In a period marked by industrial unrest, economic instability, and elite concerns about the appeal of radical socialism to newly-enfranchised working-class voters, the royal household worked in tandem with the British media to stage royal weddings as exercises in nation-building. The media presented these royal weddings as having an all-encompassing effect on the British public leading to the temporary suspension of political divisions and social animosities in favour of a national unity centred on the happy couple.


But modern royal weddings were not simply official public relations exercises designed by courtiers and news editors to bind British subjects of the crown more closely together around the symbolic focal point of a family monarchy. Rather, these marriages carried a broad cultural appeal for media audiences too. There was a public appetite for news about the love stories and glamorous personalities at the heart of these occasions. In the years between the wars, a new culture of romance emerged in Britain, which placed special emphasis on “true love” rooted in emotional fulfilment, like-mindedness, and intimacy. The royal marriages of the interwar period were celebrated as “true love” matches and, in 1934, Prince George and Princess Marina became the first British royal couple to speak candidly to news reporters about their emotions and excitement following their engagement. A few weeks later, they would also become the first royal couple to be pictured kissing by the tabloids and newsreels.


Image credit: Princess Marina pictured in the special royal wedding edition of fashion magazine Vogue, 28 November 1934.  © The British Library Board 

While the prince and princess certainly played up the romance of their relationship for the sake of photographers and interviewers, the media took on a key role in making royal love stories more accessible to the public. Journalists relentlessly pursued members of the House of Windsor, hunting for “human-interest” stories that might cast some light on the “real” people behind the royal public images. In many ways, royalty became Britain’s answer to Hollywood celebrity after 1918 and, in Marina, the press finally met their match. As a royal exile who had had to flee her Greek homeland following the political revolutions of the early 1920s, she became an extremely adept self-publicist who was able to attract positive media attention despite her inauspicious status. With her assured Parisian fashion sense and good looks, she cast herself as a new kind of royal woman—a change welcomed by Prince George who viewed his fiancée as a breath of fresh air and wrote to Marina’s brother-in-law, Prince Paul of Yugoslavia, telling him so:


Everyone is so delighted with her—the crowd especially—’cos when she arrived at Victoria Station they expected a dowdy princess—such as unfortunately my family are—but when they saw this lovely chic creature—they could hardly believe it and even the men were interested and shouted ‘Don’t change—don’t let them change you!’


Marina’s modernising agenda didn’t stop there. She became the first member of the British royal family to engage with crowds by waving to them. At the very time that the European dictators were using new gestural salutes to foster the loyalty of their respective peoples, Marina’s wave worked to endear her to members of the public who felt that they shared a close emotional bond with her. George and Marina also sat for England’s leading society photographer, Dorothy Wilding, who pictured them looking just like the modern film stars of the day. George gave permission for the photos to be sold as souvenir postcards—again signalling a break with royal tradition, which, until 1934, had ensured that intimate romantic images like these were kept hidden from public view.


Image credit: Princess Marina and her famous wave on the front-page of the Daily Sketch, 17 September 1934. © The British Library Board

But perhaps the most significant innovation of all was the way the prince allowed the BBC to broadcast his wedding ceremony live to the nation and empire from Westminster Abbey. In our multimedia age, it is difficult to appreciate the imaginative power of radio, but those people who listened in to the 1934 royal wedding ceremony described in letters written to organisers of the event how they felt as though they had been transported to the Abbey. The transmission of the words spoken by the couple as part of the service, and the music and sounds of the crowds that gathered in central London along the procession route, created an intensely immersive experience. And yet, at the same time, the letter writers described how they had felt connected to the millions of other listeners who had tuned in for the event—the BBC’s royal wedding broadcast inspiring in them a sense of a shared national community centred on George and Marina’s love story.


Image credit: The first royal wedding ceremony broadcast live to Britain and the world. Daily Mirror, 30 November 1934. © The British Library Board

The royal marriages of the interwar period projected the British royal family as the symbolic focal point of mass society. The religious principles that underpinned royal domesticity were publicly championed through the marriages of George V’s children, although this virtuous model created problems for his eldest son and successor, Edward VIII, who, in searching for his own true love match chose to pursue romance with a divorcée outside the confines of Christian marriage, resulting in his abdication. Nevertheless, the stage was set for a century of royal weddings that would continue to bring members of the public into closer emotional communion with their royal rulers, providing temporary respite from the social, cultural, and political divisions that have periodically divided the British public through celebrations of that peculiarly modern emotion—romantic love.


Featured image credit: Prince Harry and Meghan Markle visit Belfast by Northern Ireland Office. CC BY 2.0 via Wikimedia Commons


The post Modernising royal weddings: a historical perspective appeared first on OUPblog.



Library discovery: past, present, and future

Librarians have been rising to the challenge of helping users discover content as long as libraries have existed, and evolving discovery solutions are an interesting byproduct of the information dissemination challenges of the time. Before the printing press, medieval libraries were typically geographically isolated with a small number of hand-copied texts. Discovery tools included handwritten omnibus catalogs listing collections from the libraries of other nearby cloisters or monasteries, so the limited number of books could be more widely discoverable. The medieval library itself could also double as a discovery tool via stained glass windows and paintings, which were arranged to correlate with the subjects of the books found around them.


Fast forward several hundred years to the era of the physical card catalog, and handwritten discovery tools were still in use. Library school curricula included training on how to hand-write catalog cards in a special library script before typewriters were widely available. The era of the card catalog lasted more than 150 years—OCLC printed its last catalog cards less than three years ago in 2015. Card catalogs began to cede their role in primary discovery functionality when MARC records started to gain traction in the 1970s. Although MARC still has a stronghold in library workflows, we are now well into the era of web scale discovery services. While it is difficult to determine actual numbers, a 2016 Library Technology Report states, “EBSCO, OCLC, and ProQuest—reported a combined 11,700 libraries using products that rely on their knowledge bases. Ex Libris’s corporate website lists another 5,600 total customers.”


Strong relationships between discovery services and publishers are crucial to facilitate end-user discovery in this content-abundant era, and my role as Discoverability Associate at Oxford University Press is devoted to managing these relationships and ensuring OUP is meeting industry standards for discovery. I have regular conversations to discuss mutual customer questions, new products, and industry trends with discovery vendors including EBSCO, ExLibris, OCLC, and ProQuest. Along with discovery services, content is indexed across a wide range of discovery tools, including academic search engines like Google Scholar and more subject-specific abstracting & indexing services. OUP is also committed to providing high-quality, industry-standard metadata to facilitate discovery and access. A fuller picture of core metadata elements provided, how metadata is delivered to discovery partners, and a list of all discovery partners indexing OUP content is available in the NISO Open Discovery Initiative checklist posted on the Journals and Online Products Librarian Resource Center pages, along with the most current KBART files.




Before the card catalog was readily available as a library discovery tool, finding titles in the library was often dependent on the memory of the librarian. Unfortunately, this is still the case in many ways. One of the main limitations with current web scale discovery tools is the difficulty tracking institutional holdings in the discovery knowledgebase. This work is typically left to the individual librarians who must manually track different collection packages across all content providers with whom they have agreements.


To begin conversation on how to address this industry need, in 2017 NISO formed a KBART Automation Working Group to expand on the 2014 KBART Phase II recommendations. The group comprised industry stakeholders from content provider, discovery service, and library backgrounds, and the output of a year of bi-monthly meetings (an overview of the current landscape, use-case examples where automation will solve current problems, and a prototype for automated delivery) will be available for public comment in the coming months. It is clear where the industry is heading, and OUP will continue to invest in our KBART capabilities in the years ahead.
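
As a rough sketch of the kind of holdings checking that KBART automation aims to take off librarians’ desks, the Python fragment below parses a KBART title list, a tab-separated file whose columns include publication_title, print_identifier, and online_identifier, and compares it against a set of licensed ISSNs. The file name, the entitlement list, and the simple matching logic are assumptions for illustration only, not a description of the working group’s prototype.

```python
# Illustrative sketch: load a KBART TSV and report which titles an
# institution's entitlements cover. Assumed inputs throughout.
import csv

def load_kbart(path):
    """Read a KBART tab-separated file into a list of dicts keyed by its
    column headers (e.g. publication_title, online_identifier)."""
    with open(path, newline="", encoding="utf-8") as fh:
        return list(csv.DictReader(fh, delimiter="\t"))

def check_holdings(kbart_rows, licensed_issns):
    """Split the title list into covered and uncovered titles by ISSN match."""
    covered, uncovered = [], []
    for row in kbart_rows:
        issn = row.get("online_identifier") or row.get("print_identifier")
        (covered if issn in licensed_issns else uncovered).append(row["publication_title"])
    return covered, uncovered

# Hypothetical usage:
# rows = load_kbart("publisher_journals_package.txt")
# covered, uncovered = check_holdings(rows, licensed_issns={"1234-5678", "2345-6789"})
# print(len(covered), "titles covered;", len(uncovered), "need review")
```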


It is not surprising that the discovery tools that have risen to prevalence in the Information Age carry with them many of the same challenges inherent to the unprecedented volume of content library users must comb through to discover the information they need. As many have pointed out before, we are still at the beginning of the Information Age. There is a lot of work to be done to make present-day discovery more encompassing and efficient to enable the library user to find the information that they need. If catalog cards can stay relevant for 150 years, KBART and other contemporary discovery tools inevitably face extensive evolution in the years ahead.


Featured image credit: “network-3213667_1280” by geralt. CC0 via Pixabay.


The post Library discovery: past, present, and future appeared first on OUPblog.



May 21, 2018

Does consolidation in health care mean patients suffer?

The recent news that US retail giant CVS Health will purchase insurance giant Aetna, in part to gain millions of new customers for its prescription drug and primary care businesses, is another ominous sign for patients. Patients should worry about all the continued consolidation in the health care industry, whether it is Walgreens buying Rite-Aid to increase their pharmacy clout; Anthem’s ill-fated attempt to purchase Cigna to become an insurance monopoly; or hospital systems like Partners Healthcare in Boston trying to buy the hospitals and physician networks in and around its service area to control patient flow and increase market share. Consolidation often limits competition, and when that happens in market-based systems the result, says good research, is often that the cost of health care goes up. This does not benefit patients, who increasingly are paying more out of pocket for their insurance and for the services they receive from doctors, hospitals, labs, and drug companies.


The Affordable Care Act did little to encourage greater competition in the health care marketplace. That was probably by design, since those creating the legislation held an implicit assumption that the bigger players in each of the different industry areas like insurance, pharmacy, and hospital care could deliver given the size of the insurance expansion the ACA would promote. As we see from the existing premium inflation on the exchanges across the country and with prescription drugs, and the continued long delays in people’s ability to access care, this assumption was not accurate. To the contrary, the ACA’s focus on new and unproven structures like accountable care organizations; new payment models that reward scale and resource investment in things like information technology; and rewarding those organizations that have the most comprehensive performance measurement infrastructures has encouraged the kind of profit-oriented consolidation in the industry that does less to improve the overall system. Also, given the increased squeeze by payers like Medicare on payments to hospitals for example, mergers and acquisitions are a natural yet dysfunctional corporate response to higher levels of uncertainty in the external health care environment.



Life insurance care application by rawpixel. Public domain via Pixabay.

What does this all mean for patients? First, it means less choice in everything from where they get their medications to which doctors and hospitals they can go to. For example, the Boston metro area is dominated by two very large delivery systems available to patients: Partners or Beth Israel, the latter of which intends to merge with Lahey Clinic, creating another care delivery giant in the Boston region. Once you are in these systems of care, increasingly they don’t let you out. They need to capitalize on the investments made in getting bigger by forcing patients to see only their doctors, labs, and surgical centers. Once choice is limited, prices can be increased, and without a lot of competitive pressure exerted, for example, on care quality or patient satisfaction, the systems may underinvest in these things. Health care costs in Massachusetts are among the highest in the nation, and this reality is one reason why.


A second everyday implication of consolidation is that it makes the health care experience feel more transactional rather than relational. A transactional experience is impersonal, standardized, and organization- rather than professional-driven. The focus is on efficiency and the organization’s interest in turning over volume rather than the patient’s interest in tailored service delivery. In this way, patient contact with the system becomes a maze of 800 numbers, call centers, automated replies, and web-based clicks that move us through standard templates. Consolidation produces large, bureaucratic organizations that have a harder time seeing us as unique individuals, and catering to our parochial preferences and needs, which in health care is important for keeping patients healthy. This often produces a generic and substandard experience for many patients; causes many to lower their expectations for what they will get for their investment of time and money; and makes us all too reliant on “the company” for taking care of us, rather than the clinician.


Should we take the waves of consolidation in all parts of the health care system more seriously in terms of their direct impact on patients? We sure should. Closer regulatory scrutiny would help. Often this scrutiny is watered down in the face of specious arguments about consolidation in health care “saving jobs” and contributing positively in terms of economic impact to a given locale. The paradoxical rationale that consolidation will reduce health care costs and improve care quality also gets put forth, even in the face of evidence to the contrary. In a health care industry that remains beholden to market-based principles for doing business, state and federal governments must do a better job of enforcing existing anti-trust laws, certificate of need requirements, and other tools available to ensure health care competition and protect patient interests. That said, in some geographic locales and parts of the industry, it may already be too late.


A version of this article originally appeared on The Healthcare Blog.


Featured image credit: business office contract by rawpixel. Public domain via Pixabay.


The post Does consolidation in health care mean patients suffer? appeared first on OUPblog.



WHO advances the right to health for universal health coverage

The World Health Organization (WHO) has been central to the development of human rights for public health, and as the Organization seeks to mainstream human rights in global health governance, this year’s World Health Assembly (May 21-25) comes at a unique time and provides a key forum to advance the right to health as a moral foundation and political catalyst to advance universal health coverage.


WHO was established to provide global support for state efforts to realize health as a human right, with the WHO Constitution framing international health cooperation under the then-unprecedented declaration that “the enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being.” With both the WHO Constitution and the Universal Declaration of Human Rights (UDHR) coming into force in 1948, there was great promise that these two commitments would complement each other, with WHO—like all United Nations (UN) specialized agencies—supporting human rights principles and standards in all its health policies, programs, and practices.


Yet, the promise of human rights to advance public health was long threatened by political constraints during the Cold War, WHO resistance to legal discourses, and medical ambivalence toward human rights. Where the 1966 International Covenant on Economic, Social and Cultural Rights (ICESCR) was seen to narrow the legal development of the right to health, subsequent leadership changes in WHO would seek to revitalize organizational efforts to engage with human rights, redefining international health goals to reflect human rights norms in the 1978 Declaration of Alma-Ata. This rights-based approach to health would endure in the early international response to HIV/AIDS, with WHO’s Global Programme on AIDS applying human rights principles to address the individual behaviors leading to HIV transmission, viewing respect for individual rights as a precondition for the public’s health.




As human rights advancements flourished in the aftermath of the Cold War, WHO came to consider a more systematic operationalization of civil, cultural, economic, political, and social rights to an array of public health challenges. The 1993 World Conference on Human Rights first articulated organizational responsibilities for human rights through the Vienna Declaration and Programme of Action, expanding human rights implementation beyond the UN’s human rights mechanisms to encompass the entire UN system. Reflecting this cross-cutting approach to human rights, UN Secretary-General Kofi Annan called on UN specialized agencies in 1997 to “mainstream” human rights in all their activities. WHO took up this UN call, enlisting human rights advisors to operationalize human rights in WHO policies, programs, and practice. Building from this evolving work to advance a rights-based approach to health, WHO in 2012 brought together the core values of gender, equity, and human rights (GER) into one centralized mainstreaming team. This GER Team has emphasized the primacy of these interconnected values to health programming, offering the Organization a more unified, systematic, and standardized approach to mainstreaming across health programs.


The 2017 election of Tedros Adhanom Ghebreyesus as WHO Director-General has provided renewed leadership in advancing human rights in global health. With Dr. Tedros advocating tirelessly during his campaign that “universal health coverage is our best path to live up to WHO’s constitutional commitment to the right to health,” WHO has since invoked human rights as a foundation for its flagship universal health coverage (UHC) initiative—ensuring that quality health services can be accessed equitably and without financial hardship. Signaling Dr. Tedros’s determination to facilitate accountability for the progressive realization of the right to health through UHC efforts, WHO has recently sought to expand collaborations with civil society and signed a Memorandum of Understanding with the Office of the High Commissioner for Human Rights (OHCHR). These new partnerships examine human rights “to health and through health,” providing a moral foundation for WHO’s continuing efforts to frame UHC as the overarching focus of all WHO activities.


This week’s World Health Assembly, the first since the election of Dr. Tedros as Director-General, provides a landmark forum for consolidating these shifts toward human rights in global health governance. Drawing on a history of over sixty World Health Assembly resolutions that have addressed human rights across a variety of health programs, the Assembly will be adopting a series of rights-based resolutions alongside the WHO General Programme of Work (GPW) for 2019-2023. Groundbreaking in its focus on rights-based political leadership in health, the current Draft GPW commits WHO to advocate for health at the “highest political level,” detailing that:


Consistent with its Constitution, WHO will be at the forefront of advocating for the right to health in order to achieve the highest attainable standard of health for all…. WHO will strengthen its health diplomacy and work to include health in global political bodies such as G20, G7, BRICS, and in regional and municipal political bodies.


The adoption of this GPW will serve as a political catalyst for WHO’s human rights leadership in global health governance, promoting the implementation of gender equality, health equity, and human rights in ways that address underlying determinants of health across the newly developed Sustainable Development Goals (SDGs).


Looking beyond WHO, there is an imperative to understand human rights implementation across the larger global governance landscape that underlies public health. This rapidly expanding global health governance landscape—from the occupational safety and health policies of the International Labour Organization (ILO) to the nutrition security assistance of the UN Food and Agriculture Organization (FAO)—has led to a diverse range of institutional approaches to human rights mainstreaming. Where WHO seeks to put human rights at the center of global health governance, the proliferation of global institutions for public health provides a basis and a rationale to compare the unique institutional structures that facilitate organizational action to implement the right to health, health-related human rights, and rights-based approaches to health. As we simultaneously celebrate seventy years of human rights advancements since the UDHR and seventy years of global health governance through WHO, it is necessary to understand the institutional determinants of human rights mainstreaming as a basis to realize a future for human rights in global health.


Featured image credit: The World Health Assembly meets at the World Health Organization in Geneva, Switzerland (World Health Organization/Pierre Albouy)


The post WHO advances the right to health for universal health coverage appeared first on OUPblog.



May 20, 2018

Covert action is theatre – and the curtain isn’t coming down yet

We are told that intelligence activities are eye-wateringly secret. Yet they have been surprisingly prominent of late. Senior politicians and armies of online bloggers alike are trading bitter accusations about dark arts and dirty tricks.


Most prominently, Russia stands accused of using “little green men” to annex Crimea, of influencing the 2016 US presidential election, and of spreading divisive so-called “fake news” across Europe to undermine faith in institutions. Many blame the Kremlin for orchestrating the attempted assassination of former intelligence officer Sergei Skripal in Salisbury.


Meanwhile, Russia accuses the US of interference in the post-Soviet space. And American covert support for various Syrian rebel groups is an open secret. Britain is often mentioned in the same breath.


This is covert action: perhaps the most sensitive – and controversial – of all state activity. Commonly understood as interference in the affairs of other states in a “plausibly deniable” manner, it is, of course, nothing new and is most associated with the CIA and the Cold War. What is striking, however, is the visibility of the supposedly hidden hand behind recent operations.


We seem to be living in an era of open secrecy.


Although “plausible deniability” has always been a fig leaf to an extent, it is certainly difficult to maintain secret sponsorship in the twenty-first century. Whistle-blowers, investigative journalists, and civil society, all armed with camera phones, USB sticks, and modern communications technology, challenge the ability of states to operate in the shadows. Sophisticated algorithms are increasingly able to unravel the origins of computational propaganda campaigns, whilst the proliferation of special forces and private military companies makes it easier to spot kinetic operations.


This appears to suggest that states are “sloppy” at covert action; that senior officials are delusional about their ability to maintain secrecy; and that covert action is a dying art form unfit for the twenty-first century.


Upon closer inspection, however, this is simply not the case.


Premiers, policymakers, and even the press are becoming aware of a remarkable truth: covert action does not require absolute secrecy to be successful. In fact, lack of secrecy is not as damaging as we might expect and can actually be deliberate. Uncovering the hidden hand behind an operation is not therefore necessarily an intelligence coup. Covert action has multiple audiences and multiple degrees of exposure.


Indeed, some leaders are embracing implausible deniability.


Covert actions leave calling cards to generate coercion. This is barely disguised “bullying” by any other name. When visible, yet unacknowledged, covert actions allow leaders to communicate quietly, without escalating crises into dangerous conflict.


These covert actions also deliberately create ambiguity. Aided by “fake news” and competing narratives, they blur the lines between truth and fiction, legitimate and illegitimate activity, internal disorder and external intervention, even war and peace. This makes it difficult for the West, and institutions like NATO, to respond. Ambiguity also enables myths to take hold; implausible deniability is a fabulous instrument for creating fear.


Let us take the Skripal case as an example. The Kremlin vehemently denies involvement and yet many people – including the British Foreign Secretary – have pointed the finger squarely at Moscow. If the Russian state was behind the attempted assassination, the question is why it would do something so visible. Why use a military grade nerve agent traceable to Russia?


It would be naïve to assume that, if Russia was behind it, the Kremlin expected its involvement to remain hidden from foreign intelligence services and the international community alike. The means of execution demonstrated resolve and communicated a message of strength and deterrence without risking escalation. Non-acknowledgement limited the British response.


The means of execution – if Russia was behind it – also deliberately created exploitable ambiguity. It put instant pressure on the US-UK alliance by forcing the American leadership to rate the British intelligence assessment; it put instant pressure on the White House by daring President Trump to speak out against Russia; and it put instant pressure on the UK by forcing agencies to share unprecedented levels of intelligence with European allies in order to implicate Russia.


And if the Russian state was not behind the attack, the swirling narratives – the blurred lines between fact and fiction – have still helped cultivate Putin’s strongman image. They have demonstrated resolve regardless.


Non-acknowledged intervention is performance; covert action is secret theatre. Recognising this, and moving beyond the flawed notion of “plausible deniability”, offers fresh insight into what is going on around us. States use secret intelligence services to interfere and influence, but they expect to get found out. Exposure is part of the plot. Trump and Putin both love a performance. And so, the curtain is not coming down just yet.


Featured image credit: ‘Before the Show’ by Rob Laughter. Public Domain via Unsplash.


The post Covert action is theatre – and the curtain isn’t coming down yet appeared first on OUPblog.


