Richard Veryard's Blog, page 7

November 7, 2018

YouTube Growth Hacking

Fascinating talk by @sophiehbishop at @DigiCultureKCL about YouTube growth hacks, and the "algorithm bros" that promote them (and themselves).

Sophie Bishop talking about 'algorithm bros'

November 1, 2018

Ethical communication in a digital age

At the @BritishAcademy_ yesterday evening for a lecture by Onora O'Neill on Ethical Communication in a Digital Age, supported by two more philosophy professors, Rowan Cruft and Rae Langton.

Much of the discussion was about the threats posed to public reason by electronically mediated speech acts, and the challenges of regulating social media. However, although the tech giants and regulators have an important role, the primary question in the event billing was not about Them but about Us - how do *we* communicate ethically in an increasingly digital age?

I don't claim to know as much about ethics as the three professors, but I do know a bit about communication and digital technology, so here is my take on the subject from that perspective.

The kind of communication we are talking about involves at least four different players - the speaker, the spoken-to, the spoken-about, and the medium / mediator. Communication can be punctuated into a series of atomic speech acts, but it is often the cumulative effects (on public reason or public decency) that worry us.

So let me look at each of the elements of this communication in turn.


First the speech act itself. O'Neill quoted Plato, who complained that the technology of writing served to decouple the writer from the text. On social media, the authorship of speech acts becomes more problematic still. This is not just because many of the speakers are anonymous, and we may not know whether they are bots or people. It is also because the dissemination mechanisms offered by the social media platforms allow people to dissociate themselves from the content that they "like" or "retweet". Thus people may disseminate nasty material while perceiving themselves not as the authors of this material but merely as mediators of it, and therefore not holding themselves personally responsible for the truth or decency of the material.

Did I say truth? Even propositional speech acts are not always easily sorted into truth and lies, while many of the speech acts that pollute the internet are not propositions but other rhetorical gestures. For example, the endless repetition of "what about her emails?" and "lock her up" is designed to frame public discourse to accord with the rhetorical goals of the speaker. (I'll come back to the question of framing later.)

The popular social media platforms offer to punctuate our speech into discrete units - the tweet, the post, the YouTube video, or whatever. Each unit is then measured separately, and the speaker may be rewarded (financially or psychologically) when a unit becomes popular (or "goes viral"). We tend to take this punctuation at face value, but systems thinkers including Bateson and Maturana have drawn attention to the relationship between punctuation and epistemology.

(Note to self - add something here about metacommunication, which is a concept Bateson took from Benjamin Lee Whorf.)


Full communication requires a listener (the spoken-to) as well as a speaker. Much of the digital literacy agenda is about coaching people to interpret and evaluate material found on the internet, enabling them to work out who is actually speaking, and whether there is a hidden commercial or political agenda.

One of the challenges of the digital age is that I don't know who else is being spoken to. Am I part of an undifferentiated crowd (unlikely) or a filter bubble (probably)? The digital platforms have developed sophisticated mechanisms for targeting people who may be particularly receptive to particular messages or content. So why have I been selected for this message? Why exactly does Twitter or Facebook think it would be of interest to me? This is a fundamental divergence from older forms of mass communication - the public meeting, the newspaper, the broadcast.

And sometimes a person can be targeted with violent threats and other unpleasantries. Harassment and trolling techniques developed as part of the #GamerGate campaign are now widely used across the internet, and may often be successful in intimidating and silencing the recipients.



The third (and often unwilling) party to communication is the person or community spoken about. Where this is an individual, there may be issues around privacy as well as avoidance of libel or slander. It is sometimes thought that people in the public eye (such as Hillary Clinton or George Soros) are somehow "fair game" for any criticism or disparagement that is thrown in their direction, whereas other people (especially children) deserve some protection. The gutter press has always pushed the boundaries of this, and the Internet undoubtedly amplifies this phenomenon.

What I find even more interesting here is the way recent political debate has focused on scapegoating certain groups. Geoff Shullenberger attributes some of this to Peter Thiel.

"Peter Thiel, whose support for Trump earned him a place on the transition team, is a former student of the most significant theorist of scapegoating, the late literary scholar and anthropologist of religion René Girard. Girard built an ambitious theory around the claim that scapegoating pervades social life in an occluded form and plays a foundational role in religion and politics. For Girard, the task of modern thought is to reveal and overcome the scapegoat mechanism–to defuse its immense potency by explaining its operation. Conversely, Thiel’s political agenda and successful career setting up the new pillars of our social world bear the unmistakable traces of someone who believes in the salvationary power of scapegoating as a positive project."

Clearly there are some ethical issues here to be addressed.


Fourthly, we come to the role of the medium / mediator. O'Neill talked about disintermediation, as if the Internet allowed people to talk directly to one another without having to pass through gatekeepers such as newspaper editors and government censors. But as Rae Langton pointed out, this is not true disintermediation, as these mediators are merely being replaced by others - often amateur curators. Furthermore, the new mediators can't be expected to have the same establishment standards as the old mediators. (This may or may not be a good thing.)

Even the old mediators can't be relied upon to maintain the old standards. The BBC is often accused of bias, and its response to these accusations appears to be to hide behind a perverse notion of "balance" and "objectivity" that requires it to provide a platform for climate change denial and other farragoes.

Obviously the tech giants have a commercial agenda, linked to the Attention Economy. As Zeynep Tufekci and others have pointed out, people can be presented with increasingly extreme content in order to keep them on the platform, and this appears to be a significant force behind the emergence of radical groups, as well as a substantial shift in the Overton window. There appears to be some correlation between Facebook usage and attacks on migrants, although it may be difficult to establish the direction of causality.
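To make the dynamic concrete, here is a toy sketch (entirely my own, not a description of how YouTube or any real platform works): a recommender that always picks whichever item maximizes predicted engagement, where engagement is assumed to peak for content slightly more extreme than whatever the viewer watched last. Under that assumption, the recommendations ratchet upwards by construction.

```python
# Toy model: items are characterised only by an "extremity" score in [0, 1].
# Assumption (purely illustrative): predicted engagement is highest for items
# slightly more extreme than whatever the viewer watched last.
items = [i / 100 for i in range(101)]

def predicted_engagement(item, last_watched):
    # Peak engagement for content a notch beyond the viewer's current position.
    return -abs(item - (last_watched + 0.05))

def recommend(last_watched):
    # Engagement-maximising choice, ignoring any notion of accuracy or harm.
    return max(items, key=lambda item: predicted_engagement(item, last_watched))

position = 0.1  # viewer starts with fairly mild content
for step in range(10):
    position = recommend(position)
    print(f"step {step}: recommended extremity = {position:.2f}")
# The recommended extremity ratchets upward at every step, because the
# objective rewards keeping the viewer watching, not keeping them informed.
```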

But the platforms themselves are also subject to political influence. Around Easter 2016, people were wondering whether Facebook would swing the American election against Trump. A posse of right-wing politicians met Zuckerberg in May 2016, and he then bent over backwards to avoid anyone thinking that Facebook would give Clinton an unfair advantage. (Spoiler: it didn't.)

So if there is a role for regulation here, it is not only to protect consumers from the commercial interests of the tech giants, but also to protect the tech giants themselves from improper influence.


Finally, I want to emphasize Framing, which is one of the most important ways people can influence public reason. Trump is of course a master of this - constantly shifting the terms of the debate, so that his opponents are always forced to argue on his terms.

Hashtags provide a simple and powerful framing mechanism, which can work to positive effect (#MeToo) or negative (#GamerGate). Trump's frequent invocation of #FakeNews enables him to preempt and negate inconvenient facts.

In other words Rhetoric eats Epistemology for breakfast. (Perhaps that will give my philosopher friends something to chew on?)



Anthony Cuthbertson, Facebook use linked to attacks on refugees, says study (Independent, 22 August 2018)
Paul F. Dell, Understanding Bateson and Maturana: Toward a Biological Foundation for the Social Sciences (Journal of Marital and Family Therapy, 1985, Vol. 11, No. 1, 1-20). (Note: even though I have both Bateson and Maturana on my bookshelf, the lazy way to get a reference is to use Google, which points me towards secondary sources like this. When I have time, I'll put the original references in.)

Alex Johnson and Matthew DeLuca, Facebook's Mark Zuckerberg Meets Conservatives Amid 'Trending' Furor (NBC News, 19 May 2016)

Robinson Meyer, How Facebook Could Tilt the 2016 Election (Atlantic, 18 April 2016)

Geoff Shullenberger, The Scapegoating Machine (The New Inquiry, 30 November 2016)

Zeynep Tufekci, YouTube, the Great Radicalizer (New York Times, 10 March 2018)



Wikipedia: Attention Economy, Disintermediation, Framing, Gamergate Controversy, Metacommunication, Overton Window


June 13, 2018

Practical Ethics

A lot of ethical judgements appear to be binary ones. Good versus bad. Acceptable versus unacceptable. Angels versus Devils.

Where questions of ethics reach the public sphere, it is common for people to take strong positions for or against. For example, there have been some high-profile cases involving seriously sick children, whether they should be provided with some experimental treatment, or even whether they should be kept alive at all. These are incredibly difficult decisions for those closely involved, but the experts are then subjected to vitriolic attack from armchair critics (often from the other side of the world) who think they know better.

Practical ethics are mostly about trade-offs, interpreting the evidence, predicting the consequences, estimating and balancing the benefits and risks. There isn't a simple formula that can be applied; each case must be carefully considered to determine where it sits on a spectrum.

The same is true of technology ethics. There isn't a blanket rule that says that these forms of persuasion are good and these forms are bad; there are just different degrees of nudge. We might want to regard all nudges with some suspicion, but retailers have always nudged people to purchase things. The question is whether this particular form of nudge is acceptable in this context, or whether it crosses some fuzzy line. Where does this particular project sit on the spectrum?

Technologists sometimes abdicate responsibility for such questions: whatever the client wants, or whatever the technology enables, is deemed okay. Responsibility means owning that judgement.

When Google published its AI principles recently, Eric Newcomer complained that balancing the benefits and risks sounded like the utilitarianism he learned about at high school. But he also complained that Google's approach lacks impartiality and agent-neutrality. Since utilitarianism is usually understood as the impartial, agent-neutral form of consequentialism, it would be more accurate to describe Google's approach simply as consequentialism.

In the real world, even the question of agent-neutrality is complicated. Sometimes this is interpreted as a call to disregard any judgement made by a stakeholder, on the grounds that they must be biased. For example, ignoring professional opinions (doctors, teachers) because they might be trying to protect their own professional status. But taking important decisions about healthcare or education away from the professionals doesn't solve the problem of bias, it merely replaces professional bias with some other form of bias.

In Google's case, people are entitled to question how exactly Google will make these difficult judgements, and the extent to which these judgements may be subject to some conflict of interest. But if there is no other credible body that can make these judgements, perhaps the best we can ask for (at least for now) is some kind of transparency or scrutiny.

As I said above, practical ethics are mostly about consequences - which philosophers call consequentialism. But not entirely. Ethical arguments about the human subject aren't always framed in terms of observable effects, but may be framed in terms of human values. For example, the idea that people should be given control over something or other, not because it makes them happier, but just because, you know, they should. Or the idea that certain things (truth, human life, etc.) are sacrosanct.

In his book The Human Use of Human Beings, first published in 1950, Norbert Wiener based his computer ethics on what he called four great principles of justice. So this is not just about balancing outcomes.
Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”  
Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.” 
Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”  
Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom”

Of course, a complex issue may require more than a single dimension. It may be useful to draw spider diagrams or radar charts, to help to visualize the relevant factors. Alternatively, Cathy O'Neil recommends the Ethical or Stakeholder Matrix technique, originally invented by Professor Ben Mepham.

"A construction from the world of bio-ethics, the ethical or “stakeholder” matrix is a way of determining the answer to the question, does this algorithm work? It does so by considering all the stakeholders, and all of their concerns, be them positive (accuracy, profitability) or negative (false negatives, bad data), and in particular allows the deployer to think about and gauge all types of best case and worst case scenarios before they happen. The matrix is color coded with red, yellow, or green boxes to alert people to problem areas." [Source: ORCAA]
"The Ethical Matrix is a versatile tool for analysing ethical issues. It is intended to help people make ethical decisions, particularly about new technologies. It is an aid to rational thought and democratic deliberation, not a substitute for them. ... The Ethical Matrix sets out a framework to help individuals and groups to work through these debates in relation to a particular issue. It is designed so that a broader than usual range of ethical concerns is aired, differences of perspective become openly discussed, and the weighting of each concern against the others is made explicit. The matrix is based in established ethical theory but, as far as possible, employs user-friendly language." [Source: Food Ethics Council]



Jessi Hempel, Want to prove your business is fair? Audit your algorithm (Wired 9 May 2018)

Ben Mepham, Ethical Principles and the Ethical Matrix. Chapter 3 in J. Peter Clark and Christopher Ritson (eds), Practical Ethics for Food Professionals: Ethics in Research, Education and the Workplace (Wiley 2013)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Tom Upchurch, To work for society, data scientists need a hippocratic oath with teeth (Wired, 8 April 2018)



Stanford Encyclopedia of Philosophy: Computer and Information Ethics, Consequentialism, Utilitarianism

Related posts: Conflict of Interest (March 2018), Data and Intelligence Principles From Major Players (June 2018)


March 25, 2018

Ethics as a Service

In the real world, ethics is rarely if ever the primary focus. People engaging with practical issues may need guidance or prompts to engage with ethical questions, as well as appropriate levels of governance.


@JPSlosar calls for
"a set of easily recognizable ethics indicators that would signal the presence of an ethics issue before it becomes entrenched, irresolvable or even just obviously apparent".

Slosar's particular interest is in healthcare. He wants to proactively integrate ethics in person-centered care, as a key enabler of the multiple (and sometimes conflicting) objectives of healthcare: improved outcomes, reduced costs and the best possible patient and provider experience. These four objectives are known as the Quadruple Aim.

According to Slosar, ethics can be understood as a service aimed at reducing, minimizing or avoiding harm. Harm can sometimes be caused deliberately, or blamed on human inattentiveness, but it is more commonly caused by system and process errors.

A team of researchers at Carnegie-Mellon, Berkeley and Microsoft Research have proposed an approach to ethics-as-a-service involving crowd-sourcing ethical decisions. This was presented at an Ethics-By-Design workshop in 2013.


Meanwhile, Ozdemir and Knoppers distinguish between two types of Upstream Ethics: Type 1 refers to early ethical engagement, while Type 2 refers to the choice of ethical principles, which they call "prenormative", part of the process by which "normativity" is achieved. Given that most of the discussion of EthicsByDesign assumes early ethical engagement in a project (Type 1), their Type 2 might be better called EthicsByFiat.




Cristian Bravo-Lillo, Serge Egelman, Cormac Herley, Stuart Schechter and Janice Tsai, Reusable Ethics‐Compliance Infrastructure for Human Subjects Research (CREDS 2013)

Derek Feeley, The Triple Aim or the Quadruple Aim? Four Points to Help Set Your Strategy (IHI, 28 November 2017)

Vural Ozdemir and Bartha Maria Knoppers, One Size Does Not Fit All: Toward “Upstream Ethics”? (The American Journal of Bioethics, Volume 10 Issue 6, 2010) https://doi.org/10.1080/15265161.2010.482639

John Paul Slosar, Embedding Clinical Ethics Upstream: What Non-Ethicists Need to Know (Health Care Ethics, Vol 24 No 3, Summer 2016)


Conflict of Interest

@riptari (Natasha Lomas) has a few questions for DeepMind's AI ethics research unit. She suggests that
"it really shouldn’t need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology’s societal impacts"

and points out that
"there’s a reason no one trusts the survey touting the amazing health benefits of a particular foodstuff carried out by the makers of said foodstuff".

As @marionnestle remarks in relation to the health claims of chocolate,
"industry-funded research tends to set up questions that will give them desirable results, and tends to be interpreted in ways that are beneficial to their interests". (via Nik Fleming)




Nic Fleming, The dark truth about chocolate (Observer, 25 March 2018)

Natasha Lomas, DeepMind now has an AI ethics research unit. We have a few questions for it… (TechCrunch, 4 Oct 2017)


March 18, 2018

Security is downstream from strategy

Following @carolecadwalla's latest revelations about the misuse of personal data involving Facebook, she gets a response from Alex Stamos, Facebook's Chief Security Officer.

Hi, Carole. First off, I work on security, not strategy, and I agree that this is a serious issue. It's also a nuanced and difficult one, which is lost in headlines like this. — Alex Stamos (@alexstamos) March 17, 2018


So let's take a look at some of his hand-wringing Tweets.
I work on security not strategy. https://twitter.com/alexstamos/status/975049688847024128
This is a difficult issue. https://twitter.com/alexstamos/status/975049688847024128
I should have done a better job weighing in. https://twitter.com/alexstamos/status/975069709140877312
I've been trying to warn folks about this (relating to a different issue). https://twitter.com/alexstamos/status/974315632589025280
I just wish I was better about talking about these things (presumably in general). https://twitter.com/alexstamos/status/975070166127067136
I'm sure many security professionals would sympathize with this. Nobody listens to me. Strategy and innovation surge ahead, and security is always an afterthought.

According to his LinkedIn entry, Stamos joined Facebook in June 2015. Before that he had been Chief Security Officer at Yahoo!, which suffered a major breach on his watch in late 2014, affecting over 500 million user accounts. So perhaps a mere 50 million Facebook users having their data used for nefarious purposes doesn't really count as much of a breach in his book.

In one of her pieces today, Carole Cadwalladr quotes the Breitbart doctrine:
"politics is downstream from culture, so to change politics you need to change culture"
And culture eats strategy. And security is downstream from everything else. So much then for "by design and by default".
Facebook (and Google, too!) have great security teams. Some of the best in the business, no doubt. Full of conscientious people. But they can’t mitigate the business model. ¯\_(ツ)_/¯ — zeynep tufekci (@zeynep) March 17, 2018



Carole Cadwalladr, ‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower (Observer, 18 Mar 2018) via @BiellaColeman

Carole Cadwalladr and Emma Graham-Harrison, How Cambridge Analytica turned Facebook ‘likes’ into a lucrative political tool (Guardian, 17 Mar 2018)

Hannes Grassegger and Mikael Krogerus, The Data That Turned the World Upside Down (Motherboard, 28 Jan 2017) via @BiellaColeman

Justin Hendrix, Follow-Up Questions For Facebook, Cambridge Analytica and Trump Campaign on Massive Breach (Just Security, 17 March 2018)

Mattathias Schwartz, Facebook failed to protect 30 million users from having their data harvested by Trump campaign affiliate (The Intercept, 30 March 2017)


Wikipedia: Yahoo data breaches





March 9, 2018

Fail Fast - Burger Robotics

As @jjvincent observes, integrating robots into human jobs is tougher than it looks. Four days after it was installed in a Pasadena CA burger joint, Flippy the robot has been taken out of service for an upgrade. Turns out it wasn't fast enough to handle the demand. Does this count as Fail Fast?

Flippy's human minders have put a positive spin on the failure, crediting the presence of the robot for an unexpected increase in demand. As James wryly suggests, Flippy is primarily earning its keep as a visitor attraction.

If this is a failure at all, what kind of failure is it? Drawing on earlier work by James Reason, Phil Boxer distinguishes between errors of intention, planning and execution.

If the intention for the robot is to improve productivity and throughput at peak periods, then the designers have got more work to do. And the productivity-throughput problem may be broader than just burger flipping: making Flippy faster may simply expose a bottleneck somewhere else in the system. But if the intention for the robot is to attract customers, this is of greatest value at off-peak periods. In which case, perhaps the robot already works perfectly.
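The bottleneck point can be illustrated with some very simple arithmetic. In the sketch below the stage names and rates are invented purely for illustration, and have nothing to do with the real kitchen: the throughput of a serial line is capped by its slowest stage, so speeding up the grilling stage only helps until some other stage becomes the constraint.

```python
# Throughput of a serial line is limited by its slowest stage (burgers/hour).
# Stage names and rates are invented purely to illustrate the point.
def line_throughput(stage_rates):
    return min(stage_rates.values())

stages = {"prep": 200, "grill (Flippy)": 120, "assemble": 150, "serve": 180}
print("baseline:", line_throughput(stages), "burgers/hour")      # 120 - grilling is the constraint

stages["grill (Flippy)"] = 240                                    # make the robot twice as fast
print("faster robot:", line_throughput(stages), "burgers/hour")   # 150 - assembly is now the bottleneck
```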


Philip Boxer, ‘Unintentional’ errors and unconscious valencies (Asymmetric Leadership, 1 May 2008)

John Donohue, Fail Fast, Fail Often, Fail Everywhere (New Yorker, 31 May 2015)

Lora Kolodny, Meet Flippy, a burger-grilling robot from Miso Robotics and CaliBurger (TechCrunch 7 Mar 2017)

Brian Heater, Flippy, the robot hamburger chef, goes to work (TechCrunch, 5 March 2018)

James Vincent, Burger-flipping robot takes four-day break immediately after landing new job (Verge, 8 March 2018)





January 15, 2018

Carillion Struck By Lightning

@NilsPratley blames delusion in the boardroom (on a grand scale, he says) for Carillion's collapse. "In the end, it comes down to judgments made in the boardroom."

A letter to the editor of the Financial Times agrees.
"This situation has been caused, in part, by the unprofessional, fatalistic and blasé attitude to contract risk management of some senior executives in the UK construction industry."


By no means the first company brought low by delusion (I've talked some about Enron on this blog, as well as in my book on organizational intelligence), and probably not the last.

And given that Carillion was the beneficiary of some very large public sector contracts, we could also talk about delusion and poor risk management in government circles. As @econtratacion points out, "the public sector had had information pointing towards Carillion's increasingly dire financial situation for a while".



As it happens, the Home Secretary was at the London Stock Exchange today, talking to female executives about gender diversity at board level. So I thought I'd just check the gender make-up of the Carillion board. According to the Carillion website, there were two female executives and two female non-executive directors in a board of twelve.

In the future, Amber Rudd would like half of all directors to be female. An earlier Government-backed review had recommended that at least a third should be female by 2020.
Lombard: Sir Philip to review why so few female senior executives. He could use a beefed up UK's governance code to propel women to top jobs — kate burgess ft (@katebur95633594) February 9, 2016

But compared to other large UK companies, the Carillion gender ratio wasn't too bad. "On paper, the directors looked well qualified", writes Kate Burgess in the Financial Times, noting that "the board ticked all the boxes in terms of good governance". But now even the Institute of Directors has expressed belated concerns about the effective governance at Carillion, and Burgess says the board fell into what she calls "a series of textbook traps".

So what kind of traps were these? The board paid large dividends to the shareholders and awarded large bonuses to themselves and other top executives, despite the fact that key performance targets were not met, and there was a massive hole in the pension fund. In other words, they looked after themselves first and the shareholders second, and to hell with pensioners and other stakeholders. Meanwhile, Larry Elliott notes that the directors of the company took steps to shield themselves from financial risk. These are not textbook traps, they are not errors of judgement, they are moral failings.

Of course we shouldn't rely solely on the moral integrity of company executives. If there is no regulation or regulator able to prevent a board behaving in this way, this points to a fundamental weakness in the financial system as a whole.


There is a strong case that diversity mitigates groupthink - but as I've argued in my earlier posts, this needs to be real diversity, not just symbolic or imaginary diversity (ticking boxes). And even if having more women or ethnic minorities on the board might possibly reduce errors of judgement, women as well as men can have moral failings. It's as if we imagined that Ivanka Trump was going to be a wise and restraining influence on her father, simply because of her gender.

As it happens, the remuneration director at Carillion was a woman. We may never know whether she was coerced or misled by her fellow directors or whether she participated enthusiastically in the gravy. But we cannot say that having a woman in that position is automatically going to be better than having a man. Women on boards may be a necessary step, but it is not a sufficient one.




Martin Bentham, Amber Rudd: 'It makes no sense to have more men than women in the boardroom' (Evening Standard, 15 January 2018)

Mark Bull, A lesson on risk from Carillion’s collapse (FT Letters to the Editor, 16 January 2018)

Kate Burgess, Carillion’s board: misguided or incompetent? (FT, 17 January 2018) HT @AidanWard3

Larry Elliott, Four lessons the Carillion crisis can teach business, government and us (Guardian, 17 January 2018)

Vanessa Fuhrmans, Companies With Diverse Executive Teams Posted Bigger Profit Margins, Study Shows (WSJ, 18 January 2018)

Simon Goodley, Carillion's 'highly inappropriate' pay packets criticised (Guardian, 15 January 2018)

Nils Pratley, Blame the deluded board members for Carillion's collapse (Guardian, 15 January 2018)

Albert Sánchez-Graells, Some thoughts on Carillion's liquidation and systemic risk management in public procurement (15 January 2018)

Rebecca Smith, Women should hold one third of senior executive jobs at FTSE 100 firms by 2020, says Sir Philip Hampton's review (City Am, 6 November 2016)



Related posts

Explaining Enron (January 2010)
The Purpose of Diversity (January 2010)
Organizational Intelligence and Gender (October 2010)
Delusion and Diversity (October 2012)
Intelligence and Governance (February 2013)
More on the Purpose of Diversity (December 2014)


Updated 18 January 2018


November 24, 2017

Pax Technica - The Conference

#paxtechnica Today I was at the @CRASSHlive conference in Cambridge to hear a series of talks and panel discussions on The Implications of the Internet of Things. For a comprehensive account, see @LaurieJ's livenotes.

When I read Howard's book last week, I wondered why he had devoted so much of it to internet phenomena such as social media and junk news, when the notional topic of the book was the Internet of Things. His keynote address today made the connection much clearer. While social media provides data about attitudes and aspirations, the internet of things provides data about behaviour. When these different types of data are combined, this produces a much richer web of information.

For example, Howard mentioned a certain coffee company that wanted to use IoT sensors to track the entire coffee journey from farm to disposed cup. (Although another speaker expressed scepticism about the value of this data, arguing that most of the added value of IoT came from actuators rather than sensors.)

To the extent that the data involves personal information, this raises political concerns. Some of the speakers today spoke of surveillance capitalism, and there were useful talks on security and privacy, which I may cover in separate posts.

In his 2014 essay, Bruce Sterling characterizes the Internet of Things as "an epic transformation: all-purpose electronic automation through digital surveillance by wireless broadband". According to Sterling, powerful stakeholders like the slogan 'Internet of Things' "because it sounds peaceable and progressive".

Peaceable? Phil Howard uses the term Pax. This refers to a period in which the centre is stable and relatively peaceful, although the periphery may be marked by local skirmishes and violence (p7). His historical examples are the Pax Romana, the Pax Britannica and the Pax Americana. He argues that we are currently living in a similar period, which he calls Pax Technica.

For Howard, "a pax indicates a moment of agreement between government and the technology industry about a shared project and way of seeing the world" (p6). This seems akin to Gramsci's notion of cultural hegemony, "the idea that the ruling class can manipulate the value system and mores of a society, so that their view becomes the world view or Weltanschauung" (Wikipedia).

But whose tech? Howard has documented significant threats to democracy from foreign governments using social media bots to propagate junk news. There are widespread fears that this propaganda has had a significant effect on several recent elections. And if the Russians are often mentioned in the context of social media bots and junk news, the Chinese are often mentioned in the context of dodgy Internet of Things devices. While some political factions in the West are accused of collaborating with the Russians, and some commercial interests (notably pharma) may be using similar propaganda techniques, it seems odd to frame this as part of a shared project between government and the technology industry. Howard's research indicates a new technological cold war, in which techniques originally developed by the authoritarian regimes to control their own citizens are repurposed to undermine and destabilize democratic ones.

David Runciman talked provocatively about government of the things, by the things, for the things. (Someone from the audience linked this, perhaps optimistically, to Bruno Latour's Parliament of Things.) But Runciman's formulation foregrounds the devices (the "things") and overlooks the relationships behind the devices (the "internet of"). (This is related to Albert Borgmann's notion of the Device Paradigm.) As consumers we may spend good money on products with embedded internet-enabled devices, only to discover that these devices don't truly belong to us but remain loyal to their manufacturers. They monitor our behaviour, they may refuse to work with non-branded spare parts, or they may terminate service altogether. As Ian Steadman reports, it's becoming more and more common for everyday appliances to have features we don't expect. (Worth reading Steadman's article in full. He also quotes some prescient science fiction from Philip K Dick's 1969 novel Ubik.) "Very soon your house will betray you" warns architect Rem Koolhaas (Guardian 12 March 2014).

There are important ethical questions here, relating to non-human agency and the Principal-Agent problem.

But the invasion of IoT into our lives doesn't stop there. McGuirk worries that "our countless daily actions and choices around the house become what define us", and quotes a line from Dave Eggers' 2013 novel, The Circle

"Having a matrix of preferences presented as your essence, as the whole you? … It was some kind of mirror, but it was incomplete, distorted."
So personal identity and socioeconomic status may become precarious. This needs more thinking about. In the meantime, here is a quote from Teston.

"Wearable technologies ... are non-human actors that interact with other structural conditions to determine whose bodies count."

Related Posts

Pax Technica - The Book (November 2017)
Pax Technica - On Risk and Security (November 2017)


References

Dan Herman, Dave Eggers' "The Circle" — on tech, big data and the human component (Metaweird, Oct 2013)

Philip Howard, Pax Technica: How The Internet of Things May Set Us Free or Lock Us Up (Yale 2015)

Laura James, Pax Technica Notes (Session 1, Session 2, Session 3, Session 4)

Justin McGuirk, Honeywell, I’m Home! The Internet of Things and the New Domestic Landscape (e-flux #64 April 2015)

John Naughton, 95 Theses about Technology (31 October 2017)

Ian Steadman, Before we give doors and toasters sentience, we should decide what we're comfortable with first (New Statesman, 10 February 2015)

Bruce Sterling, The Epic Struggle of the Internet of Things (2014). Extract via BoingBoing (13 Sept 2014)

Christa Teston, Rhetoric, Precarity, and mHealth Technologies (Rhetoric Society Quarterly, 46:3, 2016) pp 251-268 

Wikipedia: Cultural Hegemony, Device Paradigm, Hegemony, Principal-Agent problem


April 9, 2017

Creative Tension in the White House

In his 1967 book on Organizational Intelligence, Harold Wilensky praises President Franklin Roosevelt for his unorthodox but apparently effective management style.
"Roosevelt devised an administrative structure that would baffle any conventional student of public administration." (p53)
. @tonyjoyce Roosevelt set up "constructive rivalry ... structuring work so that clashes would be certain". Wilensky on #orgintelligence — Richard Veryard (@richardveryard) April 8, 2017
A horrible management technique designed to keep your subordinates so busy fighting with each other they can't challenge you for leadership https://t.co/WSOiHagBOx — Jon H Ayre (@EnterprisingA) April 8, 2017


In contrast with FDR's approach, Wilensky notes some episodes where White House intelligence systems were not fit for purpose, including Korea (Truman) and the Bay of Pigs (Kennedy).

What about President Trump's approach? @tonyjoyce suggests that Trump is failing FDR's first construct - checking and balancing official intelligence vs unorthodox sources. However, Reuters (via the Guardian) quotes Republican strategist Charlie Black, who believes Trump’s White House reflects his traditional approach to running his business. “He’s always had a spokes-to-the-wheel management style,” said Black. “He wants people with differing views among the spokes.“


Sources

Reuters, Kushner and Bannon agree to 'bury the hatchet' after White House peace talks (Guardian, 9 April 2017)

Related posts

Delusion and Diversity (October 2010)
The Art of the New Deal - Trump and Intelligence (February 2017)
Another Update on Deconfliction (April 2017)
