Richard Veryard's Blog, page 6

May 14, 2019

Leadership versus Governance

@j2bryson has commented on her blog about the fate of Google's Advanced Technology External Advisory Council (ATEAC), to which she had been appointed.

She argues that the people who were appointed to the ATEAC were selected because they were "prominent" in the field. She notes that "although being prominent doesn't mean you're the best, it probably does mean you're at least pretty good, at least at something".

Ignoring the complexities of university politics, academics generally achieve prominence because they are pretty good at having interesting and original ideas, publishing papers and books, coordinating research, and supervising postgraduate work, as well as representing the field in wider social and intellectual forums (e.g. TED talks). Clearly that can be regarded as an important type of leadership.

Bryson argues that leading is about problem-solving. And clearly there are some aspects of problem-solving in what has brought her to prominence, although that's certainly not the whole story.

But that argument completely misses the point. The purpose of the ATEAC was not problem-solving. Google does not need help with problem-solving: it employs thousands of extremely clever people who spend all day solving problems (although it may sometimes need a bit of help in the diversity stakes).

The stated purpose of the ATEAC was to help Google implement its AI principles. In other words, governance.

When Google published its AI principles last year, the question everyone was asking was about governance:
@mer__edith (Twitter 8 June 2018, tweet no longer available) called for "strong governance, independent external oversight and clarity"
@katecrawford (Twitter 8 June 2018) asked "How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'. Are they... autonomous ethics?"
@EricNewcomer (Bloomberg 8 June 2018) asked "who decides if Google has fulfilled its commitments".
Google's appointment of an "advisory" council was clearly a half-hearted attempt to answer this question.

Bryson points out that Kay Coles James (the most controversial appointee) had some experience writing technology policy. But what a truly independent governance body needs is experience monitoring and enforcing policy, which is not the same thing at all.

People talk a lot about transparency in relation to technology ethics. Typically this refers to being able to "look inside" an advanced technological product, such as an algorithm or robot. But transparency is also about process and organization - the ability to scrutinize the risk assessment, the design, and any potential conflicts of interest. There are many people performing this kind of scrutiny on a full-time basis within large organizations or ecosystems, with far more experience of extremely large and complex development programmes than your average professor.

Had Google really wanted a genuinely independent governance body to scrutinize them properly, could they have appointed a different set of experts? And can people appointed and paid by Google ever be regarded as genuinely independent?

Veena Dubal suggests that the most effective governance over Google is currently coming from Google's own workforce. It seems that their protests were significant in getting Google to disband the ATEAC. Clearly the kind of courageous leadership demonstrated by people like Meredith Whittaker isn't just about problem-solving.



Joanna Bryson, What we lost when we lost Google ATEAC (7 April 2019), What leaders are actually for (13 May 2019)

Veena Dubal, Who stands between you and AI dystopia? These Google activists (The Guardian, 3 May 2019)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Kent Walker, An external advisory council to help advance the responsible development of AI (Google, 26 March 2019, updated 4 April 2019)


Related post: Data and Intelligence Principles From Major Players (June 2018)


April 28, 2019

Responsible Transparency

It is difficult to see how we can achieve an ethical technology without some kind of transparency, although we are still trying to work out how this could be achieved in an effective yet responsible manner. There are several concerns that are thought to conflict with transparency, including commercial advantage, security, privacy, and the risk of the device being misused or "gamed" by adversaries. There is a good summary of these issues in Mittelstadt et al (2016).

An important area where demands for transparency conflict with demands for confidentiality is with embedded software that serves the interests of the manufacturer rather than the consumer or the public. For example, a few years ago we learned about a "defeat device" that VW had built in order to cheat the emissions regulations; similar devices have been discovered in televisions to falsify energy consumption ratings.
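
To make the idea concrete, here is a purely schematic sketch of what defeat-device logic amounts to. It is not VW's actual code; the detection heuristic, thresholds and calibration names are invented for illustration. The point is how little code it takes for a product to behave differently when it thinks it is being watched.

```python
# Purely schematic sketch of defeat-device logic - not any manufacturer's actual code.
# A dynamometer test typically involves the driven wheels turning while the steering
# wheel stays centred and the vehicle does not actually move.

def looks_like_emissions_test(wheel_speed_kmh: float, steering_angle_deg: float, gps_speed_kmh: float) -> bool:
    """Crude, invented heuristic for detecting test-bench conditions."""
    return wheel_speed_kmh > 10 and abs(steering_angle_deg) < 1 and gps_speed_kmh < 1

def select_engine_calibration(wheel_speed_kmh: float, steering_angle_deg: float, gps_speed_kmh: float) -> str:
    if looks_like_emissions_test(wheel_speed_kmh, steering_angle_deg, gps_speed_kmh):
        return "low_emissions_map"   # clean calibration used only while under test
    return "normal_map"              # higher real-world emissions

print(select_engine_calibration(50, 0.2, 0.0))   # test-bench conditions -> "low_emissions_map"
print(select_engine_calibration(50, 8.5, 49.0))  # ordinary driving -> "normal_map"
```

This is exactly the kind of behaviour that is invisible to a consumer and very hard to spot without access to the software or extensive on-road testing.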

Even when the manufacturers aren't actually breaking the law, they have a strong commercial interest in concealing the purpose and design of these systems, and they use Digital Rights Management (DRM) and the US Digital Millennium Copyright Act (DMCA) to prevent independent scrutiny. In what appears to be an example of regulatory capture, car manufacturers were abetted by the US EPA, which was persuaded to inhibit transparency of engine software, on the grounds that this would enable drivers to cheat the emissions regulations.

Defending the EPA, David Golumbia sees a choice between two trust models, which he calls democratic and cyberlibertarian. For him, the democratic model "puts trust in bodies specifically chartered and licensed to enforce regulations and laws", such as the EPA, whereas in the cyberlibertarian model, it is the users themselves who get the transparency and can scrutinize how something works. In other words, trusting the wisdom of crowds, or what he patronizingly calls "ordinary citizen security researchers".

(In their book Trust and Mistrust, Aidan Ward and John Smith describe four types of trust. Golumbia's democratic model involves top-down trust, based on the central authority of the regulator, while the cyberlibertarian model involves decentralized network trust.)

Golumbia argues that the cyberlibertarian position is incoherent. 
"It says, on the one hand, we should not trust manufacturers like Volkswagen to follow the law. We shouldn’t trust them because people, when they have self-interest at heart, will pursue that self-interest even when the rules tell them not to. But then it says we should trust an even larger group of people, among whom many are no less self-interested, and who have fewer formal accountability obligations, to follow the law."
One problem with this argument is that it appears to confuse scrutiny with compliance. Cyberlibertarians may be strongly in favour of deregulation, but increasing transparency isn't only advocated by cyberlibertarians and doesn't necessarily imply deregulation. It could instead be based on a recognition that regulatory scrutiny and citizen scrutiny are complementary: regulators don't always spot everything, however powerful the tools at their disposal, and they are sometimes subject to improper influence from the companies they are supposed to be regulating (so-called regulatory capture). Having independent scrutiny as well as central regulation therefore increases the likelihood that hazards will be discovered and dealt with - including the detection of algorithmic bias or previously unidentified hazards, vulnerabilities or malpractice.
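
As an illustration of the kind of check an independent researcher might run, here is a minimal disparate-impact style comparison of outcome rates between two groups. It is a sketch, not a full fairness audit; the four-fifths threshold and the sample decisions are assumptions for illustration.

```python
# Minimal sketch of an external "disparate impact" check on an algorithm's decisions.
# The 0.8 threshold follows the common four-fifths rule of thumb; it is an assumption, not a legal test.

def selection_rate(decisions: list) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list, group_b: list) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

if __name__ == "__main__":
    approved_group_a = [True, True, False, True, True, False, True, True]    # 75% approved
    approved_group_b = [True, False, False, True, False, False, True, False] # 37.5% approved
    ratio = disparate_impact_ratio(approved_group_a, approved_group_b)
    print(f"impact ratio: {ratio:.2f}", "-> flag for scrutiny" if ratio < 0.8 else "-> within threshold")
```

A regulator might run something similar with better data and legal force; the value of citizen scrutiny is that many more such checks get run, in many more contexts.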

Another small problem with his argument is that the defeat device had already hoodwinked the EPA and other regulators for many years.

Golumbia claims that "what the cyberlibertarians want, even demand, is for everyone to have the power to read and modify the emissions software in their cars" and complains that "the more we put law into the hands of those not specifically entrusted to follow it, the more unethical behavior we will have". It is certainly true that some of the advocates of open source are also advocating "right to repair" and customization rights. But there were two separate requests for exemptions to DMCA - one for testing and one for modification. And the researchers quoted by Kyle Wiens, who were disadvantaged by the failure of the EPA to mandate a specific exemption to DMCA to allow safety and security tests, were not casual libertarians or "ordinary citizens" but researchers at the International Council of Clean Transportation and West Virginia University.

It ought to be possible for regulators and academic researchers to collaborate productively in scrutinizing an industry, provided that clear rules, protocols and working practices are established for responsible scrutiny. Perhaps researchers might gain some protection from regulatory action or litigation by notifying a regulator in advance, or by prompt notification of any discovered issues. For example, the UK Data Protection Act 2018 (section 172) defines what it calls "effectiveness testing conditions", under which researchers can legitimately attempt to crack the anonymity of deidentified personal data. Among other things, a successful attempt must be notified to the Information Commissioner within 72 hours.

Meanwhile, in the cybersecurity world there are fairly well-established protocols for responsible disclosure of vulnerabilities, and in some cases rewards are paid to the researchers who find them, provided they are disclosed responsibly. Although not all of us have the expertise to understand the technical detail, the existence of this kind of independent scrutiny should make us all feel more confident about the safety and reliability of the products in question.
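
As a rough sketch of how such protocols are often operationalised, a coordinated-disclosure policy can be expressed as a few dated checks. The 72-hour acknowledgement window and 90-day publication deadline below are assumptions for illustration, not the policy of any particular programme.

```python
# Sketch of a coordinated-disclosure timeline check. The 72-hour acknowledgement window and
# the 90-day publication deadline are illustrative assumptions, not any programme's actual policy.
from datetime import datetime, timedelta
from typing import Optional

ACK_WINDOW = timedelta(hours=72)          # how long the vendor has to acknowledge the report
DISCLOSURE_DEADLINE = timedelta(days=90)  # after this, the researcher publishes regardless

def disclosure_status(reported: datetime, acknowledged: Optional[datetime], now: datetime) -> str:
    if acknowledged is None and now - reported > ACK_WINDOW:
        return "vendor unresponsive - consider escalating via a coordinating body"
    if now - reported > DISCLOSURE_DEADLINE:
        return "deadline passed - publish details responsibly"
    return "within coordination window - keep details private"

print(disclosure_status(datetime(2019, 1, 1), None, datetime(2019, 1, 5)))                   # unresponsive vendor
print(disclosure_status(datetime(2019, 1, 1), datetime(2019, 1, 2), datetime(2019, 5, 1)))   # deadline passed
```

The detail matters less than the principle: the rules of engagement are explicit, dated and known to both sides in advance, which is precisely what is missing in many other domains of technology scrutiny.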



David Golumbia, The Volkswagen Scandal: The DMCA Is Not the Problem and Open Source Is Not the Solution (6 October 2015)

Brent Mittelstadt et al, The ethics of algorithms: Mapping the debate (Big Data and Society July–December 2016)

Jonathan Trull, Responsible Disclosure: Cyber Security Ethics (CSO Cyber Security Pulse, 26 February 2015)

Aidan Ward and John Smith, Trust and Mistrust (Wiley 2003)

Kyle Wiens, Opinion: The EPA shot itself in the foot by opposing rules that could've exposed VW (The Verge, 25 September 2015)


Related post: Defeating the Device Paradigm (October 2015)




April 23, 2019

Decentred Regulation and Responsible Technology

In 2001-2, Julia Black published some papers discussing the concept of Decentred Regulation, with particular relevance to the challenges of globalization. In this post, I shall summarize her position as I understand it, and apply it to the topic of responsible technology.

Black identifies a number of potential failures in regulation, which are commonly attributed to command and control (CAC) regulation - regulation by the state through the use of legal rules backed by (often criminal) sanctions.

instrument failure - the instruments used (laws backed by sanctions) are inappropriate and unsophisticated
information and knowledge failure - governments or other authorities have insufficient knowledge to be able to identify the causes of problems, to design solutions that are appropriate, and to identify non-compliance
implementation failure - implementation of the regulation is inadequate
motivation failure and capture theory - those being regulated are insufficiently inclined to comply, and those doing the regulating are insufficiently motivated to regulate in the public interest
For Black, decentred regulation represents an alternative to CAC regulation, based on five key challenges. These challenges echo the ideas of Michel Foucault around governmentality, which Isabell Lorey (2015, p23) defines as "the structural entanglement between the government of a state and the techniques of self-government in modern Western societies".

complexity - emphasising both causal complexity and the complexity of interactions between actors in society (or systems), which are imperfectly understood and change over time
fragmentation - of knowledge, and of power and control. This is not just a question of information asymmetry; no single actor has sufficient knowledge, or sufficient control of the instruments of regulation.
interdependencies - including the co-production of problems and solutions by multiple actors across multiple jurisdictions (and amplified by globalization)
ungovernability - Black explains this in terms of autopoiesis, the self-regulation, self-production and self-organisation of systems. As a consequence of these (non-linear) system properties, it may be difficult or impossible to control things directly
the rejection of a clear distinction between public and private - leading to rethinking the role of formal authority in governance and regulation
In response to these challenges, Black describes a form of regulation with the following characteristics:

hybrid - combining governmental and non-governmental actors
multifaceted - using a number of different strategies simultaneously or sequentially
indirect - this appears to link to what (following Teubner) she calls reflexive regulation - for example, setting the decision-making procedures within organizations in such a way that the goals of public policy are achieved
And she asks if it counts as regulation at all, if we strip away much of what people commonly associate with regulation, and if it lacks some key characteristics, such as intentionality or effectiveness. Does regulation have to be what she calls "cybernetic", which she defines in terms of three functions: standard-setting, information gathering and behaviour modification? (Other definitions of "cybernetic" are available, such as Stafford Beer's Viable Systems Model.)
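
Black's three "cybernetic" functions map quite naturally onto a feedback loop. Below is a minimal sketch of that loop; the function names, the numeric compliance measure and the intervention rule are my own illustrative assumptions, not anything taken from Black's papers.

```python
# Sketch of Black's three "cybernetic" regulatory functions as a simple control loop:
# standard-setting, information gathering and behaviour modification.
# All names and the numeric compliance measure are illustrative assumptions.

def regulate(standard: float, observe, intervene, cycles: int = 3) -> None:
    for cycle in range(cycles):
        observed = observe()                      # information gathering
        gap = standard - observed                 # compare behaviour against the standard
        if gap > 0:
            intervene(gap)                        # behaviour modification (sanctions, guidance, incentives)
        print(f"cycle {cycle}: observed={observed:.2f}, gap={max(gap, 0):.2f}")

# Toy regulated population whose compliance drifts upward when nudged.
state = {"compliance": 0.6}
regulate(
    standard=0.9,
    observe=lambda: state["compliance"],
    intervene=lambda gap: state.update(compliance=state["compliance"] + 0.5 * gap),
)
```

Decentred regulation complicates every arrow in this loop: nobody sets the standard alone, nobody sees all the information, and nobody controls the levers of behaviour modification.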

Meanwhile, how does any of this apply to responsible technology? Apart from the slogan, what I'm about to say would be true of any large technology company, but I'm going to talk about Google, for no other reason than its former use of the slogan "Don't Be Evil". (This is sometimes quoted as "Do No Evil", but for now I shall ignore the difference between being evil and doing evil.) What holds Google to this slogan is not primarily government regulation (mainly US and EU) but mostly an interconnected set of other forces, including investors, customers (much of its revenue coming from advertising), public opinion and its own workforce. Clearly these stakeholders don't all have the same view on what counts as Evil, or what would be an appropriate response to any specific ethical concern.

If we regard each of these stakeholder domains as a large-scale system, each displaying complex and sometimes apparently purposive behaviour, then the combination of all of them can be described as a system of systems. Mark Maier distinguished between three types of System of Systems (SoS), which he called Directed, Collaborative and Virtual; Philip Boxer identifies a fourth type, which he calls Acknowledged.

Directed - under the control of a single authority
Acknowledged - some aspects of regulation are delegated to semi-autonomous authorities, within a centrally planned regime
Collaborative - under the control of multiple autonomous authorities, collaborating voluntarily to achieve an agreed purpose
Virtual - multiple authorities with no common purpose
Black's notion of "hybrid" clearly moves from the Directed type to one of the other types of SoS. But which one? Where technology companies are required to interpret and enforce some rules, under the oversight of a government regulator, this might belong to the Acknowledged type. For example, social media platforms being required to enforce some rules about copyright and intellectual property, or content providers being required to limit access to those users who can prove they are over 18. (Small organizations sometimes complain that this kind of regime tends to favour larger organizations, which can more easily absorb the cost of building and implementing the necessary mechanisms.) 

However, one consequence of globalization is that there is no single regulatory authority. In Data Protection, for example, the tech giants are faced with different regulations in different jurisdictions, and can choose whether to adopt a single approach worldwide, or to apply the stricter rules only where necessary. (So for example, Microsoft has announced it will apply GDPR rules worldwide, while other technology companies have apparently migrated personal data of non-EU citizens from Ireland to the US in order to avoid the need to apply GDPR rules to these data subjects.)
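
These two approaches amount to a simple policy-selection choice: apply the strictest rule set everywhere, or apply only the local minimum in each jurisdiction. Here is a hedged sketch of that choice; the jurisdiction labels and rule attributes are illustrative assumptions and do not describe any company's actual implementation.

```python
# Sketch of the two compliance strategies described above: apply the strictest rule set everywhere,
# or apply only the minimum required in each jurisdiction. Rule sets and labels are illustrative.

RULES = {
    "EU": {"consent_required": True, "right_to_erasure": True, "breach_notification_hours": 72},
    "US": {"consent_required": False, "right_to_erasure": False, "breach_notification_hours": None},
}

def applicable_rules(jurisdiction: str, strategy: str) -> dict:
    if strategy == "strictest_everywhere":        # e.g. extending GDPR-style rights worldwide
        return RULES["EU"]
    if strategy == "local_minimum":               # apply only what each jurisdiction demands
        return RULES.get(jurisdiction, RULES["US"])
    raise ValueError(f"unknown strategy: {strategy}")

print(applicable_rules("US", "strictest_everywhere"))  # GDPR-style rights even outside the EU
print(applicable_rules("US", "local_minimum"))         # weaker local baseline
```

The second strategy is cheaper in the short term, but it also reveals how far a company's "principles" are driven by regulation rather than by any shared ethical purpose.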

But although the detailed rules on privacy and other ethical issues vary significantly between countries and jurisdictions, there is a reasonably broad acceptance of the principle that some privacy is probably a Good Thing. Similarly, although dozens of organizations have published rival sets of ethical principles for AI or robotics or whatever, there appears to be a fair amount of common purpose between them, indicating that all these organizations are travelling (or pretending to travel) in more or less the same direction. Therefore it seems reasonable to regard this as the Collaborative type.


Decentred regulation raises important questions of agency and purpose. And if it is to maintain relevance and effectiveness in a rapidly changing technological world, there needs to be some kind of emergent / collective intelligence conferring the ability to solve not only downstream problems (making judgements on particular cases) but also upstream problems (evolving governance principles and practices).




Julia Black, Decentring Regulation: Understanding the Role of Regulation and Self-Regulation in a ‘Post-Regulatory’ World (Current Legal Problems, Volume 54, Issue 1, 2001) pp 103–146

Julia Black, Decentred Regulation (LSE Centre for Analysis of Risk and Regulation, 2002)

Martin Innes, Bethan Davies and Morag McDermont, How Co-Production Regulates (Social and Legal Studies, 2008)

Mark W. Maier, Architecting Principles for Systems-of-Systems (Systems Engineering, Vol 1 No 4, 1998)

Isabell Lorey, State of Insecurity (Verso 2015)

Gunther Teubner, Substantive and Reflexive Elements in Modern Law (Law and Society Review, Vol. 17, 1983) pp 239-285

Wikipedia: Don't Be Evil


Related post: How Many Ethical Principles (April 2019)


April 20, 2019

Ethics committee raises alarm

Dr @BenGoldacre was the keynote speaker at an IT conference I attended recently. In the context of the growing interest in technology ethics, especially AI ethics, I asked him what IT could learn from medical ethics. He responded by criticising the role of the ethics committee, and mentioned a recent case in which an ethics committee had blocked an initiative that could have collected useful data concerning the effectiveness of statins. This is an example of what Goldacre calls the ethical paradox. As he wrote in 2008,
"You can do something as part of a treatment program, entirely on a whim, and nobody will interfere, as long as it’s not potty (and even then you’ll probably be alright). But the moment you do the exact same thing as part of a research program, trying to see if it actually works or not, adding to the sum total of human knowledge, and helping to save the lives of people you’ll never meet, suddenly a whole bunch of people want to stuck their beaks in."


Within IT, there is considerable controversy about the role of the ethics committee, especially after Google appointed and then disbanded its Ethics Board. In a recent article for Slate, @internetdaniel complains about company ethics boards offering “advice” rather than meaningful oversight, and calls this ethics theatre. @ruchowdh prefers to call it ethics washing.

So I was particularly interested to find a practical example of an ethics committee in action in this morning's Guardian. While the outcome of this case is not yet clear, there seem to be some positive indicators in @sloumarsh's report.

Firstly, the topic (predictive policing) is clearly an important and difficult one. It is not just about applying a simplistic set of ethics principles, but about balancing a conflicting set of interests and concerns. (As @oscwilliams reports, this topic has already got the attention of the Information Commissioner's Office.)

Secondly, the discussion is in the open, and the organization is making the right noises. “This is an important area of work, that is why it is right that it is properly scrutinised and those details are made public.” (This contrasts with some of the bad examples of medical ethics cited by Goldacre.)

Thirdly, the ethics committee is (informally) supported by a respected external body (Liberty), which adds weight to its concerns, and has helped bring the case to public attention. (Credit @Hannah_Couchman)

Fourthly, although the ethics committee mandate only applies to a single police force (West Midlands), its findings are likely to be relevant to other police forces across the UK. For those forces that do not have a properly established governance process of their own, the default path may be to follow the West Midlands example.


So it is possible (although not guaranteed) that this particular case may produce a reasonable outcome, with a valuable contribution from the ethics committee and its external supporters. But it is worrying if this is what it takes for governance to work, because this happy combination of positive indicators will not be present in most other cases.




Ben Goldacre, Where’s your ethics committee now, science boy? (Bad Science Blog, 23 February 2008), When Ethics Committees Kill (Bad Science Blog, 26 March 2011), Taking transparency beyond results: ethics committees must work in the open (Bad Science Blog, 23 September 2016)

Sarah Marsh, Ethics committee raises alarm over 'predictive policing' tool (The Guardian, 20 April 2019)

Daniel Susser, Ethics Alone Can’t Fix Big Tech (Slate, 17 April 2019)

Jane Wakefield, Google's ethics board shut down (BBC News, 5 April 2019)

Oscar Williams, Some of the UK’s biggest police forces are using algorithms to predict crime (New Statesman, 4 February 2019)




March 9, 2019

Upstream Ethics

We can roughly characterize two places where ethical judgements are called for, which I shall call upstream and downstream. There is some inconsistency about how these terms are used in the literature; here are my definitions.

I use the term upstream ethics to refer to
Establishing priorities and goals - for example, emphasising precaution and prevention
Establishing general principles, processes and practices
Embedding these in standards, policies and codes of practice
Enacting laws and regulations
Establishing governance - monitoring and enforcement
Training and awareness - enabling, encouraging and empowering people to pay due attention to ethical concerns
Approving and certifying technologies, products, services and supply chains

(Some people call these "pre-normative" ethics.)


I use the term downstream ethics to refer to
Eliciting values and concerns in a specific context as part of the requirements elicitation process
Detecting ethical warning signals
Applying, interpreting and extending upstream ethics to a specific case or challenge
Auditing compliance with upstream ethics
There is also a feedback and learning loop, where downstream issues and experiences are used to evaluate and improve the efficacy of upstream ethics.


Downstream ethics does not take place at a single point in time. I use the term early downstream to mean paying attention to ethical questions at an early stage in the process. Among other things, this may involve picking up early warning signals of potential ethical issues affecting a particular case. Early downstream means being ethically proactive - introducing responsibility by design - while late downstream means reacting to ethical issues only after they have been forced upon you by other stakeholders.

However, some writers regard what I'm calling early downstream as another type of upstream. Thus Ozdemir and Knoppers talk about Type 1 and Type 2 upstream. And John Paul Slosar writes

"Early identification of the ethical dimensions of person-centered care before the point at which one might recognize the presence of a more traditionally understood “ethics case” is vital for Proactive Ethics Integration or any effort to move ethics upstream. Ideally, there would be a set of easily recognizable ethics indicators that would signal the presence of an ethics issue before it becomes entrenched, irresolvable or even just obviously apparent."

For his part, as a lawyer specializing in medical technology, Christopher White describes upstream ethics as a question of confidence and supply - in other words, having some level of assurance about responsible sourcing and supply of component technologies and materials. He mentions a range of sourcing issues, including conflict minerals, human slavery, and environmentally sustainable extraction.

Extending this point, advanced technology raises sourcing issues not only for physical resources and components, but also for intangible inputs like data and knowledge. For example, medical innovation may be dependent upon clinical trials, while machine learning may be dependent on large quantities of training data. So there are important questions of upstream ethics as to whether these data were collected properly and responsibly, which may affect the extent to which these data can be used responsibly, or at all.

There is a trade-off between upstream effort and downstream effort. If you take more care upstream, you should hope to experience fewer difficulties downstream. Conversely, some people may wish to invest little or no time upstream, and face the consequences downstream. One way of thinking about responsibility is shifting the balance of effort and attention upstream. But obviously you can't work everything out upstream, so you will always have further stuff to do downstream.

So it's about getting the balance right, and joining the dots. Wherever we choose to draw the line between "upstream" and "downstream", with different institutional arrangements and mobilizing different modes of argumentation and evidence at different stages, "upstream" and "downstream" still need to be properly connected, as part of a single ethical system.



(In a separate post, Ethics - Soft and Hard, I discuss Luciano Floridi's use of the terms hard and soft ethics, which covers some of the same distinctions I'm making here but in a way I find more confusing.)

Vural Ozdemir and Bartha Maria Knoppers, One Size Does Not Fit All: Toward “Upstream Ethics”? (The American Journal of Bioethics, Volume 10 Issue 6, 2010) https://doi.org/10.1080/15265161.2010.482639

John Paul Slosar, Embedding Clinical Ethics Upstream: What Non-Ethicists Need to Know (Health Care Ethics, Vol 24 No 3, Summer 2016)

Christopher White, Looking the Other Way: What About Upstream Corporate Considerations? (MedTech, 29 Mar 2017)


March 3, 2019

Ethics and Uncertainty

How much knowledge is required, in order to make a proper ethical judgement?

Assuming that consequences matter, it would obviously be useful to be able to reason about the consequences. This is typically a combination of inductive reasoning (what has happened when people have done this kind of thing in the past) and predictive reasoning (what is likely to happen when I do this in the future).

There are several difficulties here. The first is the problem of induction - to what extent we can expect the past to be a guide to the future, and how relevant the available evidence is to the current problem. The evidence doesn't speak for itself; it has to be interpreted.

For example, when Stephen Jay Gould was informed that he had a rare cancer of the abdomen, the medical literature indicated that the median survival for this type of cancer was only eight months. However, his statistical analysis of the range of possible outcomes led him to the conclusion that he had a good chance of finding himself at the favourable end of the range, and in fact he lived for another twenty years until an unrelated cancer got him.
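
Gould's reasoning can be illustrated numerically. The sketch below assumes a lognormal survival distribution with parameters chosen so that the median comes out at roughly eight months; this is purely illustrative and is not Gould's actual data or analysis.

```python
# Illustration of "the median isn't the message": a right-skewed survival distribution
# with a median of about 8 months still leaves a meaningful chance of surviving for years.
# The lognormal shape and sigma are assumptions for illustration, not Gould's actual data.
import math
import random

random.seed(0)
MEDIAN_MONTHS = 8.0
SIGMA = 1.2  # larger sigma = heavier right tail

samples = [random.lognormvariate(math.log(MEDIAN_MONTHS), SIGMA) for _ in range(100_000)]
samples.sort()

median = samples[len(samples) // 2]
beyond_5_years = sum(s > 60 for s in samples) / len(samples)
print(f"median survival ~ {median:.1f} months")
print(f"share surviving beyond 5 years ~ {beyond_5_years:.1%}")
```

The headline number (the median) and the shape of the distribution carry very different messages, which is exactly why the evidence has to be interpreted rather than taken at face value.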

The second difficulty is that longer term consequences are harder to predict than short-term consequences. Even if we assume an unchanging environment, we usually don't have as much hard data about longer-term consequences.

For example, a clinical trial of a drug may tell us what happens when people take the drug for six months. But it will take a lot longer before we have a clear picture of what happens when people continue to take the drug for the rest of their lives.

This might suggest that we should be more cautious about actions with long-term consequences. But that is certainly not an excuse for inaction or procrastination. One tactic of Climate Sceptics is to argue that the smallest inaccuracy in any scientific projection of climate change invalidates both the truth of climate science and the need for action. But that's not the point. Gould's abdominal cancer didn't kill him - but only because he took action to improve his prognosis. Alexandria Ocasio-Cortez has recently started using the term Climate Delayers for those who find excuses for delaying action on climate change.

The third difficulty is that knowledge itself comes packaged in various disciplines or discourses. Medical ethics is dependent upon specialist medical knowledge, and technology ethics is dependent upon specialist technical knowledge. However, it would be wrong to judge ethical issues exclusively on the basis of this technical knowledge, and other kinds of knowledge (social, cultural or whatever) must also be given a voice. This probably entails some degree of cognitive diversity. Will Crouch also points out the uncertainty of predicting the values and preferences of future stakeholders.

The fourth difficulty is that there could always be more knowledge. This raises the question as to whether it is responsible to go ahead on the basis of our current knowledge, and how we can build in mechanisms to make future changes when more knowledge becomes available. Research may sometimes be a moral duty, as Tannert et al argue, but it cannot be an infinite duty.

The question of adequacy of knowledge is itself an ethical question. One of the classic examples in Moral Philosophy concerns a ship owner who sends a ship to sea without bothering to check whether the ship was sea-worthy. Some might argue that the ship owner cannot be held responsible for the deaths of the sailors, because he didn't actually know that the ship would sink. However, most people would see the ship owner having a moral duty of diligence, and would regard him as accountable for neglecting this duty.

But how can we know if we have enough knowledge? This raises the question of the "known unknowns" and "unknown unknowns", which is sometimes used with a shrug to imply that no one can be held responsible for the unknown unknowns.

The French psychoanalyst Jacques Lacan was interested in the opposition between impulsiveness and procrastination, and talks about three phases of decision-making: the instant of seeing (recognizing that some situation exists that calls for a decision), the time for understanding (assembling and analysing the options), and the moment to conclude (the final choice).

The purpose of Responsibility by Design is not just to prevent bad or dangerous consequences, but to promote good and socially useful consequences. The result of applying Responsibility by Design should not be reduced innovation, but better and more responsible innovation. The time for understanding should not be dragged on forever, there should always be a moment to conclude.



Matthew Cantor, Could 'climate delayer' become the political epithet of our times? (The Guardian, 1 March 2019)

Will Crouch, Practical Ethics Given Moral Uncertainty (Oxford University, 30 January 2012)

Stephen Jay Gould, The Median Isn't the Message (Discover 6, June 1985) pp 40–42.

Christof Tannert, Horst-Dietrich Elvers and Burkhard Jandrig, The ethics of uncertainty. In the light of possible dangers, research becomes a moral duty (EMBO Rep. 8(10) October 2007) pp 892–896

Stanford Encyclopedia of Philosophy: Consequentialism, The Problem of Induction

Wikipedia: There are known knowns 

The ship-owner example can be found in an essay called "The Ethics of Belief" (1877) by W.K. Clifford, in which he states that "it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence".

I describe Lacan's model of time in my book on Organizational Intelligence (Leanpub 2012)

Related posts: Ethics and Intelligence (April 2010), Practical Ethics (June 2018), Big Data and Organizational Intelligence (November 2018)




February 23, 2019

Ethics - Soft and Hard

Professor Luciano @Floridi has recently introduced the concept of Soft Ethics. He wants to make a distinction between ethical judgements that go into laws, regulations and other norms, which he calls Hard Ethics, and ethical judgements that apply and extend these codes of practice in practical situations, which he calls Soft Ethics. The latter he also calls Post-Compliance Ethics - in other words, what you should do over and above complying with all applicable laws and regulations.

The labels "hard" and "soft" are used in many different domains, and carry diverse connotations. Some readers may interpret Floridi's use of these labels as implying that the "hard" ethics are clear-cut while the "soft" ethics are more fuzzy. Others may think that "hard" means difficult or tough, and "soft" means easy or lax. But even if the laws themselves were unambiguous (which obviously they aren't, otherwise lawyers would be redundant), the thinking that goes into the law-making process is complex and polyvocal, and the laws themselves usually represent a compromise between the interests of different stakeholder groups, as well as practical considerations of enforcement. (Floridi refers to "lobbying in favour of some good legislation or to improve that which already exists" as an example of hard ethics.) For this reason, regulations such as GDPR tend to fall short of the grand ethical vision that motivated the initiative in the first place.

In some quarters, the term "pre-normative" is used for the conceptual (and sometimes empirical) work that goes into the formulation of law and regulations. However, this might confuse those philosophers familiar with Peirce's use of the term. So my own preference is for the term "upstream". See my post on Ethics as a Service (March 2018).

Floridi suggests that soft ethics are most relevant "in places of the world where digital regulation is already on the good side of the moral vs. immoral divide", and seems to think it would be a mistake to apply a soft ethics approach in places like the USA, where the current regulation is (in his opinion) not fit for purpose. But then what is the correct ethical framework for global players?

For example, in May 2018, Microsoft announced that it would extend the rights at the heart of GDPR to all of its consumer customers worldwide. In most of the countries in which Microsoft operates, including the USA, this is clearly over and above the demands of local law, and therefore counts as "Soft Ethics" under Floridi's schema. Unless we regard this announcement as a further move in Microsoft's ongoing campaign for national privacy legislation in the United States, in which case it counts as "Hard Ethics". At this point, we start to wonder how useful Floridi's distinction is going to be in practice.

At some point during 2018, Floridi was alerted to the work of Ronald Dworkin by an anonymous reviewer. He therefore inserted a somewhat puzzling paragraph into his second paper, attributing to Dworkin the notion that "legal judgment is and should be guided by principles of soft ethics" which are "implicitly incorporated in the law", while attributing to H.L.A. Hart the position that soft ethics are "external to the legal system and used just for guidance". But if soft ethics is defined as being "over and above" the existing regulation, Floridi appears to be closer to the position he attributes to Hart.

Of course, the more fundamental debate between Dworkin and Hart was about the nature of authority in legal matters. Hart took a position known as Legal Positivism, strongly rejected by Dworkin, in which the validity of law depended on social conventions and customs.
The legal system is norms all the way down, but at its root is a social norm that has the kind of normative force that customs have. It is a regularity of behavior towards which officials take 'the internal point of view': they use it as a standard for guiding and evaluating their own and others' behavior, and this use is displayed in their conduct and speech, including the resort to various forms of social pressure to support the rule and the ready application of normative terms such as 'duty' and 'obligation' when invoking it. (SEP: Legal Positivism)

For Floridi, the authority of the law appears to rely on what he calls a normative cascade. This is a closed loop in which Ethics constrains Law, Law constrains Business/Government, Business/Government constrains People (as consumers or citizens), and the People (by deciding in what society they wish to live) can somehow bring about changes in Ethics. Perhaps Professor Floridi can explain which portions of this loop are Hard and which are Soft?



Julie Brill, Microsoft’s commitment to GDPR, privacy and putting customers in control of their own data (Microsoft, 21 May 2018)

James Feibleman, A Systematic Presentation of Peirce's Ethics (Ethics, Vol. 53, No. 2, January 1943) pp. 98-109

Luciano Floridi, Soft Ethics and the Governance of the Digital (Philos. Technol. 31:1–8, 17 February 2018)

Luciano Floridi, Soft ethics, the governance of the digital and the General Data Protection Regulation (Philosophical Transactions of the Royal Society, Volume 376 Issue 2133, 15 November 2018)

Jack Hirshleifer, Capitalist Ethics--Tough or Soft? (The Journal of Law and Economics, Vol 2, October 1959), pp. 114-119

Bucks County Courier Times, Getting Tough on Soft Ethics (10 February 2015)

Stanford Encyclopedia of Philosophy: Legal Positivism


Related post: Ethics as a Service (March 2018)


February 16, 2019

Shoshana Zuboff on Surveillance Capitalism

@shoshanazuboff's latest book was published at the end of January. 700 pages of detailed research and analysis, and I've been trying to read as much as possible before everyone else. Meanwhile I have seen display copies everywhere - not just at the ICA bookshop which always has more interesting books than I shall ever have time to read, but also in my (excellent) local bookshop.

Although Zuboff spent much of her life at Harvard Business School, and has previously expressed optimism about technology (Distributed Capitalism, the Support Economy), she has form in criticizing the unacceptable face of capitalism (e.g. her 2009 critique of Wall Street). She now regards surveillance capitalism as "a profoundly undemocratic social force" (p 513), and in the passionate conclusion to her book I can hear echoes of Robert Burns's poem "Parcel of Rogues".
"Our lives are scraped and sold to fund their freedom and our subjugation, their knowledge and our ignorance about what they know." (p 498)

One of the key words in the book is "power", especially what she calls instrumentarian power. She describes the emergence of this kind of power as a bloodless coup, and makes a point that will be extremely familiar to readers of Foucault.
"Instead of violence directed at our bodies, the instrumentarian third modernity operates more like a taming. Its solution to the increasingly clamorous demands for effective life pivots on the gradual elimination of chaos, uncertainty, conflict, abnormality, and discord in favor of predictability, automatic regularity, transparency, confluence, persuasion and pacification." (p 515)
In Foucauldian language, this would be described as a shift from sovereign power to disciplinary power, which he describes in terms of the Panopticon. Although Zuboff discusses Foucault and the Information Panopticon at some length in her book on the Smart Machine, I couldn't find a reference to Foucault in her latest book, merely a very brief mention of the panopticon (pp 470-1). So for a fuller explanation of this concept, I turned to the Stanford Encyclopedia of Philosophy.
"Bentham’s Panopticon is, for Foucault, a paradigmatic architectural model of modern disciplinary power. It is a design for a prison, built so that each inmate is separated from and invisible to all the others (in separate “cells”) and each inmate is always visible to a monitor situated in a central tower. Monitors do not in fact always see each inmate; the point is that they could at any time. Since inmates never know whether they are being observed, they must behave as if they are always seen and observed. As a result, control is achieved more by the possibility of internal monitoring of those controlled than by actual supervision or heavy physical constraints." (SEP: Michel Foucault)
I didn't read the Smart Machine when it first came out, so the first time I saw the term "panopticon" applied to information technology was in Mark Poster's brilliant book The Mode of Information, which came out a couple of years later. Poster introduced the term Superpanopticon to describe the databases of his time, and his analysis seems uncannily accurate as a description of the present day.
"Foucault taught us to read a new form of power by deciphering discourse/practice formations instead of intentions of a subject or instrumental actions. Such a discourse analysis when applied to the mode of information yields the uncomfortable discovery that the population participates in its own self-constitution as subjects of the normalizing gaze of the Superpanopticon. We see databases not as an invasion of privacy, as a threat to a centred individual, but as the multiplication of the individual, the constitution of an additional self, one that may be acted upon to the detriment of the 'real' self without that 'real' self ever being aware of what is happening." (pp 97-8)

But the problem with invoking Foucault is that it appears to take agency away from the "parcel of rogues" - Zuckerberg, Bosworth, Nadella and the rest - who are the apparent villains of Zuboff's book. As I've pointed out elsewhere, the panopticon holds both the watcher and watched alike in its disciplinary power.

In his long and detailed review, Evgeny Morozov thinks the book owes more to Alfred Chandler, advocate of managerial capitalism, than to Foucault. (Even though Zuboff seems no longer to believe in the possibility of return to traditional managerial capitalism, and the book ends by taking sides with George Orwell in his strong critique of James Burnham, an earlier advocate of managerial capitalism.)


Meanwhile, there is another French thinker who may be haunting Zuboff's book, thanks to her adoption of the term Big Other, usually associated with Jacques Lacan. Jörg Metelmann suggests that Zuboff's use of the term "Big Other" corresponds (to a great extent, he says) to Lacan and Slavoj Žižek's psychoanalysis, but I'm not convinced. I suspect she may have selected the term "Big Other" (associated with Disciplinary Power) not as a conscious reference to Lacanian theory but because it rhymed with the more familiar "Big Brother" (associated, at least in Orwell's novel, with Sovereign Power).

Talking of "otherness", Peter Benson explains how the Amazon Alexa starts to be perceived, not as a mere object but as an Other.
"(We) know perfectly well that she is an electronic device without consciousness, intentions, or needs of her own. But behaving towards Alexa as a person becomes inevitable, because she is programmed to respond as a person might, and our brains have evolved to categorize such a being as an Other, so we respond to her as a person. We can resist this categorization, but, as with an optical illusion, our perception remains unchanged even after it has been explained. The stick in water still looks bent, even though we know it isn’t. Alexa’s personhood is exactly such a psychological illusion."
But much as we may desire to possess this mysterious black tube, regarding Alexa as an equal partner in dialogue, almost a mirror of ourselves, the reality is that this black tube is just one of many endpoints in an Internet of Things consisting of millions of similar black tubes and other devices, part of an all-pervasive Big Other. Zuboff sees the Big Other as an apparatus, an instrument of power, "the sensate computational, connected puppet that renders, monitors, computes and modifies human behavior" (p 376).

The Big Other possesses what Zuboff calls radical indifference - it monitors and controls human behaviour while remaining steadfastly indifferent to the meaning of that experience (pp 376-7). She quotes an internal Facebook memo by Andrew "Boz" Bosworth advocating moral neutrality (pp 505-6). (For what it's worth, radical indifference is also celebrated by Baudrillard.)

She also refers to this as observation without witness. This can be linked to Henry Giroux's notion of disimagination, the internalization of surveillance.
"I argue that the politics of disimagination refers to images, and I would argue institutions, discourses, and other modes of representation, that undermine the capacity of individuals to bear witness to a different and critical sense of remembering, agency, ethics and collective resistance. The 'disimagination machine' is both a set of cultural apparatuses extending from schools and mainstream media to the new sites of screen culture, and a public pedagogy that functions primarily to undermine the ability of individuals to think critically, imagine the unimaginable, and engage in thoughtful and critical dialogue: put simply, to become critically informed citizens of the world."
(Just over a year ago, I managed to catch a rare performance of A Machine They're Secretly Building, which explores some of these ideas in a really interesting way. Strongly recommended. Check the proto_type website for UK tour dates.)

Zuboff's book concentrates on the corporate side of surveillance, although she does mention the common interest (elective affinity p 115) between the surveillance capitalists and the public security forces around the war on terrorism. She also mentions the increased ability of political actors to use the corporate instruments for political ends. So a more comprehensive genealogy of surveillance would have to trace the shifting power relations between corporate power, government power, media power and algorithmic power.

A good example of this kind of exploration took place at the PowerSwitch conference in March 2017, where I heard Ariel Ezrachi (author of a recent book on the Algorithm-Driven Economy) talking about "the end of competition as we know it" (see links below to video and liveblog).

But obviously there is much more on this topic than can be covered in one book, and Shoshana Zuboff has made a valuable contribution - both in terms of the weight of evidence she has assembled and also in terms of bringing these issues to a wider audience.



Peter Benson, The Concept of the Other from Kant to Lacan (Philosophy Now 127, August/September 2018)

Ariel Ezrachi and Maurice Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press, 2016) - more links via publisher's page 

Henry A. Giroux, The Politics of Disimagination and the Pathologies of Power (Truth Out, 27 February 2013)

Jörg Metelmann, Screening Surveillance Capitalism, in Daniel Cuonz, Scott Loren, Jörg Metelmann (eds) Screening Economies: Money Matters and the Ethics of Representation (transcript Verlag, 2018)

Evgeny Morozov, Capitalism’s New Clothes (The Baffler, 4 February 2019)

Mark Poster, The Mode of Information (Polity Press, 1990). See also notes in Donko Jeliazkov and Roger Blumberg, Virtualities (PDF, undated)

Shoshana Zuboff, In The Age of the Smart Machine (1988)

Shoshana Zuboff, Wall Street's Economic Crimes Against Humanity (Bloomberg, 20 March 2009)

Shoshana Zuboff, A Digital Declaration (Frankfurter Allgemeine Zeitung, 15 September 2014)

Shoshana Zuboff, The Age of Surveillance Capitalism (UK Edition: Profile Books, 2019)

Wikipedia: Panopticism

Stanford Encyclopedia of Philosophy: Jacques Lacan, Michel Foucault

CRASSH PowerSwitch Conference (Cambridge, 31 March 2017) Panel 4: Algorithmic Power (via YouTube). See also liveblog by Laura James Power Switch - Conference Report.

Related posts: The Price of Everything (May 2017), Witnessing Machines Built in Secret (November 2017), Big Data and Organizational Intelligence (November 2018) 


November 18, 2018

Ethics in Technology - FinTech

Last Thursday, @ThinkRiseLDN (Rise London, a FinTech hub) hosted a discussion on Ethics in Technology (15 November 2018).

Since many of the technologies under discussion are designed to support the financial services industry, the core ethical debate is strongly correlated to the business ethics of the finance sector and is not solely a matter of technology ethics. But like most other sectors, the finance sector is being disrupted by the opportunities and challenges posed by technological innovation, and this entails a professional and moral responsibility on technologists to engage with a range of ethical issues.

(Clearly there are many ethical issues in the financial services industry besides technology. For example, my friends in the @LongFinance initiative are tackling the question of sustainability.)

The Financial Services industry has traditionally been highly regulated, although some FinTech innovations may be less well regulated for now. So people working in this sector may expect regulation - specifically principles-based regulation - to play a leading role in ethical governance. (Note: the UK Financial Services Authority has been pursuing a principles-based regulation strategy for over ten years.)

Whether ethical questions can be reduced to a set of principles or rules is a moot point. In medical ethics, principles are generally held to be useful but not sufficient for resolving difficult ethical problems. (See McCormick for a good summary. See also my post on Practical Ethics.)

Nevertheless, there are undoubtedly some useful principles for technology ethics. For example, the principle that you can never foresee all the consequences of your actions, so you should avoid making irreversible technological decisions. In science fiction, this issue can be illustrated by a robot that goes rogue and cannot be switched off. @moniquebachner made the point that with a technology like Blockchain, you were permanently stuck, for good or ill, with your original design choices.
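
The point about being permanently stuck with early choices follows from the data structure itself. The sketch below is a minimal hash chain, not any real blockchain implementation, showing why an early record (and any design choice embedded in it) cannot be quietly revised later without breaking every subsequent link.

```python
# Minimal hash-chain sketch (not a real blockchain): each block commits to the previous block's hash,
# so altering any earlier record - or an early design choice embedded in it - is immediately detectable.
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads: list) -> list:
    chain, prev = [], "0" * 64
    for payload in payloads:
        h = block_hash(prev, payload)
        chain.append({"prev": prev, "payload": payload, "hash": h})
        prev = h
    return chain

def verify(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["prev"], block["payload"]):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["genesis: design choices fixed here", "tx: A pays B", "tx: B pays C"])
print(verify(chain))                                   # True
chain[0]["payload"] = "genesis: quietly revised design"
print(verify(chain))                                   # False - the tampering breaks the chain
```

Immutability is usually presented as a feature, but ethically it cuts both ways: mistakes, as well as transactions, become very hard to undo.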

Several of the large tech companies have declared principles for data and intelligence. (My summary here.) But declaring principles is the easy bit; these companies taking them seriously (or us trusting them to take them seriously) may be harder.

One of the challenges discussed by the panel was how to negotiate the asymmetry of power. If your boss or your client wants to do something that you are uncomfortable with, you can't just assert some ethical principles and expect her to change her mind. So rather than walk away from an interesting technical challenge, you give yourself an additional organizational challenge - how to influence the project in the right way, without sacrificing your own position.

Obviously that's an ethical dilemma in its own right. Should you compromise your principles in the hope of retaining some influence over the outcome, or could you persuade yourself that the project isn't so bad after all? There is an interesting play-off between individual responsibility and collective responsibility, which we are also seeing in politics (Brexit passim).

Sheryl Sandberg appears to offer a high-profile example of this ethical dilemma. She had been praised by feminists for being "the one reforming corporate boy’s club culture from the inside ... the civilizing force barely keeping the organization from tipping into the abyss of greed and toxic masculinity." Crispin now disagrees with this view. "It seems clear what Sandberg truly is instead: a team player. And her team is not the working women of the world. It is the corporate culture that has groomed, rewarded, and protected her throughout her career." "This is the end of corporate feminism", comments @B_Ehrenreich.

And talking of Facebook ...

The title of Cathy O'Neil's book Weapons of Math Destruction invites a comparison between the powerful technological instruments now in the hands of big business, and the arsenal of nuclear and chemical weapons that have been a major concern of international relations since the Second World War. During the so-called Cold War, these weapons were largely controlled by the two major superpowers, and it was these superpowers that dominated the debate. As these weapons technologies have proliferated however, attention has shifted to the possible deployment of these weapons by smaller countries, and it seems that the world has become much more uncertain and dangerous.

In the domain of data ethics, it is the data superpowers (Facebook, Google) that command the most attention. But while there are undoubtedly major concerns about the way these companies use their powers, we may at least hope that a combination of forces may help to moderate the worst excesses. Besides regulatory action, these forces might include public opinion, corporate risk aversion from the large advertisers that provide the bulk of the income, as well as pressure from their own employees.

And in FinTech as with Data Protection, it will always be easier for regulators to deal with a small number of large players than with a very large number of small players. The large players will of course try to lobby for regulations that suit them, and may shift some operations into less strongly regulated jurisdictions, but in the end they will be forced to comply, more or less. Except that the ethically dubious stuff will always turn out to be led by a small company you've never heard of, and the large players will deny that they knew anything about it.

As I pointed out in my previous post on The Future of Political Campaigning, the regulators only have limited tools at their disposal, which slants how they are able to deal with the ethical ecosystem as a whole. If I had a hammer ...



Financial Services Authority, Principles-Based Regulation - Focusing on the Outcomes that Matter (FSA, April 2007)

Jessa Crispin, Feminists gave Sheryl Sandberg a free pass. Now they must call her out (Guardian, 17 November 2018)

Ian Harris, Commercial Ethics: Process or Outcome (Z/Yen, 2008)

Thomas R. McCormick, Principles of Bioethics (University of Washington, 2013)

Chris Yapp, Where does the buck stop now? (Long Finance, 28 October 2018)


Related posts: Practical Ethics (June 2018), Data and Intelligence Principles from Major Players (June 2018), The Future of Political Campaigning (November 2018)


November 17, 2018

The Future of Political Campaigning

#democracydisrupted Last Tuesday, @Demos organized a discussion on The Future of Political Campaigning (13 November 2018). The panelists included the Information Commissioner (@ElizabethDenham) and the CEO of the Electoral Commission (@ClaireERBassett).

The presenting problem is social and technological changes that disrupt the democratic process and some of the established mechanisms and assumptions that are supposed to protect it. Recent elections (including the Brexit referendum) have featured new methods of campaigning and new modes of propaganda. Voters are presented with a wealth of misinformation and disinformation on the Internet, while campaigners have new tools for targeting and influencing voters.

The regulators have some (limited) tools for dealing with these changes. The ICO can deal with organizations that misuse personal data, while the Electoral Commission can deal with campaigns that are improperly funded. But while the ICO in particular is demonstrating toughness and ingenuity in using the available regulatory instruments to maximum effect, these instruments are only indirectly linked to the problem of political misinformation. Bad actors in future will surely find new ways to achieve unethical political ends, out of the reach of these regulatory instruments.

@Jphsmith compared selling opposition to the "Chequers" Brexit deal with selling waterproof trousers. But if the trousers turn out not to be waterproof, there is legal recourse for the purchaser. Whereas there appears to be no direct accountability for political misinformation and disinformation. The ICO can deal with organizations that misuse personal data: that’s the main tool they’ve been provided with. What tool do they have for dealing with propaganda and false promises? Where is the small claims court I can go to when I discover my shiny new Brexit doesn’t hold water? (Twitter thread)

As I commented in my question from the floor, for the woman with a hammer, everything looks like a nail. Clearly misuse of data and illegitimate sources of campaign finance are problems, but they are not necessarily the main problem. And if the government and significant portions of the mass media (including the BBC) don't give these problems much airtime, downplay their impact on the democratic process, and (disgracefully) disparage and discredit those journalists who investigate them, notably @carolecadwalla, there may be insufficient public recognition of the need for reform, let alone enhanced and updated regulation. If antidemocratic forces are capable of influencing elections, they are surely also capable of persuading the man in the street that there is nothing to worry about.


Carole Cadwalladr, Why Britain Needs Its Own Mueller (NYR Daily, 16 November 2018)

Nick Raikes, Online security and privacy: What an email address reveals (BBC News, 13 November 2018)

Josh Smith, A nation of persuadables: politics and campaigning in the age of data (Demos, 13 November 2018)

Jim Waterson, BBC women complain after Andrew Neil tweet about Observer journalist (Guardian, 16 November 2018)


Related post: Ethical Communication in a Digital Age (November 2018)
