Richard Veryard's Blog, page 5
September 16, 2019
The Ethics of Diversion - Tobacco Example
What are the ethics of diverting people from smoking to vaping?
On the one hand, we have the following argument.
E-cigarettes ("vaping") offer a plausible substitute for smoking cigarettes.Smoking is dangerous, and vaping is probably much less dangerous.Many smokers find it difficult to give up, even if they are motivated to do so. So vaping provides a plausible exit route.Observed reductions in the level of smoking can be partially attributed to the availability of alternatives such as vaping. (This is known as the diversion hypothesis.) It is therefore justifiable to encourage smokers to switch from cigarettes to e-cigarettes.
Critics of this argument make the following points.
While the dangers of smoking are now well-known, some evidence is now emerging to suggest that vaping may also be dangerous. In the USA, a handful of people have died and hundreds have been hospitalized.
While some smokers may be diverted to vaping, there are also concerns that vaping may provide an entry path to smoking, especially for young people. This is known as the gateway or catalyst hypothesis.
Some defenders of vaping blame the potential health risks and the gateway effect not on vaping itself but on the wide range of flavours that are available. While these may increase the attraction of vaping to children, the flavour ingredients are chemically unstable and may produce toxic compounds. For this reason, President Trump has recently proposed a ban on flavoured e-cigarettes.
And elsewhere in the world, significant differences in regulation are emerging between countries. While some countries are looking to ban e-cigarettes altogether, the UK position (as presented by Public Health England and the MHRA) is to encourage e-cigarettes as a safe alternative to smoking. At some point in the future, presumably, UK data can be compared with data from other countries to provide evidence for or against the UK position. Professor Simon Capewell of Liverpool University (quoted in the Observer) calls this a "bizarre national experiment".
While we await convincing data about outcomes, ethical reasoning may appeal to several different principles.
Firstly, the minimum interference principle. In this case, this means not restricting people's informed choice without good reason.
Secondly, the utilitarian principle. The benefit of helping a large number of people to reduce a known harm outweighs the possibility of causing a lesser but unknown harm to a smaller number of people.
Thirdly, the precautionary principle. Even if vaping appears to be safer than traditional smoking, Professor Capewell reminds us of other things that were assumed to be safe - until we discovered that they weren't safe at all.
And finally, the conflict of interest principle. Elliott Reichardt, a researcher at the University of Calgary and a campaigner against vaping, argues that any study, report or campaign funded by the tobacco industry should be regarded with some suspicion.
Allan M. Brandt, Inventing Conflicts of Interest: A History of Tobacco Industry Tactics (Am J Public Health 102(1) January 2012) 63–71
Jamie Doward, After six deaths in the US and bans around the world – is vaping safe? (Observer, 15 September 2019)
David Heath, Contesting the Science of Smoking (Atlantic, 4 May 2016)
Levy DT, Warner KE, Cummings KM, et al, Examining the relationship of vaping to smoking initiation among US youth and young adults: a reality check (Tobacco Control 20 November 2018)
Elliott Reichardt and Juliet Guichon, Vaping is an urgent threat to public health (The Conversation, 13 March 2019)
Published on September 16, 2019 09:40
August 31, 2019
The Ethics of Disruption
In a recent commentary on #Brexit, Simon Jenkins notes that
"disruption theory is much in vogue in management schools, so long as someone else suffers".
Here is Bruno Latour making the same point.
"Don't be fooled for a second by those who preach the call of wide-open spaces, of 'risk-taking', those who abandon all protection and continue to point at the infinite horizon of modernization for all. Those good apostles take risks only if their own comfort is guaranteed. Instead of listening to what they are saying about what lies ahead, look instead at what lies behind them: you'll see the gleam of the carefully folded golden parachutes, of everything that ensures them against the random hazards of existence." (Down to Earth, p 11)
Anyone who advocates "moving fast and breaking things" is taking an ethical position: namely that anything fragile enough to break deserves to be broken. This position is similar to the economic view that companies and industries that can't compete should be allowed to fail.
This position may be based on a combination of specific perceptions and general observations. The specific perception is when something is weak or fragile, protecting and preserving it consumes effort and resources that could otherwise be devoted to other more worthwhile purposes, and makes other things less efficient and effective. The general observation is that when something is failing, efforts to protect and preserve it may merely delay the inevitable collapse.
These perceptions and observations rely on a particular worldview or lens, in which things can be perceived as successful or otherwise, independent of other things. As Gregory Bateson once remarked (via Tim Parks),
"There are times when I catch myself believing there is something which is separate from something else."Perceptions of success and failure are also dependent on timescale and time horizon. The dinosaurs ruled the Earth for 140 million years.
There may also be strong opinions about which things get protection and which don't. For example, some people may think it is more important to support agriculture or to rescue failing banks than to protect manufacturers. On the other hand, there will always be people who disagree with the choices made by governments on such matters, and who will conclude that the whole project of protecting some industry sectors (and not others) is morally compromised.
Furthermore, the idea that some things are "too big to fail" may also be problematic, because it implies that small things don't matter so much.
A common agenda of the disruptors is to tear down perceived barriers, such as regulations. This is subject to the fallacy that Chesterton's Fence warns against: assuming that anything whose purpose is not immediately obvious must be redundant.
Simon Jenkins, Boris Johnson and Jeremy Hunt will have to ditch no deal – or face an election (Guardian, 28 June 2019)
Bruno Latour, Down to Earth: Politics in the New Climatic Regime (Polity Press, 2018)
Tim Parks, Impossible Choices (Aeon, 15 July 2019)
Rory Sutherland, Chesterton’s fence – and the idiots who rip it out (Spectator, 10 September 2016)
Related posts: Shifting Paradigms and Disruptive Technology (September 2008), Arguments from Nature (December 2010), Low-Hanging Fruit (August 2019)
Published on August 31, 2019 02:20
August 22, 2019
Low-Hanging Fruit
August comes around again, and there are ripe blackberries in the hedgerows. One of the things I was taught at an early age was to avoid picking berries that were low enough to be urinated on by animals. (Or humans for that matter.) So I have always regarded the "low hanging fruit" metaphor with some distaste.
In business, "low hanging fruit" sometimes refers to an easy and quick improvement that nobody has previously spotted.
Which is of course perfectly possible. A new perspective can often reveal new opportunities.
But often the so-called low hanging fruit were already obvious, so pointing them out just makes you sound as if you think you are smarter than everyone else. And if they haven't already been harvested, there may be reasons or context you don't know about.
A lot of best practices and checklists are based on the simple and obvious. Which is fine as far as it goes, but it's not very innovative, and it won't take you from Best Practice to Next Practice.
So as I pointed out in my post on the Wisdom of the Iron Age, nobody should ever be satisfied with the low hanging fruit. The only purpose of the low-hanging fruit is to get us started, to feed us and motivate us as we build ladders, so we can reach the high-hanging fruit.
In business, "low hanging fruit" sometimes refers to an easy and quick improvement that nobody has previously spotted.
Which is of course perfectly possible. A new perspective can often reveal new opportunities.
But often the so-called low hanging fruit were already obvious, so pointing them out just makes you sound as if you think you are smarter than everyone else. And if they haven't already been harvested, there may be reasons or context you don't know about.
A lot of best practices and checklists are based on the simple and obvious. Which is fine as far as it goes, but not very innovative, won't take you from Best Practice to Next Practice.
So as I pointed out in my post on the Wisdom of the Iron Age, nobody should ever be satisfied with the low hanging fruit. The only purpose of the low-hanging fruit is to get us started, to feed us and motivate us as we build ladders, so we can reach the high-hanging fruit.
Published on August 22, 2019 07:32
August 8, 2019
Automation Ethics
Many people start their journey into the ethics of automation and robotics by looking at Asimov's Laws of Robotics.
"A robot may not injure a human being or, through inaction, allow a human being to come to harm" (etc. etc.)
As I've said before, I believe Asimov's Laws are problematic as a basis for ethical principles, given that Asimov's stories demonstrate numerous ways in which the Laws don't actually work as intended. I have always regarded Asimov's work as satirical rather than prescriptive.
While we usually don't want robots to harm people (although some people may argue for this principle to be partially suspended in the event of a "just war"), notions of harm are not straightforward. For example, a robot surgeon would have to cut the patient (minor harm) in order to perform an essential operation (major benefit). How essential or beneficial does the operation need to be, in order to justify it? Is the patient's consent sufficient?
Harm can be individual or collective. One potential harm from automation is that even if it creates wealth overall, it may shift wealth and employment opportunities away from some people, at least in the short term. But perhaps this can be justified in terms of the broader social benefit, or in terms of technological inevitability.
And besides the avoidance of (unnecessary) harm, there are some other principles to think about.
Human-centred work - Humans should be supported by robots, not the other way around.
Whole system solutions - Design the whole system or process, don't just optimize a robot as a single component.
Self-correcting - Ensure that the system is capable of detecting and learning from errors.
Open - Providing space for learning and future disruption. Don't just pave the cow-paths.
Transparent - The internal state and decision-making processes of a robot are accessible to (some) users.
Let's look at each of these in more detail.
Human-Centred Work
Humans should be supported by robots, not the other way around. So we don't just leave humans to handle the bits and pieces that can't be automated, but try to design coherent and meaningful jobs for humans, with robots to make them more powerful, efficient, and effective.
Organization theorists have identified a number of job characteristics associated with job satisfaction, including skill variety, task identity, task significance, autonomy and feedback. So we should be able to consider how a given automation project affects these characteristics.
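To make this slightly more concrete, here is a minimal sketch (in Python) of how an automation project might be scored against those characteristics before and after the change, using the Motivating Potential Score from the Job Characteristic Theory linked below. The ratings in the example are purely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class JobProfile:
    """Ratings (1-7) for the five core job characteristics."""
    skill_variety: float
    task_identity: float
    task_significance: float
    autonomy: float
    feedback: float

    def motivating_potential(self) -> float:
        # Hackman & Oldham's Motivating Potential Score:
        # MPS = mean(variety, identity, significance) * autonomy * feedback
        meaningfulness = (self.skill_variety + self.task_identity + self.task_significance) / 3
        return meaningfulness * self.autonomy * self.feedback

# Hypothetical ratings for the same job before and after an automation project
before = JobProfile(skill_variety=5, task_identity=4, task_significance=6, autonomy=5, feedback=3)
after = JobProfile(skill_variety=2, task_identity=2, task_significance=6, autonomy=3, feedback=5)

print(f"MPS before automation: {before.motivating_potential():.1f}")
print(f"MPS after automation:  {after.motivating_potential():.1f}")
```

A sharp drop in the score would at least prompt a conversation about whether the automated job is still a coherent and meaningful one.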
Whole Systems
When we take an architectural approach to planning and designing new technology, we can look at the whole system rather than merely trying to optimize a single robotic component.
Look across the business and technology domains (e.g. POLDAT).
Look at the total impact of a collection of automated devices, not at each device separately.
Look at this as a sociotechnical system, involving humans and robots collaborating on the business process.
Self-Correcting
Ensure that the (whole) system is capable of detecting and learning from errors (including near misses).
This typically requires a multi-loop learning process. The machines may handle the inner learning loops, but human intervention will be necessary for the outer loops.
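A minimal sketch of how that division of labour might look in code: the automated handler deals with routine cases (the inner loop), while errors and low-confidence near misses are queued for human review (the outer loop). The class names, thresholds and fields are illustrative assumptions, not a reference design.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Case:
    case_id: str
    data: dict
    confidence: float = 1.0   # how sure the system is about its own output
    outcome: str = "pending"

@dataclass
class LearningSystem:
    handler: Callable[[Case], Case]        # the automated step (inner loop)
    review_queue: List[Case] = field(default_factory=list)
    confidence_threshold: float = 0.8      # illustrative near-miss threshold

    def process(self, case: Case) -> Case:
        result = self.handler(case)        # inner loop: machine handles the case
        if result.outcome == "error" or result.confidence < self.confidence_threshold:
            # outer loop: escalate errors and near misses for human review,
            # so the system (and its designers) can learn from them
            self.review_queue.append(result)
        return result

    def human_review(self, decide: Callable[[Case], str]) -> None:
        # outer loop: humans examine escalated cases and feed corrections back
        for case in self.review_queue:
            case.outcome = decide(case)
        self.review_queue.clear()
```

The point is not the specific threshold but that escalation to a human outer loop is designed in from the start, rather than bolted on after an incident.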
Open
Okay, so do you improve the process first and then automate it, or do you automate first? If you search the Internet for "paving the cow-paths", you can find strong opinions on both sides of this argument. But the important point here is that automation shouldn't close down all possibility of future change. Paving the cow-paths may be okay, but not just paving the cow-paths and thinking that's the end of the matter.
In some contexts, this may mean leaving a small proportion of cases to be handled manually, so that human know-how is not completely lost. (Lewis Mumford argued that it is generally beneficial to retain some "craft" production alongside automated "factory" production, as a means to further insight, discovery and invention.)
Transparency
The internal state and decision-making processes of a robot are accessible to (some) users. Provide ways to monitor and explain what the robots are up to, or to provide an audit trail in the event of something going wrong.
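For example, a decision-making component could be wrapped so that every decision is recorded with its inputs, output and model version, giving auditors something to reconstruct after the event. A minimal sketch, with hypothetical function and field names:

```python
import json
import time
import uuid

def with_audit_trail(decide, model_version, log_path="audit_log.jsonl"):
    """Wrap a decision function so that each call is appended to an audit log."""
    def audited(inputs: dict):
        decision = decide(inputs)
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return decision
    return audited

# Hypothetical usage
approve_loan = with_audit_trail(lambda inputs: inputs["score"] > 600, model_version="v1.2")
approve_loan({"score": 640})
```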
Related posts
How Soon Might Humans Be Replaced At Work? (November 2015), Could we switch the algorithms off? (July 2017), How many ethical principles? (April 2019), Responsible Transparency (April 2019), Process Automation and Intelligence (August 2019)
Links
Jim Highsmith, Paving Cow Paths (21 June 2005)
Wikipedia
Job Characteristic Theory
Just War Theory
Published on August 08, 2019 03:47
July 22, 2019
Algorithms and Auditability
@ruchowdh objects to an article by @etzioni and @tianhuil calling for algorithms to audit algorithms. The original article makes the following points.
Automated auditing, at a massive scale, can systematically probe AI systems and uncover biases or other undesirable behavior patterns.
High-fidelity explanations of most AI decisions are not currently possible. The challenges of explainable AI are formidable.
Auditing is complementary to explanations. In fact, auditing can help to investigate and validate (or invalidate) AI explanations.
Auditable AI is not a panacea. But auditable AI can increase transparency and combat bias.
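To make the first of these points concrete, here is a minimal sketch of the kind of probing an auditing algorithm might do at scale: generate paired inputs that differ only in a protected attribute and count how often the decision flips. This is my own illustration of what automated auditing could involve, not the method proposed by the authors.

```python
import random

def probe_for_bias(model, base_cases, attribute, values, trials=10_000):
    """Probe a black-box model with counterfactual pairs differing only in one attribute."""
    flips = 0
    for _ in range(trials):
        case = dict(random.choice(base_cases))
        a, b = random.sample(values, 2)
        case[attribute] = a
        decision_a = model(case)
        case[attribute] = b
        decision_b = model(case)
        if decision_a != decision_b:
            flips += 1
    return flips / trials   # proportion of decisions that depend on the protected attribute

# Hypothetical model and test data (the model is deliberately biased for illustration)
model = lambda c: c["income"] > 30_000 and c["gender"] != "female"
base_cases = [{"income": random.randint(10_000, 60_000), "gender": "male"} for _ in range(100)]
print(probe_for_bias(model, base_cases, attribute="gender", values=["male", "female"]))
```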
Rumman Chowdhury points out some of the potential imperfections of a system that relied on automated auditing, and does not like the idea that automated auditing might be an acceptable substitute for other forms of governance. Such a suggestion is not made explicitly in the article, and I haven't seen any evidence that this was the authors' intention. However, there is always a risk that people might latch onto a technical fix without understanding its limitations, and this risk is perhaps what underlies her critique.
In a recent paper, she calls for systems to be "taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand". But how can people verify that systems are not only ignoring these data, but also being cautious about other data that may serve as proxies for race and class, as discussed by Cathy O'Neil? How can they prove that a system is systematically unfair without having some classification data of their own?
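One way an auditor might look for such proxies is to measure how strongly each supposedly neutral feature is associated with a protected attribute, which (as the question above implies) requires the auditor to hold some classification data of their own. A minimal sketch, assuming pandas and hypothetical column names:

```python
import pandas as pd

def proxy_strength(df: pd.DataFrame, feature: str, protected: str) -> float:
    """Cramér's-V-style association between a candidate feature and a protected attribute."""
    table = pd.crosstab(df[feature], df[protected])
    expected = table.sum(1).values[:, None] * table.sum(0).values[None, :] / table.values.sum()
    chi2 = ((table.values - expected) ** 2 / expected).sum()
    n = table.values.sum()
    k = min(table.shape) - 1
    return (chi2 / (n * k)) ** 0.5 if k else 0.0

# Hypothetical data: does postcode act as a proxy for race?
df = pd.DataFrame({
    "postcode": ["A", "A", "B", "B", "B", "C"],
    "race":     ["x", "x", "y", "y", "x", "y"],
})
print(proxy_strength(df, "postcode", "race"))   # closer to 1.0 means a stronger proxy
```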
And yes, we know that all classification is problematic. But that doesn't mean being squeamish about classification, it just means being self-consciously critical about the tools you are using. Any given tool provides a particular lens or perspective, and it is important to remember that no tool can ever give you the whole picture. Donna Haraway calls this partial perspective.
With any tool, we need to be concerned about how the tool is used, by whom, and for whom. Chowdhury expects people to assume the tool will be in some sense "neutral", creating a "veneer of objectivity"; and she sees the tool as a way of centralizing power. Clearly there are some questions about the role of various stakeholders in promoting algorithmic fairness - the article mentions regulators as well as the ACLU - and there are some major concerns that the authors don't address in the article.
Chowdhury's final criticism is that the article "fails to acknowledge historical inequities, institutional injustice, and socially ingrained harm". If we see algorithmic bias as merely a technical problem, then this leads us to evaluate the technical merits of auditable AI, and acknowledge its potential use despite its clear limitations. And if we see algorithmic bias as an ethical problem, then we can look for various ways to "solve" and "eliminate" bias. @juliapowles calls this a "captivating diversion". But clearly that's not the whole story.
Some stakeholders (including the ACLU) may be concerned about historical and social injustice. Others (including the tech firms) are primarily interested in making the algorithms more accurate and powerful. So obviously it matters who controls the auditing tools. (Whom shall the tools serve?)
What algorithms and audits have in common is that they deliver opinions. A second opinion (possibly based on the auditing algorithm) may sometimes be useful - but only if it is reasonably independent of the first opinion, and doesn't entirely share the same assumptions or perspective. There are codes of ethics for human auditors, so we may want to ask whether automated auditing would be subject to some ethical code.
Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury, Using Artificial Intelligence to Promote Diversity (Sloan Management Review, Winter 2019)
Oren Etzioni and Michael Li, High-Stakes AI Decisions Need to Be Automatically Audited (Wired, 18 July 2019)
Donna Haraway, Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. In Simians, Cyborgs and Women (Free Association, 1991)
Cathy O'Neil, Weapons of Math Destruction (Crown, 2016)
Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)
Related posts: Whom Does the Technology Serve? (May 2019), Algorithms and Governmentality (July 2019)
Published on July 22, 2019 04:34
July 13, 2019
Algorithms and Governmentality
In the corner of the Internet where I hang out, it is reasonably well understood that big data raises a number of ethical issues, including data ownership and privacy.
There are two contrasting ways of characterizing these issues. One way is to focus on the use of big data to target individuals with increasingly personalized content, such as precision nudging. Thus mass surveillance provides commercial and governmental organizations with large quantities of personal data, allowing them to make precise calculations concerning individuals, and use these calculations for the purposes of influence and control.
Alternatively, we can look at how big data can be used to control large sets or populations - what Foucault calls governmentality. If the prime job of the bureaucrat is to compile lists that could be shuffled and compared (Note 1), then this function is increasingly being taken over by the technologies of data and intelligence - notably algorithms and so-called big data.
However, Deleuze challenges this dichotomy.
"We no longer find ourselves dealing with the mass/individual pair. Individuals have become 'dividuals' and masses, samples, data, markets, or 'banks'."
Foucault's version of Bentham's panopticon is often invoked in discussions of mass surveillance, but what was equally important for Foucault was what he called biopower - "a type of power that presupposed a closely meshed grid of material coercions rather than the physical existence of a sovereign". [Foucault 2003 via Adams]
People used to talk metaphorically about faceless bureaucracy being a "machine", but now we have a real machine, performing the same function with much greater efficiency and effectiveness. And of course, scale.
"The machine tended increasingly to dictate the purpose to be served, and to exclude other more intimate human needs." (Lewis Mumford)
Bureaucracy is usually regarded as a Bad Thing, so it's worth remembering that it is a lot better than some of the alternatives. Bureaucracy should mean you are judged according to an agreed set of criteria, rather than whether someone likes your face or went to the same school as you. Bureaucracy may provide some protection against arbitrary action and certain forms of injustice. And the fact that bureaucracy has sometimes been used by evil regimes for evil purposes isn't sufficient grounds for rejecting all forms of bureaucracy everywhere.
What bureaucracy does do is codify and classify, and this has important implications for discrimination and diversity.
Sometimes discrimination is considered to be a good thing. For example, recruitment should discriminate between those who are qualified to do the job and those who are not, and this can be based either on a subjective judgement or an agreed standard. But even this can be controversial. For example, the College of Policing is implementing a policy that police recruits in England and Wales should be educated to degree level, despite strong objections from the Police Federation.
Other kinds of discrimination such as gender and race are widely disapproved of, and many organizations have an official policy disavowing such discrimination, or affirming a belief in diversity. Despite such policies, however, some unofficial or inadvertent discrimination may often occur, and this can only be discovered and remedied by some form of codification and classification. Organizations often have a diversity survey as part of their recruitment procedure, so that they can monitor the numbers of recruits by gender, race, religion, sexuality, disability or whatever, but of course this depends on people's willingness to place themselves in one of the defined categories. (If everyone ticks the "prefer not to say" box, then the diversity statistics are not going to be very helpful.)
Bureaucracy produces lists, and of course the lists can either be wrong or used wrongly. For example, King's College London recently apologized for denying access to selected students during a royal visit.
Big data also codifies and classifies, although much of this is done on inferred categories rather than declared ones. For example, some social media platforms infer gender from someone's speech acts (or what Judith Butler would call performativity). And political views can apparently be inferred from food choice. The fact that these inferences may be inaccurate doesn't stop them being used for targeting purposes, or population control.
Cathy O'Neil's statement that algorithms are "opinions embedded in code" is widely quoted. This may lead people to think that this is only a problem if you disagree with these opinions, and that the main problem with big data and algorithmic intelligence is a lack of perfection. And of course technology companies encourage ethics professors to look at their products from this perspective, firstly because they welcome any ideas that would help them make their products more powerful, and secondly because it distracts the professors from the more fundamental question as to whether they should be doing things like facial recognition in the first place. @juliapowles calls this a "captivating diversion".
But a more fundamental question concerns the ethics of codification and classification. Following a detailed investigation of this topic, published under the title Sorting Things Out, Bowker and Star conclude that "all information systems are necessarily suffused with ethical and political values, modulated by local administrative procedures" (p321).
"Black boxes are necessary, and not necessarily evil. The moral questions arise when the categories of the powerful become the taken for granted; when policy decisions are layered into inaccessible technological structures; when one group's visibility comes at the expense of another's suffering." (p320)At the end of their book (pp324-5), they identify three things they want designers and users of information systems to do. (Clearly these things apply just as much to algorithms and big data as to older forms of information system.)
Firstly, allow for ambiguity and plurality, permitting multiple definitions across different domains. They call this recognizing the balancing act of classifying. Secondly, the sources of the classifications should remain transparent. If the categories are based on some professional opinion, these should be traceable to the profession or discourse or other authority that produced them. They call this rendering voice retrievable. And thirdly, awareness of the unclassified or unclassifiable "other". They call this being sensitive to exclusions, and note that "residual categories have their own texture that operates like the silences in a symphony to pattern the visible categories and their boundaries" (p325).
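As a minimal sketch of how these three recommendations might be reflected in a data model (the structure and field names are my own illustrative assumptions, not Bowker and Star's): each category can carry different definitions in different domains, every definition records the authority that produced it, and a residual category is always present.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Category:
    code: str
    # balancing act: allow different definitions in different domains
    definitions: Dict[str, str] = field(default_factory=dict)   # domain -> definition
    # voice retrievable: record the authority that produced each definition
    sources: Dict[str, str] = field(default_factory=dict)       # domain -> authority

@dataclass
class Classification:
    categories: Dict[str, Category] = field(default_factory=dict)

    def __post_init__(self):
        # sensitivity to exclusions: the residual category is always present
        self.categories.setdefault("OTHER", Category(
            code="OTHER",
            definitions={"*": "unclassified / does not fit the visible categories"},
            sources={"*": "schema design"},
        ))

    def classify(self, code: str) -> Category:
        # anything not recognized falls into the explicit residual category
        return self.categories.get(code, self.categories["OTHER"])
```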
Note 1: This view is attributed to Bruno Latour by Bowker and Star (1999 p 137). However, although Latour talks about paper-shuffling bureaucrats (1987 pp 254-5), I have been unable to find this particular quote.
Rachel Adams, Michel Foucault: Biopolitics and Biopower (Critical Legal Thinking, 10 May 2017)
Geoffrey Bowker and Susan Leigh Star, Sorting Things Out (MIT Press 1999).
Gilles Deleuze, Postscript on the Societies of Control (October, Vol 59, Winter 1992), pp. 3-7
Michel Foucault ‘Society Must be Defended’ Lecture Series at the Collège de France, 1975-76 (2003) (trans. D Macey)
Maša Galič, Tjerk Timan and Bert-Jaap Koops, Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation (Philos. Technol. 30:9–37, 2017)
Bruno Latour, Science in Action (Harvard University Press 1987)
Lewis Mumford, The Myth of the Machine (1967)
Samantha Murphy, Political Ideology Linked to Food Choices (LiveScience, 24 May 2011)
Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)
BBC News, All officers 'should have degrees', says College of Policing (13 November 2015), King's College London sorry over royal visit student bans (4 July 2019)
Related posts
Quotes on Bureaucracy (June 2003), What is the Purpose of Diversity? (January 2010), The Game of Wits between Technologists and Ethics Professors (June 2019)
Published on July 13, 2019 02:48
June 21, 2019
With Strings Attached
@rachelcoldicutt notes that "Google Docs new grammar suggestion tool doesn’t like the word 'funding' and prefers 'investment' ".
Many business people have an accounting mindset, in which all expenditure must be justified in terms of benefit to the organization, measured in financial terms. When they hear the word "investment", they hold their breath until they hear the word "return".
So when Big Tech funds the debate on AI ethics (Oscar Williams, New Statesman, 6 June 2019), can we infer that Big Tech sees this as an "investment", to which it is entitled to a return or payback?
Related post: The Game of Wits Between Technologists and Ethics Professors (June 2019)
Published on June 21, 2019 10:46
June 8, 2019
The Game of Wits between Technologists and Ethics Professors
What does #TechnologyEthics look like from the viewpoint of your average ethics professor?
Not surprisingly, many ethics professors believe strongly in the value of ethics education, and advocate ethics awareness training for business managers and engineers. Provided by people like themselves, obviously.
There is a common pattern among technologists and would-be entrepreneurs to first come up with a "solution", find areas where the solution might apply, and then produce self-interested arguments to explain why the solution matches the problem. Obviously there is a danger of confirmation bias here. Proposing ethics education as a solution for an ill-defined problem space looks suspiciously like the same pattern. Ethicists should understand why it is important to explain what this education achieves, and how exactly it solves the problem.
Ethics professors may also believe that people with strong ethical awareness, such as themselves, can play a useful role in technology governance - for example, participating in advisory councils.
Some technology companies may choose to humour these academics, engaging them as a PR exercise (ethics washing) and generously funding their research. Fortunately, many of them lack deep understanding of business organizations and of technology, so there is little risk of them causing any serious challenge or embarrassment to these companies.
Professors are always attracted to the kind of work that lends itself to peer-reviewed articles in leading journals. So it is fairly easy to keep their attention focused on theoretically fascinating questions that have little or no practical relevance, such as the Trolley Problem.
Alternatively, they can be engaged to try and "fix" problems with real practical relevance, such as algorithmic bias. @juliapowles calls this a "captivating diversion", distracting academics from the more fundamental question, whether the algorithm should be built at all.
It might be useful for these ethics professors to have deeper knowledge of technology and business, enabling them to ask more searching and more relevant questions. But if only a minority of ethics professors possess sufficient knowledge and experience, these will be overlooked for the plum advisory jobs. I therefore advocate compulsory technology awareness training for ethics professors, especially "prominent" ones. Provided by people like myself, obviously.
Stephanie Burns, Solution Looking For A Problem (Forbes, 28 May 2019)
Casey Fiesler, Tech Ethics Curricula: A Collection of Syllabi (5 July 2018), What Our Tech Ethics Crisis Says About the State of Computer Science Education (5 December 2018)
Mark Graban, Cases of Technology “Solutions” Looking for a Problem? (26 January 2011)
Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)
Oscar Williams, How Big Tech funds the debate on AI ethics (New Statesman, 6 June 2019)
Related posts:
Leadership and Governance (May 2019)
Published on June 08, 2019 04:45
May 28, 2019
Five Elements of Responsibility by Design
I have been developing an approach to #TechnologyEthics, which I call #ResponsibilityByDesign. It is based on the five elements of #VPECT. Let me start with a high-level summary before diving into some of the detail.
Values
Why does ethics matter?
What outcomes for whom?
Policies
Principles and practices of technology ethics
Formal codes of practice, etc.
Regulation.
Event-Driven (Activity Viewpoint)
Effective and appropriate action at different points: planning; risk assessment; design; verification, validation and test; deployment; operation; incident management; retirement. (Also known as the Process Viewpoint).
Content (Knowledge Viewpoint)
What matters from an ethical point of view? What issues do we need to pay attention to?
Where is the body of knowledge and evidence that we can reference?
Trust (Responsibility Viewpoint)
Transparency and governance
Responsibility, Authority, Expertise, Work (RAEW)
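As a minimal sketch of how a RAEW analysis might be recorded and checked in code (the example activities, stakeholders and misalignment rule are my own illustrative assumptions, not a definitive statement of the method):

```python
from typing import Dict, Set

# RAEW matrix: for each activity, which stakeholders hold
# Responsibility, Authority, Expertise and/or Work
raew: Dict[str, Dict[str, Set[str]]] = {
    "approve algorithm changes": {
        "product owner": {"R", "A"},
        "ethics board":  {"E"},
        "dev team":      {"W", "E"},
    },
    "handle customer complaints": {
        "support team": {"R", "W"},
        "legal":        {"A"},
    },
}

def possible_misalignments(matrix):
    """Flag activities where R, A, E or W is missing, or responsibility lacks authority."""
    issues = []
    for activity, holders in matrix.items():
        held = set().union(*holders.values())
        for element in "RAEW":
            if element not in held:
                issues.append(f"{activity}: nobody holds {element}")
        for stakeholder, elements in holders.items():
            if "R" in elements and "A" not in elements:
                issues.append(f"{activity}: {stakeholder} has responsibility without authority")
    return issues

print(possible_misalignments(raew))
```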
Concerning technology ethics, there is a lot of recent published material on each of these elements separately, but I have not yet found much work that puts them together in a satisfactory way. Many working groups concentrate on a single element - for example, principles or transparency. And even when experts link multiple elements, the logical connections aren't always spelled out.
At the time of writing this post (May 2019), I haven't yet fully worked out how to join these elements either, and I shall welcome constructive feedback from readers and pointers to good work elsewhere. I am also keen to find opportunities to trial these ideas on real projects.
Related Posts
Responsibility by Design (June 2018)
What is Responsibility by Design (October 2018)
Why Responsibility by Design Now? (October 2018)
Published on May 28, 2019 07:43
May 19, 2019
The Nudge as a Speech Act
Once upon a time, nudges were physical rather than verbal - a push on the shoulder perhaps, or a dig in the ribs with an elbow. The meaning was elliptical and depended almost entirely on context. "Nudge nudge, wink wink", as Monty Python used to say.
Even technologically mediated nudges can sometimes be physical, or what we should probably call haptic. For example, the fitness band that vibrates when it thinks you have been sitting for too long.
Many of the acts we now think of as nudges, however, are delivered verbally, as some kind of speech act. But which kind?
The most obvious kind of nudge is a direct suggestion, which may take the form of a weak command. ("Try and eat a little now.") But nudges can also take other illocutionary forms, including questions ("Don't you think the sun is very hot here?") and statements / predictions ("You will find that new nose of yours very useful to spank people with.").
(Readers familiar with Kipling may recognize my examples as the nudges given by the Bi-Coloured-Python-Rock-Snake to the Elephant's Child.)
The force of a suggestion may depend on context and tone of voice. (A more systematic analysis of what philosophers call illocutionary force can be found in the Stanford Encyclopedia of Philosophy, based on Searle and Vanderveken 1985.)
A speech act can also gain force by being associated with action. If I promise to donate money to a given charity, this may nudge other people to do the same; but if they see me actually putting the money in the tin, the nudge might be much stronger. Then again, the nudge might be just as strong if I simply put the money in the tin without saying anything, as long as everyone sees me do it. The important point is that some communication takes place, whether verbal or non-verbal, and this returns us to something closer to the original concept of nudge.
From an ethical point of view, there are particular concerns about unobtrusive or subliminal nudges. Yeung has introduced the concept of the Hypernudge, which combines three qualities: nimble, unobtrusive and highly potent. I share her concerns about this combination, but I think it is helpful to deal with these three qualities separately, before looking at the additional problems that may arise when they are combined.
Proponents of the nudge sometimes try to distinguish between unobtrusive (acceptable) and subliminal (unacceptable), but this distinction may be hard to sustain, and many people quote Luc Bovens' observation that nudges "typically work better in the dark". See also Baldwin.
I'm sure there's more to say on this topic, so I may update this post later. Relevant comments always welcome.
Robert Baldwin, From regulation to behaviour change: giving nudge the third degree (The Modern Law Review 77/6, 2014) pp 831-857
Luc Bovens, The Ethics of Nudge. In Mats J. Hansson and Till Grüne-Yanoff (eds.), Preference Change: Approaches from Philosophy, Economics and Psychology. (Berlin: Springer, 2008) pp. 207-20
John Danaher, Algocracy as Hypernudging: A New Way to Understand the Threat of Algocracy (Institute for Ethics and Emerging Technologies, 17 January 2017)
J. Searle and D. Vanderveken, Foundations of Illocutionary Logic (Cambridge: Cambridge University Press, 1985)
Karen Yeung, ‘Hypernudge’: Big Data as a Mode of Regulation by Design (Information, Communication and Society (2016) 1,19; TLI Think! Paper 28/2016)
Stanford Encyclopedia of Philosophy: Speech Acts
Related post: On the Ethics of Technologically Mediated Nudge (May 2019)
Published on May 19, 2019 04:08


