Richard Veryard's Blog
July 3, 2020
Naive Epistemology
One of the things I learned from studying maths and philosophy is an appreciation of what things follow from what other things: identifying and understanding what assumptions are implicit in a given argument, and what axioms are required to establish a given proof.
So when I see or hear something that I disagree with, I feel the need to trace where the disagreement comes from - is there a difference in fact or value or something else? Am I missing some critical piece of knowledge or understanding, that might lead me to change my mind? And if I want to correct someone's error, is there some piece of knowledge or understanding that I can give them, that will bring them around to my way of thinking?
(By the way, this skill would seem important for teachers. If a child struggles with simple arithmetic, exactly which step in the process has the child failed to grasp? However, teachers don't always have time to do this.)
There is also an idea of the economy of argument. What is the minimum amount of knowledge or understanding that is needed in this context, and how can I avoid complicating the argument by bringing in a lot of other material that may be fascinating but not strictly relevant? (I acknowledge that I don't always follow this principle myself.) And when I'm wrong about something, how can other people help me see this without requiring me to wade through far more material than I have time for?
There was a thread on Twitter recently, prompted by some weak thinking by a certain computer scientist. @jennaburrell noted that computer science has never been very strong on epistemology – either recognizing that it implicitly has one, that there might be any other, or interrogating its weaknesses as a way of understanding the world.
Some people suggested that the solution involves philosophy.
People in CS and machine learning have been haphazardly trying to reinvent epistemology while universities make cuts to philosophy departments. Instead of getting more STEM majors we might be better off if we figured out how to send more funding to the humanities.—
Published on July 03, 2020 10:05
June 29, 2020
Bold, Restless Experimentation
In his latest speech, invoking the spirit of Franklin Delano Roosevelt, Michael Gove calls for bold, restless experimentation.
Although one of Gove's best known pronouncements was his statement during the Brexit campaign that people in this country have had enough of experts ..., Fraser Nelson suggests he never intended this to refer to all experts: he was interrupted before he could specify which experts he meant.
Many of those who share Gove's enthusiasm for disruptive innovation also share his ambivalence about expertise. Joe McKendrick quotes Vala Afshar of DisrupTV: If the problem is unsolved, it means there are no experts.
Joe also quotes Michael Sikorsky of Robots and Pencils, who links talent, speed of decision and judgement, and talks about pushing as much of the decision rights as possible right to the edge of the organization. Meanwhile, Michael Gove also talks about diversifying the talent pool - not only a diversity of views but also a diversity of skills.
In some quarters, expertise means centralized intelligence - for example, clever people in Head Office. The problems with this model were identified by Harold Wilensky in his 1967 book on Organizational Intelligence, and explored more rigorously by David Alberts and his colleagues in CCRP, especially under the Power To The Edge banner.
Expertise also implies authority and permission; so rebellion against expertise can also take the form of permissionless innovation. Adam Thierer talks about the tinkering and continuous exploration that takes place at multiple levels.
However, elevating individual talent over collective expertise is a risky enterprise. Malcolm Gladwell calls this the Talent Myth. For further discussion and links, see my post Explaining Enron.
Michael Gove, The Privilege of Public Service (Ditchley Annual Lecture, 27 June 2020)
Joe McKendrick, Artificial Intelligence May Bring More Management-Free Organizations (Forbes, 8 June 2020)
Henry Mance, Britain has had enough of experts, says Gove (Financial Times, 3 June 2016)
Fraser Nelson, Don't ask the experts (Spectator, 14 January 2017)
Adam Thierer, Permissionless Innovation (Mercatus Center, 2014/2016)
Related posts: Demise of the Superstar (August 2004), Power to the Edge (December 2005), Explaining Enron (January 2010), Enemies of Intelligence (May 2010)
Published on June 29, 2020 14:58
January 28, 2020
The Algorithmic Child and the Anxious Parent
#OIILondonLecture An interesting lecture by @VickiNashOII of @oiioxford at @BritishAcademy_ this evening, entitled Connected cots, talking teddies and the rise of the algorithmic child.
Since the early days of the World Wide Web, people have been concerned about the risks to children. Initially, these were seen in terms of protecting children from unsuitable content and from contact with unsuitable strangers. Children also needed to be prevented from behaving inappropriately on the Internet.
In the days when a typical middle-class household had a single fixed computer in a downstairs room, it was relatively easy for parents to monitor their children's use of the Internet. But nowadays children in Western countries think themselves deprived if they don't have the latest smartphone, and even toddlers often have their own tablet computers. So much of the activity can be hidden in the bedroom, or even under the bedclothes after lights out.
Furthermore, connection to the Internet is not merely through computers, phones, tablets and games consoles, but also through chatbots and connected toys, as well as the Internet of Things. So there are some new threats to children as well as the older threats, including privacy and security, and it may be increasingly difficult for parents to protect their children from these threats. (Even confiscating the phones may not solve the problem: one resourceful Kentucky teenager managed to send messages from the family smartfridge.)
And as Dr Nash pointed out, it's no longer just about how children use the internet, but also how the internet uses children. Large-scale collection and use of data is not just being practised by the technology giants, but by an increasing number of consumer companies and other commercial enterprises. One of the most interesting developments here is the provision of surveillance tools to help parents monitor their children.
Parents are being told that good parenting means keeping your children safe, and keeping them safe means knowing where they are at all times, what they are doing, whom they are with, and so on. All thanks to various tracking apps that provide real-time information about your children's location and activity. And even when they are at home, asleep in their own beds, there are monitoring technologies to track their temperature or breathing, and alert the parents of any abnormal pattern.
Dr Nash argues that this expectation of constantly monitoring one's children contributes to a significant alteration in the parent-child relationship, and in our norms of parenthood. Furthermore, as children become teenagers, they will increasingly be monitoring themselves, in healthy or unhealthy ways. So how should the monitoring parents monitor the monitoring?
One of the problems with any surveillance technology is that it provides a single lens for viewing what is going on. Although this may be done with good intentions, and may often be beneficial, it is also selective in what it captures. It is so easy to fall into the fallacy of thinking that what is visible is important, and what is not visible is not important. Those aspects of a child's life and experience that can be captured by clever technology aren't necessarily those aspects that a parent should be paying most attention to.
Linda Geddes, Does sharing photos of your children on Facebook put them at risk? (The Guardian, 21 Sep 2014)
Victoria Nash, The Unpolitics of Child Protection (Oxford Internet Institute, 5 May 2013)
Victoria Nash, Connected toys: not just child’s play (Parent Info, May 2018)
Victoria Nash, Huw Davies and Allison Mishkin, Digital Safety in the Era of Connected Cots and Talking Teddies (Oxford Internet Institute, 25 June 2019)
Caitlin O'Kane, Teen goes viral for tweeting from LG smart fridge after mom confiscates all electronics (CBS News, 14 August 2019)
Related posts: IOT is coming to town (December 2017), Shoshana Zuboff on Surveillance Capitalism (February 2019), Towards Chatbot Ethics (May 2019)
Published on January 28, 2020 15:38
November 7, 2019
Jaywalking
Until the arrival of the motor car, the street belonged to humans and horses. The motor car was regarded as an interloper, and was generally blamed for collisions with pedestrians. Cities introduced speed limits and other safety measures to protect pedestrians from the motor car.
The motor industry fought back. Their goal was to shift the blame for collisions onto the foolish or foolhardy pedestrian, who had crossed the road in the wrong place at the wrong time, or showed insufficient respect to our new four-wheeled masters. A new crime was invented, known as jaywalking, and newspapers were encouraged to describe road accidents in these terms.
In March 2018, a middle-aged woman was killed by a self-driving car. This is thought to be the first recorded death by a fully autonomous vehicle. According to the US National Transportation Safety Board (NTSB), the code failed to recognise her as a pedestrian because she was not at an obvious designated crossing. In other words, she was jaywalking.
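To make that failure mode concrete, here is a minimal, purely hypothetical Python sketch (not the actual Uber software, whose details are only summarised in the NTSB material): if a perception rule only expects pedestrians at or near designated crossings, a person crossing mid-block simply falls outside its categories.

```python
# Hypothetical illustration only, not Uber's code. It shows how a rule that
# ties the "pedestrian" label to proximity to a marked crossing fails to
# protect a jaywalker. All names and thresholds are invented for this sketch.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    looks_human: bool                       # output of an upstream detector
    distance_to_marked_crossing_m: float    # metres to nearest marked crossing

def classify(obj: DetectedObject, crossing_radius_m: float = 10.0) -> str:
    """Toy rule: only expect pedestrians at or near marked crossings."""
    if obj.looks_human and obj.distance_to_marked_crossing_m <= crossing_radius_m:
        return "pedestrian"       # pedestrian-specific caution would follow
    return "unknown object"       # no pedestrian-specific behaviour triggered

# A person crossing the road mid-block, far from any marked crossing:
print(classify(DetectedObject(looks_human=True, distance_to_marked_crossing_m=80.0)))
# -> "unknown object"
```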
Aidan Lewis, Jaywalking: How the car industry outlawed crossing the road (BBC News, 12 February 2014)
Peter Norton, Street Rivals: Jaywalking and the Invention of the Motor Age Street (Technology and Culture, Vol 48, April 2007)
Katyanna Quach, Remember the Uber self-driving car that killed a woman crossing the street? The AI had no clue about jaywalkers (The Register, 6 November 2019)
Joseph Stromberg, The forgotten history of how automakers invented the crime of "jaywalking" (Vox, 4 November 2015)
Published on November 07, 2019 15:16
October 30, 2019
What Difference Does Technology Make?
In his book on policy-making, Geoffrey Vickers talks about three related types of judgment – reality judgment (what is going on, also called appreciation or sense-making), value judgment and action judgment.
In his book on technology ethics, Hans Jonas notes "the excess of our power to act over our power to foresee and our power to evaluate and to judge" (p22). In other words, technology disrupts the balance between the three types of judgment identified by Vickers.
Jonas (p23) identifies four critical differences between technological action and earlier forms:
- novelty of its methods
- unprecedented nature of some of its objects
- sheer magnitude of most of its enterprises
- indefinitely cumulative propagation of its effects

Another disruptive effect of technology is that it affects our reality judgments. Our knowledge and understanding of what is going on (WIGO) is rarely direct, but is mediated (screened) by technology and systems. We get an increasing amount of our information about our social world through technical media: information systems and dashboards, email, telephone, television, internet, social media, and these systems in turn rely on data collected by a wide range of monitoring instruments, including IoT. These technologies screen information for us, screen information from us.
The screen here is both literal and metaphorical. It is a surface on which the data are presented, and also a filter that controls what the user sees. The screen is a two-sided device: it both reveals information and hides information.
Heidegger thought that technology tends to constrain or impoverish the human experience of reality in specific ways. Albert Borgmann argued that technological progress tends to increase the availability of a commodity or service, and at the same time pushes the actual device or mechanism into the background. Thus technology is either seen as a cluster of devices, or it isn't seen at all. Borgmann calls this the Device Paradigm.
But there is a paradox here. On the one hand, the device encourages us to pay attention to the immediate affordance of the device, and to ignore the systems that support it. So we happily consume recommendations from media and technology giants, without looking too closely at the surveillance systems and vast quantities of personal data that feed into these recommendations. But on the other hand, technology (big data, IoT, wearables) gives us the power to pay attention to vast areas of life that were previously hidden.
In agriculture for example, technology allows the farmer to have an incredibly detailed map of each field, showing how the yield varies from one square metre to the next. Or to monitor every animal electronically for physical and mental wellbeing.
And not only farm animals, but also ourselves. As I said in my post on the Internet of Underthings, we are now encouraged to account for everything we do: footsteps, heartbeats, posture. (Until recently this kind of micro-attention to oneself was regarded as slightly obsessional; nowadays it seems to be perfectly normal.)
Albert Borgmann, Technology and the Character of Contemporary Life (University of Chicago Press, 1984)
Hans Jonas, The Imperative of Responsibility (University of Chicago Press, 1984)
Geoffrey Vickers, The Art of Judgment: A Study in Policy-Making (Sage 1965)
Published on October 30, 2019 13:19
October 20, 2019
On the Scope of Ethics
I was involved in a debate this week, concerning whether ethical principles and standards should include weapons systems, or whether military purposes should be explicitly excluded.
On both sides of the debate, there were people who strongly disapproved of weapons systems, but this disapproval led them to two opposite positions. One side felt that applying any ethical principles and standards to such systems would imply a level of ethical approval or endorsement, which they would prefer to withhold. The other side felt that weapons systems called for at least as much ethical scrutiny as anything else, if not more, and thought that exempting weapons systems implied a free pass.
It goes without saying that people disapprove of weapons systems to different degrees. Some people think they are unacceptable in all circumstances, while others see them as a regrettable necessity and welcome the economic activity and technological spin-offs that they produce. It's also worth noting that there are other sectors that attract strong disapproval from many people, including gambling, hydrocarbons, nuclear energy and tobacco, especially where these appear to rely on disinformation campaigns such as climate science denial.
It's also worth noting that there isn't always a clear dividing line between those products and technologies that can be used for military purposes and those that cannot. For example, although the dividing line between peaceful nuclear power and nuclear weapons may be framed as a purely technical question, this has major implications for international relations, and technical experts may be subject to significant political pressure.
While there may be disagreements about the acceptability of a given technology, and legitimate suspicion about potential use, these should be capable of being addressed as part of ethical governance. So I don't think this is a good reason for limiting the scope.
However, a better reason for limiting the scope may be to simplify the task. Given finite time and resources, it may be better to establish effective governance for a limited scope than to take forever getting something that works properly for everything. This leads to the position that although some ethical governance may apply to weapons systems, this doesn't mean that every ethical governance exercise must address such systems. And therefore it may be reasonable to exclude such systems from a specific exercise for a specific time period, provided that this doesn't rule out the possibility of extending the scope at a later date.
Published on October 20, 2019 07:27
October 8, 2019
Ethics of Transparency and Concealment
Last week I was in Berlin at the invitation of the IEEE to help develop standards for responsible technology (P7000). One of the working groups (P7001) is looking at transparency, especially in relation to autonomous and semi-autonomous systems. In this blogpost, I want to discuss some more general ideas about transparency.
In 1986 I wrote an article for Human Systems Management promoting the importance of visibility. There were two reasons I preferred this word. Firstly, "transparency" is a contronym - it has two opposite senses. When something is transparent, this either means you don't see it, you just see through it, or it means you can really see it. And secondly, transparency appears to be merely a property of an object, whereas visibility is about the relationship between the object and the viewer - visibility to whom?
(P7001 addresses this by defining transparency requirements in relation to different stakeholder groups.)
Although I wasn't aware of this when I wrote the original article, my concept of visibility shares something with Heidegger's concept of Unconcealment (Unverborgenheit). Heidegger's word seems a good starting point for thinking about the ethics of transparency.
Technology generally makes certain things available while concealing other things. (This is related to what Albert Borgmann, a student of Heidegger, calls the Device Paradigm.)
In our time, things are not even regarded as objects, because their only important quality has become their readiness for use. Today all things are being swept together into a vast network in which their only meaning lies in their being available to serve some end that will itself also be directed towards getting everything under control. (Lovitt)
Goods that are available to us enrich our lives and, if they are technologically available, they do so without imposing burdens on us. Something is available in this sense if it has been rendered instantaneous, ubiquitous, safe, and easy. (Borgmann)

I referred above to the two opposite meanings of the word "transparent". For Heidegger and his followers, the word "transparent" often refers to tools that can be used without conscious thought, or what Heidegger called ready-to-hand (zuhanden). In technology ethics, on the other hand, the word "transparent" generally refers to something (product, process or organization) being open to scrutiny, and I shall stick to this meaning for the remainder of this blogpost.
We are surrounded by technology, we rarely have much idea how most of it works, and usually cannot be bothered to find out. Thus when technological devices are designed to conceal their inner workings, this is often exactly what the users want. How then can we object to concealment?
The ethical problems of concealment depend on what is concealed by whom and from whom, why it is concealed, and whether, when and how it can be unconcealed.
Let's start with the why. Sometimes people deliberately hide things from us, for dishonest or devious reasons. This category includes so-called defeat devices that are intended to cheat regulations. Less clear-cut is when people hide things to avoid the trouble of explaining or justifying them.
People may also hide things for aesthetic reasons. The Italian civil engineer Riccardo Morandi designed bridges with the steel cables concealed, which made them difficult to inspect and maintain. The Morandi Bridge in Genoa collapsed in August 2018, killing 43 people.
And sometimes things are just hidden, not as a deliberate act but because nobody has thought it necessary to make them visible.
We also need to consider the who. For whose benefit are things being hidden? In particular, who is pulling the strings, where is the funding coming from, and where are the profits going - follow the money. In technology ethics, the key question is Whom Does The Technology Serve?
In many contexts, therefore, the main focus of unconcealment is not understanding exactly how something works but being aware of the things that people might be trying to hide from you, for whatever reason. This might include being selective about the available evidence, or presenting the most common or convenient examples and ignoring the outliers. It might also include failing to declare potential conflicts of interest.
For example, the #AllTrials campaign for clinical trial transparency demands that drug companies declare all clinical trials in advance, rather than waiting until the trials are complete and then deciding which ones to publish.
Now let's look at the possibility of unconcealment. Concealment doesn't always mean making inconvenient facts impossible to discover, but may mean making them so obscure and inaccessible that most people don't bother, or creating distractions that divert people's attention elsewhere. So transparency doesn't just entail possibility, it requires a reasonable level of accessibility.
Sometimes too much information can also serve to conceal the truth. Onora O'Neill talks about the "cult of transparency" that fails to produce real trust.
Transparency can produce a flood of unsorted information and misinformation that provides little but confusion unless it can be sorted and assessed. It may add to uncertainty rather than to trust. Transparency can even encourage people to be less honest, so increasing deception and reducing reasons for trust. (O'Neill)

Sometimes this can be inadvertent. However, as Chesterton pointed out in one of his stories, this can be a useful tactic for those who have something to hide.
Where would a wise man hide a leaf? In the forest. If there were no forest, he would make a forest. And if he wished to hide a dead leaf, he would make a dead forest. And if a man had to hide a dead body, he would make a field of dead bodies to hide it in. (Chesterton)

Stohl et al call this strategic opacity (via Ananny and Crawford).
Another philosopher who talks about the "cult of transparency" is Shannon Vallor. However, what she calls the "Technological Transparency Paradox" seems to be merely a form of asymmetry: we are open and transparent to the social media giants, but they are not open and transparent to us.
In the absence of transparency, we are forced to trust people and organizations - not only for their honesty but also their competence and diligence. Under certain conditions, we may trust independent regulators, certification agencies and other institutions to verify these attributes on our behalf, but this in turn depends on our confidence in their ability to detect malfeasance and enforce compliance, as well as believing them to be truly independent. (Which means that these institutions too must be transparent.)
And trusting products and services typically means trusting the organizations and supply chains that produce them, in addition to any inspection, certification and official monitoring that these products and services have undergone.
... to be continued
UK Department of Health and Social Care, Response to the House of Commons Science and Technology Committee report on research integrity: clinical trials transparency (UK Government Policy Paper, 22 February 2019) via AllTrials
Mike Ananny and Kate Crawford, Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability (New Media & Society, 2016) pp 1–17
Albert Borgmann, Technology and the Character of Contemporary Life (University of Chicago Press, 1984)
G.K. Chesterton, The Sign of the Broken Sword (The Saturday Evening Post, 7 January 1911)
Martin Heidegger, The Question Concerning Technology. Introduction by William Lovitt.
Onora O'Neill, Trust is the first casualty of the cult of transparency (Telegraph, 24 April 2002)
Cynthia Stohl, Michael Stohl and P.M. Leonardi, Managing opacity: Information visibility and the paradox of transparency in the digital age (International Journal of Communication Systems 10, January 2016) pp 123–137
Richard Veryard, The Role of Visibility in Systems (Human Systems Management 6, 1986) pp 167-175 (this version includes some further notes dated 1999)
Stanford Encyclopedia of Philosophy: Heidegger, Technological Transparency Paradox
Wikipedia: Follow the money, Ponte Morandi, Regulatory Capture
Related posts: Defeating the Device Paradigm (October 2015), Responsible Transparency (April 2019), Whom Does The Technology Serve (May 2019)
Published on October 08, 2019 00:46
September 23, 2019
Technology and The Discreet Cough
In fiction, servants cough discreetly to make people aware of their presence. (I'm thinking of P.G. Wodehouse, but there must be other examples.)
Technological devices sometimes call our attention to themselves for various reasons. John Ehrenfeld calls this presencing. The device goes from available (ready-to-hand) to conspicuous (visible).
In many cases this is seen as a malfunction, when the device fails to provide the expected commodity (obstinate) and thereby interrupts our intended action (obstructive).
However, in some cases the presencing is part of the design - the device nudging us into some kind of conscious engagement (or even what Borgmann calls focal practice).
Ehrenfeld's example is the two-button toilet flush, which allows the user to select more or less water. He sees this as "lending an ethical context to the task at hand" (p155) - thus the user is not only choosing the quantity of water but also being mindful of the environmental impact of this choice. Even if this mindfulness may diminish with familiarity, "the ethical nature of the task has become completely intertwined with the more practical aspects of the process". In other words, the environmentally friendly path has become routine (normalized).
Of course, people who are really mindful of the environmental or financial impact of wasting water may sometimes choose not to flush at all (following the slogan “If it’s yellow, let it mellow; if it’s brown, flush it down”) or perhaps to wee behind a tree in the garden rather than use the toilet. It is quite possible that the two button flush might nudge a few more people to think this way.
So sometimes a little gentle obstinacy on the part of our technological devices may be a good thing.
Albert Borgmann, Technology and the Character of Contemporary Life (Chicago, 1984)
John Ehrenfeld, Sustainability by Design (Yale, 2008)
Published on September 23, 2019 01:41
September 18, 2019
What Does Diversion Mean?
Diversion has several different meanings in the world of ethics.
Distraction. An idea or activity serves as a distraction from what's important. For example, @juliapowles uses the term "captivating diversion" to refer to ethicists becoming preoccupied with narrow computational puzzles that distract them from far more important issues.
Substitution. People are redirected from something harmful to something supposedly less harmful. For example, switching from smoking to vaping. See my post on the Ethics of Diversion - Tobacco Example (September 2019).
Unauthorized Utilization. Using products for some purpose other than that approved or prescribed in a given market. There are various forms of this, some of which are both illegal and unethical, while others may be ethically justifiable.
- Drug diversion, the transfer of any legally prescribed controlled substance from the individual for whom it was prescribed to another person for any illicit use.
- Grey imports. Drug companies try to control shipments of drugs between markets, especially when this is done to undercut the official drug prices. However, some people regard the tactics of the drug companies as unethical. Médecins Sans Frontières, the medical charity, has accused one pharma giant of promoting overly-intrusive patient surveillance to stop a generic drug being diverted to patients in developed countries.
- Off-label use. Doctors may prescribe drugs for a purpose or patient group outside the official approval, with various degrees of justification. For more discussion, see my post Off-Label (March 2005).
Exploiting Regulatory Divergence. Carrying out activities (for example, conducting trials) in countries with underdeveloped ethics and weak regulatory oversight. See debate between Wertheimer and Resnick.
Amy Kazmin, Pharma combats diversion of cheap drugs (FT 12 April 2015)
Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)
David B. Resnik, Addressing diversion effects (Journal of Law and the Biosciences, 2015) 428–430
Alan Wertheimer, The ethics of promulgating principles of research ethics: the problem of diversion effects (J Law Biosci. 2(1) Feb 2015) 2-32
Wikipedia: Drug Diversion
Published on September 18, 2019 06:53
September 16, 2019
The Ethics of Diversion - Tobacco Example
What are the ethics of diverting people from smoking to vaping?
On the one hand, we have the following argument.
- E-cigarettes ("vaping") offer a plausible substitute for smoking cigarettes.
- Smoking is dangerous, and vaping is probably much less dangerous.
- Many smokers find it difficult to give up, even if they are motivated to do so. So vaping provides a plausible exit route.
- Observed reductions in the level of smoking can be partially attributed to the availability of alternatives such as vaping. (This is known as the diversion hypothesis.)
It is therefore justifiable to encourage smokers to switch from cigarettes to e-cigarettes.
Critics of this argument make the following points.
- While the dangers of smoking are now well-known, some evidence is now emerging to suggest that vaping may also be dangerous. In the USA, a handful of people have died and hundreds have been hospitalized.
- While some smokers may be diverted to vaping, there are also concerns that vaping may provide an entry path to smoking, especially for young people. This is known as the gateway or catalyst hypothesis.
- Some defenders of vaping blame the potential health risks and the gateway effect not on vaping itself but on the wide range of flavours that are available. While these may increase the attraction of vaping to children, the flavour ingredients are chemically unstable and may produce toxic compounds. For this reason, President Trump has recently proposed a ban on flavoured e-cigarettes.
And elsewhere in the world, significant differences in regulation are emerging between countries. While some countries are looking to ban e-cigarettes altogether, the UK position (as presented by Public Health England and the MHRA) is to encourage e-cigarettes as a safe alternative to smoking. At some point in the future presumably, UK data can be compared with data from other countries to provide evidence for or against the UK position. Professor Simon Capewell of Liverpool University (quoted in the Observer) calls this a "bizarre national experiment".
While we await convincing data about outcomes, ethical reasoning may appeal to several different principles.
Firstly, the minimum interference principle. In this case, this means not restricting people's informed choice without good reason.
Secondly, the utilitarian principle. The benefit of helping a large number of people to reduce a known harm outweighs the possibility of causing a lesser but unknown harm to a smaller number of people.
Thirdly, the cautionary principle. Even if vaping appears to be safer than traditional smoking, Professor Capewell reminds us of other things that were assumed to be safe - until we discovered that they weren't safe at all.
And finally, the conflict of interest principle. Elliott Reichardt, a researcher at the University of Calgary and a campaigner against vaping, argues that any study, report or campaign funded by the tobacco industry should be regarded with some suspicion.
Allan M. Brandt, Inventing Conflicts of Interest: A History of Tobacco Industry Tactics (Am J Public Health 102(1) January 2012) 63–71
Jamie Doward, After six deaths in the US and bans around the world – is vaping safe? (Observer, 15 September 2019)
David Heath, Contesting the Science of Smoking (Atlantic, 4 May 2016)
Levy DT, Warner KE, Cummings KM, et al, Examining the relationship of vaping to smoking initiation among US youth and young adults: a reality check (Tobacco Control 20 November 2018)
Elliott Reichardt and Juliet Guichon, Vaping is an urgent threat to public health (The Conversation, 13 March 2019)
Published on September 16, 2019 09:40