An Updated Round Up of Ethical Principles of Robotics and AI
This blogpost is an updated round up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. I previously listed principles published before December 2017 here; this blogpost appends those principles drafted since January 2018. The principles are listed here (in full or abridged) with notes and references but without critique.
Scroll down to the next horizontal line for the updates.
If there are any (prominent) ones I've missed please let me know.
Asimov's three laws of Robotics (1950)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I have included these to acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly, that many subsequent sets of principles have been drafted as a direct response. The three laws first appeared in Asimov's short story Runaround [1]. This Wikipedia article provides a very good account of the three laws and their many (fictional) extensions.
Murphy and Woods' three laws of Responsible Robotics (2009)
1. A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics.
2. A robot must respond to humans as appropriate for their roles.
3. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws.

These were proposed in Robin Murphy and David Woods' paper Beyond Asimov: The Three Laws of Responsible Robotics [2].
EPSRC Principles of Robotics (2010)
1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2. Humans, not robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy.
3. Robots are products. They should be designed using processes which assure their safety and security.
4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed.

These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3], as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott and Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.
Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)
I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:

6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

An account of the development of the Asilomar principles can be found here.
The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results.

See the ACM announcement of these principles here. The principles form part of the ACM's updated code of ethics.
Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)
Contribution to humanity: Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity.
Abidance of laws and regulations: Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.
Respect for the privacy of others: Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.
Fairness: Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI.
Security: As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control.
Act with integrity: Members of the JSAI are to acknowledge the significant impact which AI can have on society.
Accountability and Social Responsibility: Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed.
Communication with society and self-development: Members of the JSAI must aim to improve and enhance society’s understanding of AI.
Abidance of ethics guidelines by AI: AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.

An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).
Draft principles of The Future Society's Science, Law and Society Initiative (Oct 2017)
1. AI should advance the well-being of humanity, its societies, and its natural environment.
2. AI should be transparent.
3. Manufacturers and operators of AI should be accountable.
4. AI’s effectiveness should be measurable in the real-world applications for which it is intended.
5. Operators of AI systems should have appropriate competencies.
6. The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.

This article by Nicolas Economou explains the six principles with a full commentary on each one.
Montréal Declaration for Responsible AI draft principles (Nov 2017)
Well-being: The development of AI should ultimately promote the well-being of all sentient creatures.
Autonomy: The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
Justice: The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental/physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
Privacy: The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.
Knowledge: The development of AI should promote critical thinking and protect us from propaganda and manipulation.
Democracy: The development of AI should promote informed participation in public life, cooperation and democratic debate.
Responsibility: The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.

The Montréal Declaration for Responsible AI proposes the seven values and draft principles above (here in full with preamble, questions and definitions).
IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)

1. How can we ensure that A/IS do not infringe human rights?
2. Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
3. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable?
4. How can we ensure that A/IS are transparent?
5. How can we extend the benefits and minimize the risks of A/IS technology being misused?

These five general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but as questions, or concerns, together with background and candidate recommendations.
A short article, Why Principles Matter, co-authored with IEEE general principles co-chair Mark Halverson, explains the link between principles and standards, together with further commentary and references.
Note that these principles have been revised and extended, in March 2019 (see below).
UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)
1. Demand That AI Systems Are Transparent
2. Equip AI Systems With an “Ethical Black Box”
3. Make AI Serve People and Planet
4. Adopt a Human-In-Command Approach
5. Ensure a Genderless, Unbiased AI
6. Share the Benefits of AI Systems
7. Secure a Just Transition and Ensuring Support for Fundamental Freedoms and Rights
8. Establish Global Governance Mechanisms
9. Ban the Attribution of Responsibility to Robots
10. Ban AI Arms Race

Drafted by UNI Global Union's Future World of Work, these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency, and application of AI”.
Updated principles...
Lords Select Committee 5 core principles to keep AI ethical (Apr 2018)

1. Artificial intelligence should be developed for the common good and benefit of humanity.
2. Artificial intelligence should operate on principles of intelligibility and fairness.
3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

These principles appear in the UK House of Lords Select Committee on Artificial Intelligence report AI in the UK: ready, willing and able?, published in April 2018. The WEF published a summary and commentary here.
AI UX: 7 Principles of Designing Good AI Products (Apr 2018)

1. Differentiate AI content visually - let people know if an algorithm has generated a piece of content so they can decide for themselves whether to trust it or not.
2. Explain how machines think - help people understand how machines work so they can use them better.
3. Set the right expectations - especially in a world full of sensational, superficial news about new AI technologies.
4. Find and handle weird edge cases - spend more time testing and finding weird, funny, or even disturbing or unpleasant edge cases.
5. User testing for AI products (default methods won’t work here).
6. Provide an opportunity to give feedback.

These principles, focussed on the design of the User Interface (UI) and User Experience (UX), are from Budapest-based company UX Studio.
Google AI Principles (Jun 2018)
1. Be socially beneficial.
2. Avoid creating or reinforcing unfair bias.
3. Be built and tested for safety.
4. Be accountable to people.
5. Incorporate privacy design principles.
6. Uphold high standards of scientific excellence.
7. Be made available for uses that accord with these principles.

These principles were launched with a blog post and commentary by Google CEO Sundar Pichai here.
Microsoft Responsible bots: 10 guidelines for developers of conversational AI (Nov 2018)
1. Articulate the purpose of your bot and take special care if your bot will support consequential use cases.
2. Be transparent about the fact that you use bots as part of your product or service.
3. Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.
4. Design your bot so that it respects relevant cultural norms and guards against misuse.
5. Ensure your bot is reliable.
6. Ensure your bot treats people fairly.
7. Ensure your bot respects user privacy.
8. Ensure your bot handles data securely.
9. Ensure your bot is accessible.
10. Accept responsibility.

Microsoft's guidelines for the ethical design of bots (chatbots or conversational AIs) are fully described here.
A summary – with links – of the ethical AI principles from IBM, Google, Intel and Microsoft (Nov 2018): https://vitalflux.com/ethical-ai-principles-ibm-google-intel/
CEPEJ European Ethical Charter on the use of artificial intelligence (AI) in judicial systems and their environment, 5 principles (Feb 2019)
1. Principle of respect of fundamental rights: ensuring that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights.
2. Principle of non-discrimination: specifically preventing the development or intensification of any discrimination between individuals or groups of individuals.
3. Principle of quality and security: with regard to the processing of judicial decisions and data, using certified sources and intangible data with models conceived in a multi-disciplinary manner, in a secure technological environment.
4. Principle of transparency, impartiality and fairness: making data processing methods accessible and understandable, authorising external audits.
5. Principle “under user control”: precluding a prescriptive approach and ensuring that users are informed actors and in control of their choices.

The Council of Europe ethical charter principles are outlined here, with a link to the ethical charter itself.
Women Leading in AI (WLinAI) 10 recommendations (Feb 2019)
1. Introduce a regulatory approach governing the deployment of AI which mirrors that used for the pharmaceutical sector.
2. Establish an AI regulatory function working alongside the Information Commissioner’s Office and Centre for Data Ethics – to audit algorithms, investigate complaints by individuals, issue notices and fines for breaches of GDPR and equality and human rights law, give wider guidance, spread best practice and ensure algorithms must be fully explained to users and open to public scrutiny.
3. Introduce a new Certificate of Fairness for AI systems alongside a ‘kite mark’ type scheme to display it. Criteria to be defined at industry level, similarly to food labelling regulations.
4. Introduce mandatory AIAs (Algorithm Impact Assessments) for organisations employing AI systems that have a significant effect on individuals.
5. Introduce a mandatory requirement for public sector organisations using AI for particular purposes to inform citizens that decisions are made by machines, explain how the decision is reached and what would need to change for individuals to get a different outcome.
6. Introduce a ‘reduced liability’ incentive for companies that have obtained a Certificate of Fairness to foster innovation and competitiveness.
7. Compel companies and other organisations to bring their workforce with them – by publishing the impact of AI on their workforce and offering retraining programmes for employees whose jobs are being automated.
8. Where no redeployment is possible, compel companies to make a contribution towards a digital skills fund for those employees.
9. Carry out a skills audit to identify the wide range of skills required to embrace the AI revolution.
10. Establish an education and training programme to meet the needs identified by the skills audit, including content on data ethics and social responsibility. As part of that, we recommend the set-up of a solid, courageous and rigorous programme to encourage young women and other underrepresented groups into technology.

These recommendations were presented by the Women Leading in AI group at a meeting in parliament in February 2019; this report in Forbes by Noel Sharkey outlines the group, their recommendations, and the meeting.
The NHS’s 10 Principles for AI + Data (Feb 2019)
1. Understand users, their needs and the context.
2. Define the outcome and how the technology will contribute to it.
3. Use data that is in line with appropriate guidelines for the purpose for which it is being used.
4. Be fair, transparent and accountable about what data is being used.
5. Make use of open standards.
6. Be transparent about the limitations of the data used and algorithms deployed.
7. Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision.
8. Generate evidence of effectiveness for the intended use and value for money.
9. Make security integral to the design.
10. Define the commercial strategy.

These principles are set out with full commentary and elaboration on Artificial Lawyer here.
IEEE General Principles of Ethical Autonomous and Intelligent Systems (A/IS) (Mar 2019)

1. Human Rights: A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
2. Well-being: A/IS creators shall adopt increased human well-being as a primary success criterion for development.
3. Data Agency: A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
4. Effectiveness: A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
5. Transparency: the basis of a particular A/IS decision should always be discoverable.
6. Accountability: A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
7. Awareness of Misuse: A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
8. Competence: A/IS creators shall specify, and operators shall adhere to, the knowledge and skill required for safe and effective operation.

These amended and extended general principles form part of Ethically Aligned Design, 1st edition, published in March 2019. For an overview see the pdf here.
Ethical issues arising from the police use of live facial recognition technology (Mar 2019)
The nine ethical principles relate to: public interest, effectiveness, the avoidance of bias and algorithmic injustice, impartiality and deployment, necessity, proportionality, accountability, oversight, the construction of watchlists, public trust, and cost effectiveness.
Reported here, the UK government's independent Biometrics and Forensics Ethics Group (BFEG) published an interim report outlining nine ethical principles which form a framework to guide policy on police use of live facial recognition systems.
Floridi and Clement-Jones' five principles key to any ethical framework for AI (Mar 2019)
1. AI must be beneficial to humanity.
2. AI must also not infringe on privacy or undermine security.
3. AI must protect and enhance our autonomy and ability to take decisions and choose between alternatives.
4. AI must promote prosperity and solidarity, in a fight against inequality, discrimination, and unfairness.
5. We cannot achieve all this unless we have AI systems that are understandable in terms of how they work (transparency) and explainable in terms of how and why they reach the conclusions they do (accountability).

Luciano Floridi and Lord Tim Clement-Jones set out, in the New Statesman, these five general ethical principles for AI, with additional commentary.
References
[1] Asimov, Isaac (1950): Runaround, in I, Robot, (The Isaac Asimov Collection ed.) Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin; Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems. 24 (4): 14–20.
[3] Margaret Boden et al (2017): Principles of robotics: regulating robots in the real world. Connection Science. 29 (2): 124–129.
[4] Tony Prescott and Michael Szollosy (eds.) (2017): Ethical Principles of Robotics, Connection Science. 29 (2) and 29 (3).
Scroll down to the next horizontal line for the updates.
If there any (prominent) ones I've missed please let me know.
Asimov's three laws of Robotics (1950)
A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly that many subsequent principles have been drafted as a direct response. The three laws first appeared in Asimov's short story Runaround [1]. This wikipedia article provides a very good account of the three laws and their many (fictional) extensions.
Murphy and Wood's three laws of Responsible Robotics (2009)
A human may not deploy a robot without the human-robot work system meeting the highest legal and professional standards of safety and ethics. A robot must respond to humans as appropriate for their roles. A robot must be endowed with sufficient situated autonomy to protect its own existence as long as such protection provides smooth transfer of control which does not conflict with the First and Second Laws. These were proposed in Robin Murphy and David Wood's paper Beyond Asimov: The Three Laws of Responsible Robotics [2].
EPSRC Principles of Robotics (2010)
Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. Humans, not Robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. Robots are products. They should be designed using processes which assure their safety and security. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. The person with legal responsibility for a robot should be attributed. These principles were drafted in 2010 and published online in 2011, but not formally published until 2017 [3] as part of a two-part special issue of Connection Science on the principles, edited by Tony Prescott & Michael Szollosy [4]. An accessible introduction to the EPSRC principles was published in New Scientist in 2011.
Future of Life Institute Asilomar principles for beneficial AI (Jan 2017)
I will not list all 23 principles but extract just a few to compare and contrast with the others listed here:
6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.An account of the development of the Asilomar principles can be found here.
12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results. See the ACM announcement of these principles here. The principles form part of the ACM's updated code of ethics.
Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines (Feb 2017)
Contribution to humanity Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. Abidance of laws and regulations Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.Respect for the privacy of others Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.Fairness Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. Security As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. Act with integrity Members of the JSAI are to acknowledge the significant impact which AI can have on society. Accountability and Social Responsibility Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. Communication with society and self-development Members of the JSAI must aim to improve and enhance society’s understanding of AI.Abidance of ethics guidelines by AI AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.An explanation of the background and aims of these ethical guidelines can be found here, together with a link to the full principles (which are shown abridged above).
Draft principles of The Future Society's Science, Law and Society Initiative (Oct 2017)
AI should advance the well-being of humanity, its societies, and its natural environment. AI should be transparent. Manufacturers and operators of AI should be accountable. AI’s effectiveness should be measurable in the real-world applications for which it is intended. Operators of AI systems should have appropriate competencies. The norms of delegation of decisions to AI systems should be codified through thoughtful, inclusive dialogue with civil society.This article by Nicolas Economou explains the 6 principles with a full commentary on each one.
Montréal Declaration for Responsible AI draft principles (Nov 2017)
Well-being The development of AI should ultimately promote the well-being of all sentient creatures.Autonomy The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.Justice The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental / physical abilities, sexual orientation, ethnic/social origins and religious beliefs.Privacy The development of AI should offer guarantees respecting personal privacy and allowing people who use it to access their personal data as well as the kinds of information that any algorithm might use.Knowledge The development of AI should promote critical thinking and protect us from propaganda and manipulation.Democracy The development of AI should promote informed participation in public life, cooperation and democratic debate.Responsibility The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.The Montréal Declaration for Responsible AI proposes the 7 values and draft principles above (here in full with preamble, questions and definitions).
IEEE General Principles of Ethical Autonomous and Intelligent Systems (Dec 2017)How can we ensure that A/IS do not infringe human rights? Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being. How can we assure that designers, manufacturers, owners and operators of A/IS are responsible and accountable? How can we ensure that A/IS are transparent? How can we extend the benefits and minimize the risks of AI/AS technology being misused? These 5 general principles appear in Ethically Aligned Design v2, a discussion document drafted and published by the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems. The principles are expressed not as rules but instead as questions, or concerns, together with background and candidate recommendations.
A short article co-authored with IEEE general principles co-chair Mark Halverson Why Principles Matter explains the link between principles and standards, together with further commentary and references.
Note that these principles have been revised and extended, in March 2019 (see below).
UNI Global Union Top 10 Principles for Ethical AI (Dec 2017)
Demand That AI Systems Are TransparentEquip AI Systems With an “Ethical Black Box”Make AI Serve People and Planet Adopt a Human-In-Command ApproachEnsure a Genderless, Unbiased AIShare the Benefits of AI SystemsSecure a Just Transition and Ensuring Support for Fundamental Freedoms and RightsEstablish Global Governance MechanismsBan the Attribution of Responsibility to RobotsBan AI Arms RaceDrafted by UNI Global Union's Future World of Work these 10 principles for Ethical AI (set out here with full commentary) “provide unions, shop stewards and workers with a set of concrete demands to the transparency, and application of AI”.
Updated principles...
Lords Select Committee 5 core principles to keep AI ethical (Apr 2018)
Artificial intelligence should be developed for the common good and benefit of humanity.
Artificial intelligence should operate on principles of intelligibility and fairness.
Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
These principles appear in the UK House of Lords Select Committee on Artificial Intelligence report AI in the UK: ready, willing and able?, published in April 2018. The WEF published a summary and commentary here.
AI UX: 7 Principles of Designing Good AI Products (Apr 2018)
Differentiate AI content visually: let people know if an algorithm has generated a piece of content so they can decide for themselves whether to trust it or not.
Explain how machines think: help people understand how machines work so they can use them better.
Set the right expectations, especially in a world full of sensational, superficial news about new AI technologies.
Find and handle weird edge cases: spend more time testing and finding weird, funny, or even disturbing or unpleasant edge cases.
User test AI products (default methods won’t work here).
Provide an opportunity to give feedback.
These principles, focussed on the design of the User Interface (UI) and User Experience (UX), are from Budapest-based company UX Studio.
Google AI Principles (Jun 2018)
Be socially beneficial.
Avoid creating or reinforcing unfair bias.
Be built and tested for safety.
Be accountable to people.
Incorporate privacy design principles.
Uphold high standards of scientific excellence.
Be made available for uses that accord with these principles.
These principles were launched with a blog post and commentary by Google CEO Sundar Pichai here.
Microsoft Responsible bots: 10 guidelines for developers of conversational AI (Nov 2018)
Articulate the purpose of your bot and take special care if your bot will support consequential use cases.
Be transparent about the fact that you use bots as part of your product or service.
Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot’s competence.
Design your bot so that it respects relevant cultural norms and guards against misuse.
Ensure your bot is reliable.
Ensure your bot treats people fairly.
Ensure your bot respects user privacy.
Ensure your bot handles data securely.
Ensure your bot is accessible.
Accept responsibility.
Microsoft's guidelines for the ethical design of 'bots' (chatbots or conversational AIs) are fully described here.
Summary, with links, of ethical AI principles from IBM, Google, Intel and Microsoft (Nov 2018): https://vitalflux.com/ethical-ai-principles-ibm-google-intel/
CEPEJ European Ethical Charter on the use of artificial intelligence (AI) in judicial systems and their environment, 5 principles (Feb 2019)
Principle of respect of fundamental rights: ensuring that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights.
Principle of non-discrimination: specifically preventing the development or intensification of any discrimination between individuals or groups of individuals.
Principle of quality and security: with regard to the processing of judicial decisions and data, using certified sources and intangible data with models conceived in a multi-disciplinary manner, in a secure technological environment.
Principle of transparency, impartiality and fairness: making data processing methods accessible and understandable, authorising external audits.
Principle “under user control”: precluding a prescriptive approach and ensuring that users are informed actors and in control of their choices.
The Council of Europe ethical charter principles are outlined here, with a link to the ethical charter itself.
Women Leading in AI (WLinAI) 10 recommendations (Feb 2019)
Introduce a regulatory approach governing the deployment of AI which mirrors that used for the pharmaceutical sector.
Establish an AI regulatory function working alongside the Information Commissioner’s Office and Centre for Data Ethics to audit algorithms, investigate complaints by individuals, issue notices and fines for breaches of GDPR and equality and human rights law, give wider guidance, spread best practice and ensure algorithms must be fully explained to users and open to public scrutiny.
Introduce a new Certificate of Fairness for AI systems, alongside a ‘kite mark’ type scheme to display it, with criteria to be defined at industry level, similarly to food labelling regulations.
Introduce mandatory AIAs (Algorithm Impact Assessments) for organisations employing AI systems that have a significant effect on individuals.
Introduce a mandatory requirement for public sector organisations using AI for particular purposes to inform citizens that decisions are made by machines, explain how the decision is reached and what would need to change for individuals to get a different outcome.
Introduce a ‘reduced liability’ incentive for companies that have obtained a Certificate of Fairness, to foster innovation and competitiveness.
Compel companies and other organisations to bring their workforce with them, by publishing the impact of AI on their workforce and offering retraining programmes for employees whose jobs are being automated.
Where no redeployment is possible, compel companies to make a contribution towards a digital skills fund for those employees.
Carry out a skills audit to identify the wide range of skills required to embrace the AI revolution.
Establish an education and training programme to meet the needs identified by the skills audit, including content on data ethics and social responsibility; as part of that, the group recommends setting up a solid, courageous and rigorous programme to encourage young women and other underrepresented groups into technology.
Presented by the Women Leading in AI group at a meeting in parliament in February 2019; this report in Forbes by Noel Sharkey outlines the group, their recommendations, and the meeting.
The NHS’s 10 Principles for AI + Data (Feb 2019)
Understand users, their needs and the context.
Define the outcome and how the technology will contribute to it.
Use data that is in line with appropriate guidelines for the purpose for which it is being used.
Be fair, transparent and accountable about what data is being used.
Make use of open standards.
Be transparent about the limitations of the data used and algorithms deployed.
Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision.
Generate evidence of effectiveness for the intended use and value for money.
Make security integral to the design.
Define the commercial strategy.
These principles are set out with full commentary and elaboration on Artificial Lawyer here.
IEEE General Principles of Ethical Autonomous and Intelligent Systems (A/IS) (Mar 2019)
Human Rights: A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
Well-being: A/IS creators shall adopt increased human well-being as a primary success criterion for development.
Data Agency: A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
Effectiveness: A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
Transparency: the basis of a particular A/IS decision should always be discoverable.
Accountability: A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
Awareness of Misuse: A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
Competence: A/IS creators shall specify, and operators shall adhere to, the knowledge and skill required for safe and effective operation.
These amended and extended general principles form part of Ethically Aligned Design, First Edition, published in March 2019. For an overview see the pdf here.
Ethical issues arising from the police use of live facial recognition technology (Mar 2019)
The nine ethical principles relate to: public interest; effectiveness; the avoidance of bias and algorithmic justice; impartiality and deployment; necessity; proportionality; impartiality; accountability; oversight; the construction of watchlists; public trust; and cost effectiveness.
Reported here, the UK government’s independent Biometrics and Forensics Ethics Group (BFEG) published an interim report outlining nine ethical principles that form a framework to guide policy on police use of live facial recognition systems.
Floridi and Clement-Jones' five principles key to any ethical framework for AI (Mar 2019)
AI must be beneficial to humanity.
AI must also not infringe on privacy or undermine security.
AI must protect and enhance our autonomy and ability to take decisions and choose between alternatives.
AI must promote prosperity and solidarity, in the fight against inequality, discrimination and unfairness.
We cannot achieve all this unless we have AI systems that are understandable in terms of how they work (transparency) and explainable in terms of how and why they reach the conclusions they do (accountability).
Luciano Floridi and Lord Tim Clement-Jones set out these five general ethical principles for AI, with additional commentary, in the New Statesman.
References
[1] Asimov, Isaac (1950): Runaround, in I, Robot, (The Isaac Asimov Collection ed.) Doubleday. ISBN 0-385-42304-7.
[2] Murphy, Robin; Woods, David D. (2009): Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems. 24 (4): 14-20.
[3] Margaret Boden et al (2017): Principles of robotics: regulating robots in the real world. Connection Science. 29 (2): 124-129.
[4] Tony Prescott and Michael Szollosy (eds.) (2017): Ethical Principles of Robotics, Connection Science. 29 (2) and 29 (3).
Published on April 18, 2019 04:35
Alan Winfield's Blog