Autonomous and Intelligent Systems (A/IS) have the potential to affect societies’ foundations—governance, law, finance, education, healthcare, public safety, employment, and other facets. A/IS applications are emerging rapidly and in many countries, yet an accepted framework to guide ethical A/IS research, development, and use does not yet exist. Establishing such a framework will help ensure A/IS satisfy local and national needs as well as international norms and standards.
The Policy Committee is part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which focuses on ethics and A/IS governance (policy, law, regulations). This Committee has developed a rights-based framework for evaluating positive and negative aspects of A/IS, and policy recommendations that help achieve A/IS that are beneficial to society.
A rights-based framework for A/IS governance
The potential for A/IS to harm individuals’ economic, legal, or civil rights is evident in recent cases of automated discrimination involving A/IS technologies applied to hiring practices, predictive policing, judicial sentencing, and the delivery of goods or information. Whether or not these consequences are intended in the design of the A/IS, societies need a framework that protects individuals’ rights and supports the development of beneficial A/IS.
An effective framework will benefit individuals and society by promoting safety, protecting privacy and human rights, and educating the public on the potential effects of A/IS on society. Educating the development community and government officials on A/IS ethical considerations will promote greater use of, and benefit from, A/IS.
The Policy Committee established five objectives for a rights-based framework for A/IS governance:
- Support, promote, and enable internationally recognized legal norms
- Develop workforce expertise in A/IS technology
- Ensure governance and ethics are core components in A/IS research, development, acquisition, and use
- Create A/IS policies to ensure public safety and responsible A/IS system design
- Educate the public on societal impacts of A/IS
Support, promote, and enable internationally recognized legal norms
The “United Nations Guiding Principles on Business and Human Rights” establishes principles for corporate human rights responsibility. Also known as the Ruggie principles, these provide an accepted basis for businesses and governments to measure the effect of technology on individuals, and to guide the development of A/IS technologies and systems. When implemented, the Principles work to protect internationally recognized individual and societal rights. For A/IS ethical considerations, several principles can be derived from them. These address:
- Responsibility of the duty bearers to realize all human rights
- Accountability of the duty bearers to represent the greater public interest
- Participation by all interested parties in A/IS development and governance
- Non-discrimination as an underlying value of A/IS practice
- Empowerment of rights holders
- Corporate responsibility to comply with a rights-based approach
Develop workforce expertise in A/IS technology
A/IS technologies and applications will infuse many elements of society and will create important changes to the processes governing interactions among its constituent parts. Effective governance will require a broad base of A/IS experts among those developing the technologies, applications, and governing policies. The fast-evolving character of A/IS underscores both the importance and the challenge of maintaining an expert workforce.
Ensure governance and ethics are core components in A/IS research, development, acquisition, and use
The global interest in A/IS will likely promote international collaboration among many institutions. Beyond international standards for data and system design, there will be an increasing need for standards concerning both the internal design and the outward effects of A/IS. Some of the challenging new areas will include the verifiability of the internal logic of an A/IS, the societal benefits of A/IS, and the risks of A/IS use. To ensure A/IS are accepted by individuals and society, standards for the ethical design and development of A/IS are important. IEEE is currently developing several standards concerning the ethical development of A/IS.
Create A/IS policies to ensure public safety and responsible A/IS system design
Public safety and public trust are central to the acceptance and growing use of A/IS. Policies that promote innovation and cooperative development also can promote the development and use of standards. Because A/IS technologies rely on algorithms encoded in software, there are several challenges to their verifiability. The A/IS systems may lack transparency or explainability. Accountability is diminished when the algorithm developer is not associated with the developer of the end system. Safety cannot be certified if standards are lacking and if testing authorities have not been established. Addressing these challenges benefits from collaboration among nations and the involvement of a wide set of experts educated in the ethics of A/IS.
Educate the public on societal impacts of A/IS
Given the technical nature of A/IS technologies, creating a well-informed and engaged public is an important step toward society’s acceptance of A/IS. For similar reasons, practitioners, policy-makers, and other decision-makers must develop expertise in the technologies, their risks and benefits, and the associated legal and human-rights issues. Because A/IS applications may affect so many aspects of society, educational strategies must continue to expand to more areas, more countries, and more groups of participants and users.
There are several criteria for the successful evolution of A/IS:
- A rights-based framework for A/IS governance. This framework and its policy objectives help guide the ethical development of A/IS.
- International participation and support. The ethical development of A/IS must be informed by principles that have international support, while also allowing nations and local regions to focus on their respective priorities.
- Measurable standards of performance. A/IS must gain public support through measurable standards of its internal logic and its external effects.
For information on how to join The IEEE Global Initiative, visit the Initiative’s website.
The views of The Policy Committee reflect the expert opinions of committee members but do not formally represent official policy positions of IEEE.
- United Nations. Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework. Office of the United Nations High Commissioner for Human Rights. New York and Geneva: UN, 2011.
- Holdren, J., and M. Smith. “Preparing for the Future of Artificial Intelligence.” Washington, DC: Executive Office of the President, National Science and Technology Council, 2016.
- Stanford University. “Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence.” Stanford, CA: Stanford University, 2016.
- Networking and Information Technology Research and Development (NITRD) Program. “The National Artificial Intelligence Research and Development Strategic Plan.” Washington, DC: Office of Science and Technology Policy, 2016.
Peter Brooks is a member of the Institute for Defense Analyses and Co-chair of the Policy Committee of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
Mina Hanna is the Chair of the IEEE-USA Artificial Intelligence and Autonomous Systems Policy Committee and Co-chair of the Policy Committee of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.