Injecting Ethical Considerations in Innovation Via Standards – Keeping Humans in the AI Loop

Not long ago, technology was its own industry sector, alongside healthcare, education, manufacturing, entertainment and others. No more. Technology now cuts horizontally across every vertical sector we might imagine. Technology is healthcare. Technology is education. It is manufacturing, entertainment and so on: more and more, an intrinsic, inseparable element of nearly every field of human endeavor globally.

And yet the people whose values and worldview underlie technology have continued to represent a fairly narrow band of experiences and assumptions. This is changing, but it must change more quickly if technological innovation, particularly in autonomous and intelligent systems, is not to outpace cultural evolution and drag the world back to outdated ways of thinking we do not want to revisit.

A couple of IEEE P7000™ standards-development projects, growing out of the work of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, exemplify the gathering push to prioritize ethical considerations in innovation and to help ensure technology works to the benefit of more and more humans around the world.

Toward Safe, Accountable Care for Child Data

To date, no unified, global standard has addressed the safe and accountable care of student or child data.

Schools use data to track and monitor academic progress. Student Information Systems (SIS) maintain private, confidential and sometimes very sensitive personal information on students and their families. Additional systems and applications are used to improve efficiency and facilitate classroom collaboration and communication. At home, children use technology leveraging highly sensitive data assets that represent them as unique citizens of the world.

To make informed choices about their child’s digital identity, parents must know who has access to their child’s data, what is collected and why, and how it will be used, and they must have clear expectations about when the data will be deleted and destroyed. Parents need to know they can protect their children’s data as they would their physical safety, with trusted tools from accredited partners aiding in the proper collection, usage, storage and removal of child and youth data. This includes accounting for how long certain data is needed for various transactions, obtaining express consent about who should receive the data before transactions take place, and providing the technical means to remove data from servers at a parent’s discretion if they feel their child’s data has been misused.

Development of IEEE P7004™, Draft IEEE Standard for Child and Student Data Governance, which began in 2017, targets this need. The standard is being written to define governance and certification processes for organizations handling child and student data, to ensure transparency and accountability as they relate to the safety and wellbeing of children, their parents, the educational institutions where they are enrolled, and the communities and societies where they spend their time, both on- and offline. IEEE P7004 is intended to propose specific methodologies to help users certify how they approach accessing, collecting, storing, utilizing, sharing and destroying child and student data. It is also intended to define metrics and conformance criteria for these types of uses from trusted global partners, and to describe how vendors and educational institutions can meet them.

Keeping Humans in the AI Loop

The growth of artificial intelligence (AI) creates a risk that machine-to-machine decisions could be made with no transparency to humans. To avoid this, and to ensure AI is developed ethically, individuals need to be able to influence and determine the values, rules and inputs that guide the development of personalized algorithms directly related to their identity.

An approach that enables a personalized, human-in-the-loop AI agent, one that acts as a proxy for machine-to-machine decisions and negotiates individual rights and agency within a system of shared social norms, ethics and human rights, would enable individuals to safely organize and share their personal information at a machine-readable level. IEEE P7006™, Draft IEEE Standard for Personal Data Artificial Intelligence (AI) Agent, is intended to define a framework for creating and granting access to such a personalized AI agent. It is being written to propose a principled and ethical basis for the development of a personal AI agent that will enable trusted access to personal data and increased human agency. It also aims to articulate how data, access and permission can be granted to government, commercial or other actors, while allowing for technical flexibility, transparency and informed consent for individuals.

The standard’s development, which began in 2017, is particularly timely given the rise of regulations such as the European Union (EU) General Data Protection Regulation (GDPR), which requires organizations to demonstrate compliance in how they handle EU citizens’ data and to empower individuals in data exchanges, or risk heavy fines of up to 4 percent of annual global turnover. If individuals around the world can be empowered through a version of AI that is coordinated with GDPR and other privacy regulations, they can more effectively direct how their data, devices and location are accessed and used. In this light, IEEE P7006 is a promising tool for helping align autonomous and intelligent systems with the privacy and other ethical considerations of their human users.


Technologists globally have a responsibility to think carefully about both the innovation of which they are part today and the body of solutions, standards and engineering approaches they have helped bring about over the years. Do these technologies adequately account for the ethical considerations of people around the globe, especially communities that have traditionally been marginalized and perhaps abstracted away from the working assumptions on which innovation is predicated? Technology is too interlaced throughout the fabric of global life, and autonomous and intelligent systems are developing too quickly, for us not to be substantially more intentional about the ethical question.

IEEE is a globally open, equitable platform that is ideally suited for elevating individual, community and societal values as a key priority in the development of human-aligned autonomous and intelligent systems. Standards-development projects such as IEEE P7004 and IEEE P7006 are examples of IEEE efforts to incorporate historically overlooked aspects of human wellbeing that may not automatically be considered in the design and manufacture of the innovations that are dramatically reshaping our world.

Katryna Dow is chief executive officer and founder of Meeco and chair of the IEEE P7006 Personal Data AI Agent Working Group.

Marsali Hancock is co-founder and chief executive officer of the EP3 Foundation and chair of the IEEE P7004 Working Group for Child and Student Data Governance.

Opinions expressed are those of the authors and not necessarily representative of IEEE-USA’s policies or positions.
