Addressing Legal Issues Around Autonomous and Intelligent Systems with Agility

Autonomous and intelligent systems are developing rapidly, and much of the world is struggling with outdated governance mechanisms as it tries to ensure that powerful innovations leveraging artificial intelligence actually work for the benefit of humanity. The instinctive approach tends to be to ask, “What laws do we have on the books that might fit?”

But there is clearly a pressing need for a fundamentally more nimble and agile approach to governing development and business around these emerging technologies. Traditional governance structures and policy-making models are being transformed by the rise of faster, more adaptive techniques, such as globally open standards development, that better anticipate and accommodate the new legal issues created by autonomous and intelligent systems.

Traditional Governance

Assessing the legal issues introduced by autonomous and intelligent systems is complex. It is especially complex from a global perspective, because jurisdictions regard these issues in dramatically different ways, reflecting the varied legal frameworks in place around the world. There are essentially three very different types of law in place globally:

  • common law, which is found mainly in the United States and other former colonies of the United Kingdom;
  • civil law, which is the world’s dominant system, having developed in Europe and spread to European colonies; and
  • mixed-law systems, such as those found in China, India and Russia, which tend to intermingle laws inherited from successive systems of government.

Especially in the world’s common-law jurisdictions, there is an often-espoused view that “we can rely upon the law to sort it out.” The truth, however, is that the traditional governance frameworks—such as common-law systems, with their lengthy processes of acts of parliament, judicial reviews, appeals, etc.—are proving not nearly nimble enough to keep up with the furious pace of innovation around artificial intelligence.

Rethinking Approaches

With this need becoming more pressing every day as autonomous and intelligent systems proliferate, the World Economic Forum has taken note of an “agile governance” model that is gaining prominence around the world:

We define agile governance as adaptive, human-centred, inclusive and sustainable policy-making, which acknowledges that policy development is no longer limited to governments but rather is an increasingly multistakeholder effort. It is the continual readiness to rapidly navigate change, proactively or reactively embrace change and learn from change, while contributing to actual or perceived end-user value.1

Essentially, agile governance begins with the question, “What can we put in place swiftly so that we can actually govern this technology today?”

One such answer would be globally open technology standards, such as those that the IEEE is creating. The IEEE standards-development process is rooted in consensus, due process, openness, right to appeal and balance, and it adheres to and supports the principles and requirements of the World Trade Organization’s Decision on Principles for the Development of International Standards, Guides and Recommendations. And IEEE is committed to inclusivity—open in membership, in participation and in governance. Any individual or company with a technical idea may start an IEEE standards-development project.

The world’s prominent methods for developing and refining consensus technology standards tend to be more efficient, flexible and adaptive than the world’s legislative processes. Plus, standards offer a globally available means of addressing problems—somebody using the standard could be based anywhere in the world. In these ways, standards create a valuable, efficiently achieved degree of global governance for emergent technologies.

Standards in Action

IEEE P7003™, Algorithmic Bias Considerations, is a good example. The fast growth of algorithm-driven services around the world has led to mounting concerns about potential unintended and undesirable biases within autonomous and intelligent systems. IEEE P7003 is being created to provide certification-oriented methodologies that deliver clearly articulated accountability and clarity around how algorithms target, assess and influence users and other stakeholders.

IEEE P7003 is being drafted to describe specific methodologies for certifying how individuals or organizations creating algorithms, largely with regard to autonomous or intelligent systems, worked to address and eliminate issues of “negative bias.” For example, an algorithm could exhibit such bias by relying on overly subjective or uninformed data sets, or on information that is inconsistent with legislation concerning certain protected characteristics (such as race, gender and sexuality). Possible elements of the in-development IEEE P7003 could include the following (a hypothetical sketch of the first element appears after this list):

  • benchmarking procedures and criteria for selection of validation data sets for bias quality control;
  • guidelines for establishing and communicating application boundaries for which an algorithm has been designed and validated to guard against unintended consequences arising from out-of-bound application of algorithms; and
  • suggestions for managing user expectation to mitigate bias due to incorrect interpretation of systems outputs by users (e.g., correlation vs. causation).
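
To make the first of those possible elements concrete, here is a minimal, hypothetical sketch in Python of the kind of bias quality-control benchmark such a procedure might run against a validation data set. The disparate-impact metric, the four-fifths (0.8) threshold and every name in the code are illustrative assumptions, not provisions of the draft standard.

```python
# Illustrative only: a simple bias quality-control check of the sort a
# benchmarking procedure might apply to a validation data set. The metric,
# threshold and names are assumptions, not taken from IEEE P7003.
from collections import defaultdict

def disparate_impact_ratio(outcomes):
    """Return (ratio, rates): the ratio of the lowest to the highest
    favorable-outcome rate across groups, plus the per-group rates.

    outcomes: iterable of (group_label, favorable: bool) pairs.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, is_favorable in outcomes:
        totals[group] += 1
        favorable[group] += int(is_favorable)
    rates = {g: favorable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical validation data: (group, did the algorithm grant the loan?)
validation = ([("A", True)] * 80 + [("A", False)] * 20
              + [("B", True)] * 50 + [("B", False)] * 50)

ratio, rates = disparate_impact_ratio(validation)
print(f"favorable-outcome rates by group: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule" used in some U.S. selection guidance
    print("WARNING: possible negative bias; flag data set and model for review")
```

A real certification methodology would go further, specifying how the validation data set itself is selected and documented, which is exactly the kind of criterion the working group’s list above contemplates.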

Certification under the in-development IEEE standard ultimately is intended to help algorithm creators to communicate that up-to-date best practices were used in design, testing and evaluation of their innovations, so as to avoid unjustified differential impact on users. This makes IEEE P7003 of terrific potential value, for example, to the world’s regulatory authorities charged with working to ensure autonomous and intelligent systems improve, or at least do not harm, the wellbeing of people within their jurisdictions.

Other Mechanisms

Like IEEE P7003, ethics-oriented technology standards are tremendous tools for expanding governance and rendering it more agile, a need that the World Economic Forum has identified as increasingly urgent:

New models of public-private collaborative governance are needed to expand governance beyond existing public sector institutions.2

Industry self-regulation is a similarly valuable mechanism. Stakeholders can be convened to collaboratively agree on particular methods for self-regulation by individual companies, such as price controls, market-entry conditions, product requirements, standard contract terms, environmental controls, safety regulations or advertising and/or labeling requirements. While there are potential drawbacks to relying on systems in which companies define and enforce their own ethical accountability, self-regulation can enable consumers to be protected sooner than through government legislation, and it allows guidelines to evolve more gracefully and fluidly with the pace of innovation.

Rethinking the approach and role of regulators is another way that governance can be made more agile. Conventionally, regulators operate in a mode of waiting for something to go wrong and then punishing violations and/or revising guidelines. Given the power and speed of artificial intelligence technologies, however, we cannot afford to wait for things to go wrong first before making corrections. So a more agile approach in the case of regulation around autonomous and intelligent systems could be to create more of a front-end regulator who would work with companies and certify algorithms before they are commercialized and deployed.

Finally, there is a need to drive transparency and trust into technology innovation at its very roots. This is where academia can play a crucial role—those who are teaching about artificial intelligence have a responsibility to think about how they teach about the ethical development, deployment and commercialization of autonomous and intelligent systems.

Conclusion

Now is a crucial time for developing and spreading more agile governance models.

It is a conversation that I have been watching unfold since the early 2000s, and it is encouraging that there has been a rapid maturing of the conversation in just the last five years. Where those of us who are lawyers or social scientists once were perhaps not so visible at artificial intelligence conferences, we are now. There is much more of an open, generous flow of information and lessons learned that is transpiring, and there is a clear recognition—among developers of autonomous and intelligent systems, civil society, legislators, industry bodies and academics alike—that this problem must be tackled.

It is critical that all stakeholders find a place to engage, to ensure their unique experiences and expertise are applied in ensuring artificial intelligence works to the benefit of human wellbeing all around the world.

References

  1. World Economic Forum, Agile Governance: Reimagining Policy-Making in the Fourth Industrial Revolution. Retrieved 18 May 2018 from https://www.weforum.org/whitepapers/agile-governance-reimagining-policy-making-in-the-fourth-industrial-revolution
  2. World Economic Forum, Agile Governance: Reimagining Policy-Making in the Fourth Industrial Revolution. Retrieved 18 May 2018 from https://www.weforum.org/whitepapers/agile-governance-reimagining-policy-making-in-the-fourth-industrial-revolution

Kay Firth-Butterfield is head of artificial intelligence and machine learning at the World Economic Forum’s Center for the Fourth Industrial Revolution, vice chair of the IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems, an expert advisor to ADA AI (https://ada-ai.org/), a Fellow of the Centre for the Future of Intelligence (http://lcfi.ac.uk/) and a member of the technical advisory group to the Foundation for Responsible Robotics.

