=Paper=
{{Paper
|id=Vol-2484/keynote1
|storemode=property
|title=Principles for the Trustworthy Adoption of AI in Legal Systems: the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
|pdfUrl=https://ceur-ws.org/Vol-2484/keynote1.pdf
|volume=Vol-2484
|authors=Nicolas Economou
|dblpUrl=https://dblp.org/rec/conf/icail/Economou19
}}
==Principles for the Trustworthy Adoption of AI in Legal Systems: the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems==
Nicolas Economou
H5
340 Madison Avenue
19th Floor
New York, NY USA 10173
neconomou@h5.com
ABSTRACT

The advent of artificial intelligence in legal systems spurred laudable efforts to assess its implications, risks, and benefits. Among those efforts, US NIST's TREC Legal Track produced exemplary scholarship on the effectiveness of AI in discovery; other initiatives explored bias in risk-assessment algorithms used in bail or sentencing; and bar associations considered the implications for professional conduct. Yet, a foundational question remained unaddressed: What framework could equip lawyers, judges, advocates, policy makers, and the public, irrespective of legal system or cultural traditions, to determine the extent to which they should trust (or mistrust) the deployment of AI in the legal system? The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, a multiyear, international, multidisciplinary effort focused on the ethics of AI, took on this challenge. This talk, by the Chair of the Initiative's Law Committee, will present the IEEE's recently published proposed norms for the trustworthy adoption of AI in legal systems, outline the objectives of its upcoming work, and place this endeavor in the broader context of international law-focused AI governance endeavors.

In: Proceedings of the First International Workshop on AI and Intelligent Assistance for Legal Professionals in the Digital Workplace (LegalAIIA 2019), held in conjunction with ICAIL 2019. June 17, 2019. Montréal, QC, Canada.

Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). Published at http://ceur-ws.org.

Keynote Presentation

The advent of artificial intelligence in legal systems since the early 2000s spurred laudable efforts to assess its implications, risks, and benefits. Among those, US NIST's seminal TREC Legal Track studies produced exemplary scholarship on the effectiveness of AI in discovery. Several initiatives explored bias in risk-assessment algorithms used in bail or sentencing. Bar associations considered the implications for professional conduct. Yet, a foundational question remained unaddressed: what framework and instruments could equip lawyers, judges, advocates, policy makers, and the public, irrespective of legal system or cultural traditions, to determine the extent to which they should trust (or mistrust) the deployment of AI in legal systems?

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, a multiyear, international, multidisciplinary effort focused on the ethics of artificial intelligence, took on this challenge. The IEEE, which traces its roots back to Thomas Edison and Alexander Graham Bell, is a global technology think tank and one of the world's leading standards-setting bodies. The IEEE Global Initiative's mission is "to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the
benefit of humanity." In early 2019, the Global Initiative published its treatise, Ethically Aligned Design, First Edition ("EAD"), which sets forth the high-level ethical principles, key issues, and recommendations to advance this mission.

When it comes specifically to the trustworthy adoption of artificial intelligence in legal systems and the practice of law, the IEEE Global Initiative's Law Committee sought to answer this central question: "When it comes to legal systems, to what extent should society delegate to intelligent machines decisions that affect people?"

The IEEE Law Committee's EAD chapter proposes that a definition of "Informed Trust" is necessary in order to answer this question and that this definition must meet certain design constraints. Specifically, it needs to rest on a single set of principles that are:
• Individually necessary and collectively sufficient
• Applicable to the totality of the legal system
• Globally applicable but culturally flexible
• Considerate of the legal system as an institution accountable to the citizen (so as to avoid solely considering professional ethics, judicial ethics, etc.)
• Capable of being operationalized

The IEEE Law Committee concluded that four principles fulfill the above design conditions in defining "Informed Trust" in the adoption (or avoidance of adoption) of AI in legal systems and the practice of law:
1. Effectiveness
2. Competence
3. Accountability
4. Transparency

Those principles are outlined below.

Principle 1: Evidence of Effectiveness

An essential component of trust in a technology is trust that it in fact works and succeeds in meeting the purpose for which it is intended. The principle of effectiveness, by requiring the collection and disclosure of evidence of the effectiveness of AI-enabled systems applied to legal tasks, is intended to ensure that stakeholders have the information needed to form a well-grounded trust that the systems being applied can meet their intended purposes. In order for the practice of measuring effectiveness to realize its potential for fostering trust and mitigating the risks of uninformed adoption and uninformed avoidance of adoption, it must have certain features: meaningful metrics that are practically feasible and actually implemented; sound methods; valid data; awareness and consensus; and transparency.

Principle 2: Competence

An essential component of informed trust in a technological system, especially one that may affect us in profound ways, is confidence in the competence of the operator(s) of the technology. We trust surgeons or pilots with our lives because we have confidence that they have the knowledge, skills, and experience to apply the tools and methods needed to carry out their tasks effectively. We have that confidence because we know that these operators have met rigorous professional and scientific accreditation standards before being allowed to step into the operating room or cockpit. This informed trust in operator competence is what gives us confidence that surgery or air travel (or even a plumbing repair!) will result in the desired outcome. No such standards of operator competence currently exist with respect to AI applied in legal systems, where the life, liberty, and rights of citizens can be at stake. Such standards are both indispensable and considerably overdue.
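The effectiveness principle above calls for meaningful metrics that are practically feasible and actually implemented. In AI-assisted document review, the setting studied by the TREC Legal Track, effectiveness is conventionally summarized by recall, precision, and their harmonic mean (F1), estimated from a manually labeled validation sample. The following sketch is purely illustrative and is not part of the IEEE's proposed norms; the function names and the sample counts are hypothetical.

```python
# Illustrative sketch (not from the IEEE norms): estimating the
# effectiveness of an AI-assisted document review from a manually
# labeled validation sample, using the metrics studied in the
# TREC Legal Track.

def recall(tp: int, fn: int) -> float:
    """Fraction of truly responsive documents the system retrieved."""
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    """Fraction of retrieved documents that are truly responsive."""
    return tp / (tp + fp)

def f1(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Hypothetical validation sample: 200 of the sampled documents are
# truly responsive; the system retrieved 180 of them (tp), missed
# 20 (fn), and also retrieved 60 non-responsive documents (fp).
tp, fp, fn = 180, 60, 20

print(f"recall    = {recall(tp, fn):.2f}")     # 180/200 -> 0.90
print(f"precision = {precision(tp, fp):.2f}")  # 180/240 -> 0.75
print(f"F1        = {f1(tp, fp, fn):.3f}")     # -> 0.818
```

In an EAD-style disclosure regime, such point estimates would be reported together with the sampling method and its uncertainty, so that stakeholders can judge whether a claim of effectiveness is in fact well grounded.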
Principle 3: Accountability

An essential component of informed trust in a technological system is confidence that it is possible, if the need arises, to apportion responsibility among the human agents engaged along the path of its creation and application: from design through to development, procurement, deployment, operation, and, finally, validation of effectiveness. Unless there are mechanisms to hold the agents engaged in these steps accountable, it will be difficult or impossible to assess responsibility for the outcome of the system under any framework, whether a formal legal framework or a less formal normative framework. A model of AI creation and use that does not have such mechanisms will also lack important forms of deterrence against poorly thought-out design, casual adoption, and inappropriate use of AI.

Principle 4: Transparency

An essential component of informed trust in a technological system is confidence that the information required for a human to understand why the system behaves a certain way in a specific circumstance (or would behave in a hypothetical circumstance) will be accessible. Without appropriate transparency, there is no basis for trusting that a given decision or outcome of the system can be explained, replicated, or, if necessary, corrected. Without appropriate transparency, there is no basis for informed trust that the system can be operated in a way that achieves its ends reliably and consistently, or that the system will not be used in a way that impinges on human rights. In the case of AI applied in a legal system, such a lack of trust could undermine the credibility of the legal system itself.

An effective implementation of the transparency principle will ensure that the appropriate information is disclosed to the appropriate stakeholders to meet appropriate information needs, striking a balance between legitimate grounds for withholding information (privacy, security, intellectual property) and the needs of a legitimate inquiry into the design and operation of an AI-enabled system.

Next Steps – From Principles to Practice

With these principles established, the IEEE will seek to develop instruments, such as standards and certifications, that can serve as a "Currency of Trust" which lawyers, judges, procurement officers, policy makers, advocates, and the public can use in determining the extent to which AI-enabled systems and their operators meet certain criteria or claims. In this regard, the IEEE has established The Ethics Certification Program for Autonomous and Intelligent Systems, which will progressively develop such instruments.

It should be noted that, independently of but nearly simultaneously with the IEEE's work, the Council of Europe published the first Ethical Charter promulgated by an intergovernmental organization for the use of artificial intelligence in judicial systems and their environment. The prominence of the Council of Europe renders this work of particular importance to stakeholders in legal systems globally. The Council of Europe, in the context of an international multi-stakeholder roundtable on AI and the Rule of Law, recently launched a project for the certification of artificial intelligence in the light of the Charter, further strengthening the global impetus for trustworthy norms for AI in the law.

About the Author

Nicolas Economou is the chief executive of H5 and was a pioneer in advocating the application of scientific methods to electronic discovery. He chairs the Law Committees of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and of the Global Governance of AI Roundtable hosted in Dubai as part of the annual World Government Summit. He leads The Future Society's Law Initiative and is a member of the
Council on Extended Intelligence (CXI), a joint
initiative of the MIT Media Lab and IEEE-SA.
He has spoken on issues pertaining to artificial
intelligence and its governance at a wide variety
of conferences and organizations, including the
Spring Meetings of the International Monetary
Fund (IMF), UNESCO, Harvard and Stanford
Law Schools, and Renmin University of China.
Trained in political science at the Graduate
Institute of International Studies of the University
of Geneva (Switzerland), he earned his M.B.A.
from the Wharton School of Business, and chose
to forgo completion of his M.P.A. at Harvard's
Kennedy School in order to co-found H5.