=Paper=
{{Paper
|id=Vol-2844/ethics7
|storemode=property
|title=Artificial Intelligence as Evidence in Criminal Trial
|pdfUrl=https://ceur-ws.org/Vol-2844/ethics7.pdf
|volume=Vol-2844
|authors=Eftychia Bampasika
|dblpUrl=https://dblp.org/rec/conf/setn/Bampasika20
}}
==Artificial Intelligence as Evidence in Criminal Trial==
Eftychia Bampasika, LL.M.
Doctoral Researcher, Member of the Otto Hahn Research Group on Alternative Criminal Justice
Max Planck Institute for the Study of Crime, Security and Law
Günterstalstraße 73, 79100, Freiburg i. Br.
e.bampasika@csl.mpg.de
ABSTRACT

This paper touches upon the intertwining of AI technology and criminal justice systems and assesses especially the issue of using AI as an evidence-generating mechanism in criminal trials. The paper revolves, in particular, around three focal points. Firstly, it sets the context for the following analysis and gives a short definition of AI. Secondly, it examines some thorny parameters of the evidentiary proceedings and focuses on the most important AI weaknesses that could jeopardise the smooth incorporation of AI in the criminal justice systems. Thirdly, it presents the ways in which AI could affect basic procedural rights of the defendant and concludes with some safety requirements and suggestions that could facilitate the transition to an AI-criminal-justice era.

KEYWORDS

Artificial Intelligence; Criminal Justice; Evidence; Procedural Rights

1. INTRODUCTION

It is trite to say that Artificial Intelligence (AI) will reshape and is indeed already reshaping many aspects of our reality, and yet it is true. The digital transformation of the global society due to AI does not leave the justice systems around the world unaffected, bringing at this very moment the first challenges for crime control and criminal justice to the surface.

As crime becomes more and more complex, sophisticated and opaque, it is extremely difficult for the law enforcement agencies to detect certain criminal behaviours and find their operational patterns. This fact has a negative impact on the social credibility of traditional justice systems.1 In this context, the use of AI in the criminal justice system may prove to be of strategic importance and a game changer for prevention, investigation, fact finding and procedural economy. This in turn calls for a deeper understanding of AI's functions and operation processes within the scope of criminal justice. Given that software programmes for predictive policing,2 predictive analytics and face recognition are already being used in police departments in the U.S.,3 Europe4 and China,5 and criminal justice systems have begun to use machine learning to assist in investigations of fraud and other white-collar crimes, it is imperative that legal scholars begin to dig into this uncharted and unlegislated territory. This paper focuses on the use of AI as evidence in the context of the traditional criminal trial. After briefly dealing with the definition of AI, the paper outlines the use of AI as an evidence-generating mechanism, examines the procedural rights of the defendant in view of the problems inherent to AI technology and concludes with the proposed solutions and guidelines.

2. DEFINITION OF ARTIFICIAL INTELLIGENCE

In the field of criminal law, there is a long-standing, close bond between criminal justice and technology. Over the past 150 years, criminal courts have deployed the so-called 'machine evidence' in order to form and support their verdict, and the 'silent testimony of instruments' has supplemented the testimony of humans.6 One could just think of toxicology, ballistics, anthropometry, fingerprints, the Uhlenhuth test, maturation, forensic graphology and DNA testing.

1 On this matter, see U Sieber, V Mitsilegas, C Mylonopoulos, E Billis and N Knust (Eds), Alternative Systems of Crime Control: National, Transnational, and International Dimensions (Berlin, Duncker & Humblot, 2018).
2 For an overview of the use of predictive policing software around the world, see: https://privacyinternational.org/examples/predictive-policing.
3 See B Kartheuser, 'Kontrolle Ist Gut, Überwachung Ist Besser' Der Spiegel (27 January 2018), available at: https://www.spiegel.de/panorama/justiz/predictive-policing-in-los-angeles-kontrolle-ist-gut-ueberwachung-ist-besser-a-1188578.html; K Hao, 'Police Across the US Are Training Crime-Predicting AIs on Falsified Data' The MIT Technology Review (13 February 2019), available at: https://www.technologyreview.com/2019/02/13/137444/predictive-policing-algorithms-ai-crime-dirty-data/; ES Levine, J Tisch, A Tasso and M Joy, 'The New York City Police Department's Domain Awareness System' (2017) 47(1) INFORMS Journal on Applied Analytics 1‒15.
4 Meeting Report: PHRP Expert Meeting on Predictive Policing (20 September 2019), available at: https://ec.europa.eu/knowledge4policy/publication/meeting-report-phrp-expert-meeting-predictive-policing_en; F Jansen, Report on 'Data Driven Policing in the Context of Europe' (7 May 2018), available at: https://datajusticeproject.net/wp-content/uploads/sites/30/2019/05/Report-Data-Driven-Policing-EU.pdf; Bundeskriminalamt, 'Das Programm "Polizei 2020"', available at: https://www.bka.de/DE/UnsereAufgaben/Ermittlungsunterstuetzung/ElektronischeFahndungsInformationssysteme/Polizei2020/Polizei2020_node.html; S Egbert, 'Siegeszug der Algorithmen? Predictive Policing im Deutschsprachigen Raum' Bundeszentrale für Politische Bildung (4 August 2017), available at: http://www.bpb.de/apuz/253603/siegeszug-der-algorithmen-predictive-policing-im-deutschsprachigen-raum?p=all.
5 eg, B Schmidt, 'Hong Kong Police Already Have AI Tech That Can Recognize Faces' Bloomberg (22 October 2019), available at: https://www.bloomberg.com/news/articles/2019-10-22/hong-kong-police-already-have-ai-tech-that-can-recognize-faces; P Mozur, 'One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority' The New York Times (14 April 2019), available at: https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html.
6 MR Damaška, Evidence Law Adrift (New Haven, Yale University Press, 1997) 143.

WAIEL2020, September 3, 2020, Athens, Greece
Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
The striking difference, though, between AI—in the form of face, voice or video recognition, machine learning for the detection of fraud or other crimes, etc.—and the forensic methods of past decades is that machines, then, whenever put into use, operated according to rules that humans painstakingly programmed by hand.7 By contrast, an offshoot of AI, namely machine learning, refers to a programme's ability to extract patterns from raw data.8 The machine now has the ability to keep improving its performance without humans having to explain exactly how to accomplish a task.9 Indeed, many times the programmer herself cannot account for how the machine came to a particular result, even if the result is accurate.10

For the purposes of this paper, we adopt the general definition of AI of the High-Level Expert Group on Artificial Intelligence (AI HLEG), specifically that of its AI Ethics Guidelines:

'Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).'11

Core to the concept of AI, as stated above, is the notion of an agent capable of taking relatively autonomous decisions, depending on its perception and cognition of its environment. The emphasis on agency implies that we are not dealing with a rigid execution of rules but with systems capable of learning how to improve their performance on the basis of feedback.12 When it comes down to the use of such programmes as evidence in a criminal trial, and potentially as a basis for the subsequent acquittal or conviction of the defendant, the focus must thus be on the compatibility of this particular characteristic of AI with the traditional purposes and guarantees of the criminal trial and the evidentiary process.

3. CRIMINAL TRIAL AND EVIDENTIARY PROCESS

Adjudicative fact-finding as such is bound to be conducted in conditions of uncertainty. The evidentiary process of the trial rests upon probabilities, not certainties, and hence involves a risk of error. That is why it is impossible to eliminate erroneous convictions and acquittals.13 The courts may aspire to ascertain the truth, but at the end of the day they must come to a decision.14 For this reason, the criminal courts often turn to science, in order to help them reach a verdict that is, as much as possible, objective and fact-based. As already mentioned, the use of machines, scientific evidence and expert witnesses in the evidentiary proceedings is not a novelty for criminal justice. Equally old is the fact that scientific evidence might sometimes get it wrong. By their conduct, courts have expressed, over the years, a tolerance for some level of both ignorance and risk in machine evidence: ignorance of how these processes work, and risk that they might not 'get it right' every time.15 The purpose of the finality of trial trumps the purpose of finding the truth, if the latter was ever possible. The beyond-all-reasonable-doubt standard itself recognizes this inevitability of sporadically convicting innocent people.16

Further, evidence is not necessarily produced during trial. It is instead the outcome of the process of appraising what is produced at trial by the fact finder, who in doing so invokes a large storehouse of 'evidence' that is summed up in her beliefs.17 This makes it almost impossible for the fact finder to avoid heuristics and cognitive bias. Increasing reliance on machines in litigation could consequently, for some scholars, help minimize the 'whim and caprice' of the bench or jury. All in all, no one could deny that the generation of unpredictable, idiosyncratic decisions is the antithesis of the rule of law.18 Therefore, AI could serve as an auxiliary mechanism, assisting the court in the fact-finding process, by reducing judicial arbitrariness, systematizing the proof process and improving trial efficiency. Some proponents of the deployment of AI in the judicial system even invoke phenomena of judicial corruption to advocate in favour of the use of AI in the judicial field.19

7 V Fomin, 'The Shift from Traditional Computing Systems to Artificial Intelligence and the Implications for Bias' in JS Gordon (ed), Smart Technologies and Fundamental Rights (Brill | Rodopi, 2020, to be published).
8 I Goodfellow, Y Bengio and A Courville, Deep Learning, 9th edn (Cambridge, MA, The MIT Press, 2016) 2–3; H Surden, 'Machine Learning and Law' (2014) 89 Washington Law Review 87‒115, 88: 'Machine learning systems are computer algorithms that have the ability to learn or improve in performance over time on some task'.
9 On AI, in general, see the reference book by S Russell and P Norvig, Artificial Intelligence: A Modern Approach, 4th edn (New Jersey, Pearson, 2020).
10 A Holzinger, C Biemann, CS Pattichis and DB Kell, 'What Do We Need to Build Explainable AI Systems for the Medical Domain?', 2: 'Often the best-performing methods are the least transparent', available at: https://arxiv.org/pdf/1712.09923.pdf.
11 Available at: https://ec.europa.eu/futurium/en/ai-alliance-consultation, 36.
12 M Hildebrandt, 'Criminal Law and Technology in a Data-driven Society' in MD Dubber and T Hörnle, The Oxford Handbook of Criminal Law (Oxford, Oxford University Press, 2014) 175‒96, 188.
13 A Stein, Foundations of Evidence (Oxford, Oxford University Press, 2005) 2.
14 A Keane and P McKeown, The Modern Law of Evidence, 12th edn (Oxford, Oxford University Press, 2018) 2‒3. On the search for the truth as the purpose of the criminal trial, see E Billis, Die Rolle des Richters im Adversatorischen und im Inquisitorischen Beweisverfahren (Berlin, Duncker & Humblot, 2015) 93‒120.
15 PW Nutter, 'Machine Learning Evidence: Admissibility and Weight' (2019) 21(3) Journal of Constitutional Law 919‒58, 925.
16 ibid 173.
17 RJ Allen, 'Artificial Intelligence and the Evidentiary Process: The Challenges of Formalism and Computation' (2001) 9 Artificial Intelligence and Law 99‒114, 103.
18 ibid 101.
19 Y Cui, Artificial Intelligence and Judicial Modernization (Singapore, Springer, 2020) 22.
This could be the case, though, if the AI deployed in the criminal justice field could promise a high percentage of objectivity and accuracy. Research so far has shown, however, that AI is susceptible to biases and, as a result, its outcome accuracy cannot be fully trusted.20 Nevertheless, since AI applications are already being used in many criminal jurisdictions around the world, it is imperative to properly examine their weak spots.

4. THE PROBLEMS OF AI AS MEANS OF EVIDENCE

As artificial intelligence does not equal artificial perfection, the weaknesses of AI as an evidence-generating mechanism must be put under a magnifying glass, in order to figure out satisfactory safety requirements conditioning its use in the realm of criminal justice. The most problematic characteristics of AI can be summed up under the following terms: 1. inexplicability, 2. discrimination and bias, and 3. lack of accountability.

4.1. Inexplicability

AI is revolutionary in its applications and capabilities; with respect to its potential uses in criminal justice, though, it is functionally similar to traditional software: data go in and conclusions come out. In between, there is a 'black box' of calculations that not only is occasionally inaccessible to the experts themselves but also few in the courtroom would understand.21 Here lies the danger of AI being improperly afforded a presumption of reliability, objectivity and certainty, due to its mechanical appearance and apparently simple output.22 In order for the bench or jury to make an informed decision on the guilt of the defendant, light must somehow be shed on this black box. Moreover, given that the AI output is often inexplicable, the question of how the defendant will be able to defend herself and contest the evidence produced by it inevitably arises.

4.2. Discrimination and Bias

At the same time, decisions taken by algorithms could result from data that is incomplete and therefore not reliable: data may be tampered with by cyber-attackers, biased or simply mistaken. Applying the technology as it develops, without due consideration, would therefore lead to problematic outcomes as well as reluctance by citizens to accept its use by the courts, since the risk of malfunction always remains a distinct possibility. Thus, one of the toughest challenges for a successful incorporation of AI in criminal justice is the elimination of all kinds of biases that AI is susceptible to. Indeed, such biases may subsequently lead to poor and unjust judicial decision making, when factored in by the bench or jury. In truth, all these processes have hidden subjectivities and errors that often go unrecognized and unchecked, thus potentially 'facilitat[ing] the masking of illegitimate or illegal discrimination behind layers upon layers of mirrors and proxies.'23

4.3. Lack of accountability

Furthermore, when data are first gathered or generated, basic human error in collection or interpretation is common.24 Human errors could occur in the training phase of the data or even later in the further development of the programme. Nevertheless, in order to establish accountability, one needs to identify the person behind the programme who did something wrong. In machine learning systems, where computer scientists are often unable to determine how or why a machine learning system has made a particular decision, this is very difficult to achieve.25 Furthermore, one of the typical supportive arguments from the side of AI experts and AI companies is that AI systems, and especially machine learning ones, evolve in unforeseen ways, due to their autonomous and self-learning nature. As a result, no programmer could be held liable for their evolution.

5. PROCEDURAL JUSTICE IN THE AI-ERA

In view of the characteristics of AI outlined above, legal scholars need to come up with new, effective safeguards in criminal procedure, or reinterpret those already existing. Since the use of AI will be a state privilege, at least in this phase of the digital judicial transformation, the defendants need to be equipped with the procedural rights that will preserve the equality of arms between them and the state, and the fairness of the trial. The defendant must be able, in this new criminal procedural framework, to defend herself against the all-mighty AI and contest the evidence produced by the latter.

Legal scholars must also consider changes in the law of evidence. Rules regarding the admissibility of AI-generated evidence and methods to determine the reliability of its outcomes, exclusionary rules, and rules on risk allocation are some of the problems which lie at the heart of this issue. The principle of judicial discretion must, likewise, find its place in this new environment, since the danger of over-evaluating the importance of AI-generated evidence could lead to total reliance upon science and to 'abdication of responsibility by law'.26 Human-computer interaction research on the biases involved in all algorithmic decision-making systems has shown that it is extremely difficult for a human decision-maker to refute a 'recommendation' made by a high-tech tool.27

20 See the ProPublica research on the COMPAS recidivism algorithm (23 May 2016), available at: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.
21 Nutter (n 15) 922.
22 A Roth, 'Trial by Machine' (2016) 104(5) Georgetown Law Journal 1245‒306, 1269–70.
23 O Tene and J Polonetsky, 'Judged by the Tin Man: Individual Rights in the Age of Big Data' (2013) 11 Journal on Telecommunications and High Technology Law 351‒68, 358.
24 WA Logan and AG Ferguson, 'Policing Criminal Justice Data' (2016) 101(2) Minnesota Law Review 549‒615, 559.
25 J Buyers, Artificial Intelligence—The Practical Legal Issues (Minehead, Somerset, Law Brief Publishing, 2018) 22.
26 P Alldridge, 'Do C&IT Facilitate the Wrong Things?' (2000) 14(2) International Review of Law, Computers & Technology 143‒54, 144.
27 A Završnik, 'Criminal Justice, Artificial Intelligence Systems, and Human Rights' (2020) 20 ERA Forum 567–83, 574.
Furthermore, since we already witness a 'dissolution of the procedural infrastructure within the criminal justice system'28 because of the profiling, risk assessment and predictive analytics techniques, a new conceptualization of the fundamental procedural rights of the defendant is more than necessary. Procedural fairness is the ultimate prerequisite, if we want this new architectural scheme to work and gain social acceptance.

5.1. The presumption of innocence

The presumption of innocence was traditionally connected with a temporal distance between the criminal charge and the conviction or the acquittal.29 The new AI environment challenges 'the linear sense of time'.30 In other words, it challenges the delay inherent in procedural safeguards embodying protection against hasty judgments, as we are confronted with a series of real-time decisions taken by automated decision systems based on machine learning techniques.31 Data-driven surveillance challenges the very foundations of the presumption of innocence by suggesting precognition of criminal intent32 and thus 'creating a de facto presumption of guilt'.33 Hence, some scholars go as far as advocating the construction of a 'presumption of innocence by design'34 and the interjection of 'explanation systems' into AI solutions, since the inculpatory evidence must have some kind of discernible logic or explanation and the ability to be examined or challenged. In the context of law enforcement and intelligence, default settings of the computational technologies should prevent the reversal of the presumption of innocence by the automation of suspicion,35 especially where data mining is used to flag behaviours.

5.2. The right to confrontation

One of the oldest rights that belong to the core of the defendant's procedural arsenal in the Western legal systems is the right to confrontation.36 The accused enjoys the right to be confronted with the witnesses against him, cross-examine them and contest the incriminating evidence. Normally, the defendant would be given full access to the evidence against him, in order for him to exercise this very right. Since the inner workings of these tools are trade secrets of the companies that developed them, one wonders how the defendant would be able to effectively defend herself without access to the very algorithmic assessment tool that brought her to the stand.

5.3. The right to privacy

With the advance of big data, privacy is the right that has suffered the most. Individuals are investigated, judged and sometimes punished 'en masse' and 'at a distance',37 blurring the clear distinctions between citizen, suspect, defendant, convict and acquitted. Even at the level of the European Union, where the General Data Protection Regulation38 and the accompanying Law Enforcement Directive39 establish strict data protection standards in the area of criminal offences and penalties, fully automated decision-making remains possible, albeit rarely. The Member States still have the possibility of providing for a decision based solely on automated processing which produces an adverse legal effect concerning the data subject. The sole prerequisite is authorisation by Union or Member State law, as long as it provides appropriate safeguards for the rights and freedoms of the data subject.

5.4. The principle of equality of arms

The above-mentioned impact of AI on the procedural rights disturbs the fair balance between the parties. Procedural equality of arms is designed to 'treat the accused as a thinking and feeling human being worthy of respect, who is entitled to be given the opportunity to play an active part in procedures with a direct and possibly catastrophic impact on their life, rather than as object of state control to be manipulated for the greater good'.40 Therefore, it is imperative for the defendant to be afforded a reasonable opportunity to present his case and take an active part in the criminal trial, including the evidentiary proceedings.

6. POSSIBLE SOLUTIONS

AI has entered the premises of criminal justice systems with the aspiration of improving procedural justice and economy, as well as the systems' effectiveness and efficiency. In order to fulfil those aspirations, we need greater social acceptance of AI. Trustworthy AI has three components according to the High-Level Expert Group on Artificial Intelligence (AI HLEG): (1) it should be lawful, ensuring compliance with all applicable laws and regulations, (2) it should be ethical, ensuring adherence to ethical principles and values, and (3) it should be robust, both from a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm.

28 A Marks, B Bowling and C Keenan, 'Automatic Justice?: Technology, Crime, and Social Control' in R Brownsword, E Scotford and K Yeung, The Oxford Handbook of Law, Regulation and Technology (Oxford, Oxford University Press, 2018) 705‒30, 714.
29 Hildebrandt (n 12) 181.
30 Hildebrandt (n 12) 182.
31 Hildebrandt (n 12) 182.
32 Hildebrandt (n 12) 194−95.
33 Hildebrandt (n 12) 184.
34 Hildebrandt (n 12) 174, 195.
35 Hildebrandt (n 12) 183.
36 For the U.S., see P Marcus, DK Duncan, T Miller and J Moreno, The Rights of the Accused under the Sixth Amendment, 2nd edn (Chicago, American Bar Association, 2016). For Europe, see S Maffei, The Right to Confrontation in Europe (Groningen, Europa Law Publishing, 2012).
37 Marks, Bowling and Keenan (n 28) 708.
38 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, available at: https://gdpr.eu/tag/gdpr/.
39 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA, Article 11(1): Member States shall provide for a decision based solely on automated processing, including profiling, which produces an adverse legal effect concerning the data subject or significantly affects him or her, to be prohibited unless authorised by Union or Member State law to which the controller is subject and which provides appropriate safeguards for the rights and freedoms of the data subject, at least the right to obtain human intervention on the part of the controller. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:32016L0680.
40 P Roberts and A Zuckerman, Criminal Evidence, 2nd edn (Oxford, Oxford University Press, 2010) 21.
The current state of the art does not provide for systems to self-report their decisions, but there is a widely held view in the relevant scientific community that regulators will force developers to interject 'explanation systems' into their AI solutions when they are deployed in environments where their outputs (or decisions) are likely to have a significant regulatory or human impact.41
According to the Ethics Guidelines for Trustworthy Artificial Intelligence prepared by the same expert group (AI HLEG), the key requirements that any AI system must fulfil in order to be accepted are: a. human agency and oversight, b. technical robustness and safety, c. privacy and data governance, d. transparency, e. diversity, non-discrimination and fairness, f. societal and environmental well-being and g. accountability.42 Overall, trust in AI control mechanisms poses many challenging regulatory questions, given that trust must not be an elusive and muddled idea, but a reflection and a result of crystal-clear regulation. In order for these requirements to gain true meaning instead of remaining empty vessels, legal scholars must ally with AI experts, in order to come up with solutions that comply with the actual practice of the law as fairly and as efficiently as possible.
7. CONCLUSION

This paper served the purpose of highlighting the interplay between criminal justice systems and AI technology in connection with AI being employed as an evidence tool, the sticking points of this risky venture, and some brief deliberations about possible solutions. The inducement that AI offers to criminal justice systems is great. The challenge will be, as Hildebrandt puts it, for the law to engage with AI 'without either sacrificing or petrifying its identity'.43 It is imperative, therefore, for legal scholars to cross disciplinary boundaries and work together with AI experts, in order for them to demystify and understand in depth the workings of AI. Only then can they produce relevant and applicable laws that will effectively incorporate AI in our legal reality and justice systems and regulate its possible dangers.
41 Buyers (n 25) 22‒3.
42 Available at: https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines#Top. Especially on the matter of creating credibility-testing mechanisms, see A Roth, 'Machine Testimony' (2017) 126 The Yale Law Journal 1972‒2051, 2022‒38.
43 Hildebrandt (n 12) 187.