Machine Learning and Legal Argument
Jack Mumford1 , Katie Atkinson1 and Trevor Bench-Capon1
1 Department of Computer Science, University of Liverpool, L69 3BX, UK


Abstract
Although the argumentation justifying decisions in particular cases has always been central to AI and
Law, it has recently become a burning issue as black box machine learning approaches become prevalent.
In this paper we review the understanding of legal argument that has been developed in AI and Law,
and indicate the most appropriate ways in which Machine Learning approaches can contribute to legal
argument. We identify some key questions that must be explored to provide acceptable explanations for
legal ML systems. This provides the context and directions for our current research project.

Keywords
Machine Learning, Legal reasoning, Justification




1. Introduction
The use of Machine Learning (ML) techniques to produce algorithms to classify new instances on
the basis of a large set of past instances has become prevalent: so much so that this approach is
now almost synonymous with “Artificial Intelligence” in the popular press. Where these systems
explain their reasoning it is typically in terms of the algorithm: they may identify the words
or features that contributed most to the classification, or display some sort of visualisation in
the form of a “heatmap” [23]. Explanations of legal decisions are, however, somewhat different
from those required in many other ML applications. The outcome of a case is not a property
waiting to be discovered, but the result of a decision made by the appropriate empowered
authority, such as a judge. Now, it may be that the explanation of how the decision was made
can be something non-legal: for example it has been found that judges can be more lenient
before lunch and towards the end of the day [19]. Such an explanation is not, however, what is
required for a legal decision. What is required is a justification of how the decision represents
the application of the law. The explanation of legal decisions must take the form of an argument
able to persuade its audience of the correctness of the decision, in terms of the applicable law.
To achieve this, the argument must be couched in natural terms, so that the decision can be
seen to follow from the law, rather than in the quasi-statistical terms that would explain how
an ML algorithm had arrived at its prediction.
   There has in recent years been an explosion of interest in the application of ML techniques
to law. Several tasks have been addressed including case retrieval [33], summarisation [14],

CMNA’21: Workshop on Computational Models of Natural Argument, September 2-3, 2021, Online
Jack.Mumford@liverpool.ac.uk (J. Mumford); K.M.Atkinson@liverpool.ac.uk (K. Atkinson); tbc@liverpool.ac.uk
(T. Bench-Capon)
https://jamumford.github.io (J. Mumford); https://www.csc.liv.ac.uk/~katie/ (K. Atkinson);
https://www.csc.liv.ac.uk/~tbc/ (T. Bench-Capon)
                                       © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
and legal argumentation mining [39]. In this paper, however, we will focus on the important
class of applications intended to support the task of deciding legal cases. This task has received
attention from many researchers: the European Convention on Human Rights (ECHR) alone
has been the subject domain of a number of studies including [4], [26], [18], and [24].
   A prediction of the outcome of a case on its own, however, offers little help to a person
charged with deciding the case. This is discussed in [12] where it is cogently argued that,
without an explanation of why the case was so classified, the adjudicator has no reason to follow
the advice. The performance of prediction systems is by no means perfect (typically less than
80%), so there can be no assurance that the outcome will be correct, and the law requires a very
high degree of certainty. Judges will therefore still need to form their own independent opinion
and without the reasons for the machine opinion they would have no reason to give any weight
to that of the machine. Moreover, there are reasons to believe that the machine will not be able
to learn the applicable law. For one thing, as argued in [8], the data used to train the system is
likely to contain decisions reflecting bias and misunderstanding, and changes in law and societal
values mean that decisions become increasingly unreliable as they age [26]. Moreover, even if
the dataset is perfect, empirical work has shown that it may fail to find the correct rationale for
its decisions (see [6] and [35]). This means that an explanation of the machine’s suggestion is
required if incorrect rationales are not to be applied.
   We will consider how ML can support legal decision making, given that what is required is an
outcome accompanied by an argument which justifies that outcome in terms of the applicable
law. We will first review work on the generation of such arguments in AI and Law, then consider
what part ML approaches might play and how we are addressing this topic in our current project.


2. Modelling Legal Argument in AI and Law
Arguments justifying legal decisions have been a central concern of AI and Law since 1976
when McCarty’s TAXMAN [25] attempted to model both the majority and minority arguments
in the famous tax law case of Eisner v Macomber. Most influential has been the stream of work
on modelling US Trade Secrets Law originating in HYPO [31], developed further in CATO [5]
and subsequently explored by many others [7]. This work has shown that legal argumentation
in cases can be seen as passing through a series of layers, as articulated in [2].
   The top layers supply a logical framework: this may derive from statute law [34], or emerge
from case law [30]. Thus for Trade Secrets Law, in order to find for the plaintiff, the information
must be both a Trade Secret and have been misappropriated. To be a Trade Secret the information
must be valuable and have had adequate measures taken to protect its secrecy. To have been
misappropriated there must have been a breach of confidence or the use of improper means to
obtain the information. These elements, known as issues, form an and/or tree with the children
providing necessary and sufficient conditions for their parent. At this level the explanation can
be in terms of these logical rules: e.g. find for the defendant because the information was not a
Trade Secret and it was not a Trade Secret because the measures taken to protect its secrecy were
inadequate.
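   To make this layer concrete, the following minimal Python sketch (our illustration, not
code from any of the systems cited; the predicate and argument names are our own paraphrases)
renders the Trade Secrets issue tree just described as propositional rules:

    def find_for_plaintiff(valuable, measures_adequate,
                           breach_of_confidence, improper_means):
        # A Trade Secret requires valuable information AND adequate
        # measures taken to protect its secrecy.
        trade_secret = valuable and measures_adequate
        # Misappropriation requires a breach of confidence OR the use
        # of improper means to obtain the information.
        misappropriated = breach_of_confidence or improper_means
        # Both top-level issues are necessary to find for the plaintiff.
        return trade_secret and misappropriated

    # The example explanation above: the protective measures were
    # inadequate, so the information was not a Trade Secret, and we
    # find for the defendant whatever the other issues say.
    assert not find_for_plaintiff(valuable=True, measures_adequate=False,
                                  breach_of_confidence=True, improper_means=False)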
   The next layer comprises factors, a notion made popular by CATO [5]. Factors are stereotypical
patterns of fact that provide a (non-conclusive) reason to decide for one side or the other. In
Trade Secrets Law, these include: whether the information was disclosed in negotiations; whether
the information was known to be confidential; and the ease with which the information was
reverse engineerable by inspecting the product. Like issues, factors can form a tree, with abstract
factors explained in terms of base level factors. Unlike issues, factors do not provide necessary
and sufficient conditions for their parents: they provide reasons for and against the presence of
the parent, which must be weighed against one another and a preference expressed. Precedent
cases provide a source of such preferences. Where the question has been considered previously,
the decision in the previous case constrains the decision: where the question has not previously
arisen the court must make a choice which will constrain future decisions. Thus, at this layer
we get a rather different style of argument, taking the form of a statement of the factors (the
reasons for both sides), the status of the issue under consideration, and a precedent justifying
the preference that gave that status. For example where the information had been disclosed
in negotiations but the defendant was aware that the information was confidential, a duty of
confidence existed (cite National Rejectors v Trieman 1966).
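   The reasoning at this layer can be sketched as follows (a simplified illustration of ours,
not CATO’s implementation): a precedent records a preference between competing sets of
reasons, and that preference resolves the issue in a new case whose pro reasons are at least
as strong and whose con reasons are no stronger.

    # Factors bearing on the issue of a duty of confidence, following
    # the example in the text.
    pro = {"KnewInfoConfidential"}        # favours the plaintiff
    con = {"DisclosedInNegotiations"}     # favours the defendant

    # National Rejectors v Trieman (1966) resolved this conflict for
    # the plaintiff, expressing the preference winners > losers.
    precedents = [({"KnewInfoConfidential"}, {"DisclosedInNegotiations"})]

    def issue_holds(pro, con, precedents):
        # The issue is resolved for the plaintiff if some precedent
        # preferred reasons we still have over reasons no weaker than ours.
        return any(winners <= pro and con <= losers
                   for winners, losers in precedents)

    print(issue_holds(pro, con, precedents))   # True: the precedent applies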
   Below factors are the facts, and it is on the basis of these that the factors are ascribed. Often
this will require argument: for example if the defendant has claimed that the information was
not valuable because it could be reverse engineered, the court will need to look closely at the
“ease or difficulty with which the information could be properly acquired or duplicated by
others” to decide whether the facts do indeed suggest that this was a reason for the defendant
and that the factor reverse engineerable can be ascribed. An example of an argument at this
level can be found in Technicon Data Systems Corp. v. Curtis 1000, Inc: “The Court reasoned that
the process had required over two-thousand hours, and still had not yielded a fully functional
product. The Court held that this amount of time indicated that a trade secret was not readily
ascertainable.”
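   As a rough illustration of how such a finding might be operationalised (our simplification:
the 2,000-hour figure comes from the case, but treating it as a reusable threshold is our
assumption):

    def reverse_engineerable(hours_required, yielded_working_product):
        # Ascribe the pro-defendant factor only if duplication by proper
        # means was genuinely easy: modest effort and a working result.
        return hours_required < 2000 and yielded_working_product

    # Technicon: over 2,000 hours without a fully functional product,
    # so the factor is not ascribed.
    print(reverse_engineerable(hours_required=2100,
                               yielded_working_product=False))   # False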
   At the very lowest layer is the evidence on which the facts are based, which will include
witness testimony, forensic evidence and the like. The reasoning here is not specifically legal,
but is similar to that used to establish the truth of matters in everyday life. Indeed, the facts are
often decided not by lawyers, but by a lay jury. In higher courts, where a decision is appealed,
the facts are usually taken as those established by the lower court. Although arguments based
on evidence have received attention in AI and Law in, for example, [13] and [37], we will not
consider them further in this paper, concentrating instead on the distinctively legal arguments.

2.1. Layers of Reasoning in AI and Law
Although the complete justification of a legal decision would involve starting from the evidence
and working through the facts, factors, and issues to reach the final verdict, few approaches
have examined the entire process. The focus in evidential approaches such as [11] was on resolving
conflicting stories as to the facts, and other systems have covered different parts of the range.
   Approaches based on the formalisation of legislation [32] are concerned only with the upper-
most levels: they ask users to resolve the issues and the system provides the outcome based
on their answers. CATO [5] took the factors as input and produced arguments for both sides,
leaving it to the user to resolve these arguments to reach a decision. IBP [17] extended the
CATO approach with a logical model of the issues so that it could predict outcomes. HYPO
represented cases as facts and identified the applicable dimensions to enable factors to be
ascribed. It did not use issues and the user was left to decide which side was favoured on these
dimensions, and how these resolved the overall case.

Table 1
Layers of Statements in a Legal Decision and Some Example Systems
   Statement Type   BNA [32]   HYPO [31]   CATO [5]   IBP [17]   Bex et al. [11]   NIHL [3]
   Outcome          X                                 X                            X
   Issues           X                      X          X                            X
   Factors                     X           X          X                            X
   Facts                       X                                 X                 X
   Evidence                                                      X
   The ANGELIC methodology [1] addresses all the layers above evidence. Using ANGELIC,
knowledge is represented as an Abstract Dialectical Framework (ADF) [16]. The ADF has
the form of a tree, beginning with verdict and then working through the different statement
types. In an ADF each node is associated with acceptance conditions local to the node, which
determine the status of the node in terms of its children. Because these acceptance conditions
are local to a node they can reflect the different reasoning styles used for the different statement
types: the upper layers can use propositional formulae, while the issues can be resolved using
prioritised combinations of factors to reflect reasoning with the weighing of reasons for and
against, often termed ‘balance of factors’ reasoning [36]. At the very lowest level the use of
thresholds can convert dimensional facts and probabilities into factors. A full application of the
methodology to a real world application is given in [3]. An example ADF for Trade Secrets law
is given in [9]. The coverage of various systems is summarised in Table 1.
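   A minimal Python sketch of such an ADF (our own illustration, not the ANGELIC tooling;
the node names and the threshold are assumptions for the example) shows how each node
carries a local acceptance condition evaluated over its children:

    base_facts = {"Valuable": True, "security_score": 0.3,
                  "BreachOfConfidence": True, "ImproperMeans": False}

    acceptance = {
        # Upper layers: propositional formulae over child nodes.
        "FindForPlaintiff": lambda s: s("TradeSecret") and s("Misappropriated"),
        "TradeSecret":      lambda s: s("Valuable") and s("MeasuresTaken"),
        "Misappropriated":  lambda s: s("BreachOfConfidence") or s("ImproperMeans"),
        # Lowest level: a threshold converts a dimensional fact into a factor.
        "MeasuresTaken":    lambda s: s("security_score") >= 0.5,
    }

    def status(node):
        # Base facts are looked up; other nodes apply their local acceptance
        # condition, recursively evaluating their children.
        return base_facts[node] if node in base_facts else acceptance[node](status)

    print(status("FindForPlaintiff"))   # False: the measures were inadequate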


3. Explaining Predictions from ML Approaches
Little attention is paid to the explanation or justification of the prediction in current work using
ML, e.g. [4], [26], [18], and [24]. One of them [4], however, did offer a list of twenty words,
listed in order of their Support Vector Machine weight. The list for violation of Article 6 in the
ECHR domain was:

      court, applicant, article, judgment, case, law, proceeding, application, government,
      convention, time, article convention, January, human, lodged, domestic, February,
      September, relevant, represented

   Such a list inspires no confidence in the sound legal basis of the prediction: indeed finding
month names among the most predictive words suggests rather that the algorithm is relying on
features of the data which have no legal significance and so should be irrelevant. Certainly there
is nothing here that would form the basis of a persuasive argument. A subsequent work [18]
did not attempt to provide any justification and commented on those produced in [4] saying
that they “are far from being justifications that legal practitioners could trust”.
   The above systems take as input a natural language description of the facts of the case and
output a prediction of the outcome. But, as we discussed in the previous section, legal reasoning
must pass through several stages between facts and outcome, and arguments justifying the
outcome are naturally expressed in terms of issues and factors, not facts. The justification needs
to bridge this conceptual gulf between outcome and facts by the use of pertinent legal
concepts: issues and factors. This suggests that if we are to be able to justify the outcome, we
need to learn the factors present in the case.
   This is the basis of the approach taken by Branting et al. [15]. Their approach is applied
to World Intellectual Property Organization (WIPO) domain name dispute cases and exploits
structural and semantic regularities in case corpora to identify textual patterns that have both
predictable relationships to case decisions and explanatory value: these regularities essentially
correspond to factors. The approach used is a semi-supervised one that makes use of a manually
annotated set of representative cases. The manually annotated set is a very small proportion of
the available corpus (25 of the 16,024 available).
   Branting et al. in [15] do not propose any particular method of explanation using the factors.
One possible model, used in [28], is provided by CATO. In CATO the justification takes the form
of a three-ply argument. In the first ply a proponent cites the most-on-point precedent (i.e. the
precedent with the greatest overlap of factors, irrespective of which side they favour, decided
for the side being argued for). In the second ply the opponent either cites a counterexample (a
case which favours the other side and is at least as on point as the case cited by the proponent)
or distinguishes the precedent by pointing to a pro factor in the precedent but not the new
case, or a con factor in the current case but not the precedent. In the third ply the proponent
offers a rebuttal by distinguishing the counterexamples, or downplaying the distinguishing
factor by pointing to a factor which can cancel the additional factor or a factor which can be
substituted for the absent factor. These moves were represented as a set of argument schemes
in [29], and [28] builds a 3-layer tree based on these schemes. In CATO, the user must decide
whether the rebuttals succeed or not but, if we have a predicted outcome, we can explain it by
making that side the proponent and knowing that the rebuttal of the opponent’s objection will
have succeeded in order to establish that particular decision.
   This technique is illustrated in [28] with an example of three cases based on US Trade Secrets
Law. The cases, which we have given mnemonic names, are shown in Table 2:

Table 2
Cases in the Example From [28]
    Case           Outcome       Plaintiff Factors            Defendant Factors
    Deceived       P             SecurityMeasures Deception   Disclosures AvailableElsewhere
    NoMeasures     D             Bribery                      Disclosures AvailableElsewhere
    Bribed         TBA           Bribery SecurityMeasures     Disclosures ReverseEngineerable
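   The first two plies can be sketched over these cases as follows (our simplified illustration;
the helper names are ours, and on-pointness in HYPO and CATO involves more structure than
raw factor overlap):

    cases = {
        "Deceived":   {"p": {"SecurityMeasures", "Deception"},
                       "d": {"Disclosures", "AvailableElsewhere"}, "won": "p"},
        "NoMeasures": {"p": {"Bribery"},
                       "d": {"Disclosures", "AvailableElsewhere"}, "won": "d"},
    }
    bribed = {"p": {"Bribery", "SecurityMeasures"},
              "d": {"Disclosures", "ReverseEngineerable"}}

    def overlap(case, new_case):
        # On-pointness: shared factors, irrespective of side.
        return (case["p"] | case["d"]) & (new_case["p"] | new_case["d"])

    # Ply 1: the proponent (here the plaintiff) cites the most on point
    # precedent decided for their side.
    name, prec = max(((n, c) for n, c in cases.items() if c["won"] == "p"),
                     key=lambda nc: len(overlap(nc[1], bribed)))
    print("cite:", name)                                   # Deceived
    # Ply 2: the opponent distinguishes the cited precedent.
    print("missing pro factor:", prec["p"] - bribed["p"])  # {'Deception'}
    print("extra con factor:", bribed["d"] - prec["d"])    # {'ReverseEngineerable'}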

   If the new case, Bribed, is decided for the plaintiff we can form a tree of arguments as shown
in Figure 1.
Figure 1: Example Dialogue Tree From [28]

   Because the decision was for the plaintiff, we know that the distinctions were successfully
substituted or cancelled, but in fact not all of the arguments should succeed. Examining
the nodes with the benefit of domain knowledge, we can see that Bribery can substitute for
Deception, because they are different forms of improper behaviour. Deception cannot, however,
cancel AvailableElsewhere because they relate to two quite different issues. Similarly, while
AvailableElsewhere can substitute for ReverseEngineerable, Bribery cannot be substituted for it
because it relates to a different issue. The problem is that a ‘balance of factors’ argument is
being used to explain the case as a whole, whereas this form of argument is only appropriate to
the layer in which issues are resolved. The need to use factors to justify the resolution of issues,
and then issues to justify the overall decision is argued in [10].
   Thus in order to provide a good justification, it is necessary to have a knowledge of the domain
structure in terms of issues and factors, so that the appropriate style of argumentation can be
used at each level. This structure is provided in [15] because the initial annotations identify
argument elements including issues and factors. The required analysis is also encapsulated in
the ADF produced by the ANGELIC methodology [1].
   There remains the task of justifying the attribution of the factors on the basis of the facts.
This stage of the argument has received little attention in the AI and Law literature, which
has typically taken the factors present in a case as given. The experiments in [15] suggest that
highlighting the predictive elements in the natural language description of the facts
did not provide a useful justification of the overall decision. It may, however, be that identifying
the elements in the fact description used to ascribe the factors does provide a helpful justification
of this step. This is an idea worth exploring.


4. Roles for Machine Learning
Justification of a legal decision needs two components. First, an understanding of the legal
domain, as established in statute and case law, is needed to structure the justification and
to enable a smooth passage from facts to factors to issues to the overall outcome. Second,
knowledge of the individual case is needed so that it can be related to this structure. Structural
knowledge comprises:

    • The issues that must be resolved if a decision is to be made for a particular party, and the
      logical relations between them;
    • The factors that are used to resolve the issues, and the preferences between them estab-
      lished by precedent cases;
    • The facts which need to be considered to ascribe the factors. Again this may require
      the use of precedents to establish thresholds, such as how readily the information
      must be ascertainable for the ascription of, say, the ReverseEngineerable factor.
   All these elements are identified by manual analysis in traditional approaches to AI and
Law as represented by [5] and [1] and are also needed in the semi-supervised approach of [15].
But the question arises as to whether these elements can be identified by ML. There is some
prospect that they can. The inductive logic programming approach of [27] was able to derive
an effective set of rules from a set of facts. These rules were able to distinguish the relevant
facts from the irrelevant facts and the antecedents grouped together facts which were related to
a given issue. In the domain used by [27], the facts provided necessary and sufficient conditions
for the outcome, and so the antecedents of these rules resembled issues. This is because the
example domain contained no ‘balance of factors’ style reasoning. One might speculate that in
a less precisely defined domain, the antecedents would identify factors rather than issues. In
such a domain the use of association rule mining as in [38] might be more effective, given that
the factor-based rules would be defeasible and provide varying degrees of support.
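   As a toy sketch of the latter idea (ours, far simpler than the multi-agent system of [38];
the cases are invented), single-factor defeasible rules can be mined with a support and a
confidence for each side:

    from collections import Counter

    # Toy factor-annotated cases: (factors present, outcome).
    cases = [({"Bribery", "SecurityMeasures"}, "p"),
             ({"Disclosures", "AvailableElsewhere"}, "d"),
             ({"Bribery", "Disclosures"}, "p"),
             ({"SecurityMeasures", "AvailableElsewhere"}, "d")]

    support, plaintiff_wins = Counter(), Counter()
    for factors, outcome in cases:
        for f in factors:
            support[f] += 1
            if outcome == "p":
                plaintiff_wins[f] += 1

    # Each rule "factor => side" is defeasible, with varying support.
    for f in sorted(support):
        conf = plaintiff_wins[f] / support[f]
        side, strength = ("p", conf) if conf >= 0.5 else ("d", 1 - conf)
        print(f"{f} => {side}  (support={support[f]}, confidence={strength:.2f})")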
   Thus with regard to the structural knowledge of the domain, the question is whether machine
learning can identify the elements required to build a justification: issues and factors. With
regard to individual cases, prediction of an outcome is not enough: a justification, couched
in legal terms, is required, whether to support a judge making the decision, or to present
the reasoning to the public in an acceptable way, given the right to explanation in law [22].
Therefore as well as predicting the outcome, the machine learning system should assign factors
to particular cases, as recommended by [15]. Given the factors, a justification of the outcome
can be produced, using techniques such as those suggested in [28], perhaps modified to take
account of issues as suggested in [10].


5. Concluding Remarks
We have reviewed how legal decisions have been explained in AI and Law, and the part ML
might play. We are currently engaged in a project, motivated by the above considerations,
exploring how ML techniques can be applied to support making and justifying legal decisions.
We will attempt to answer a number of research questions. We will address the domain of the
European Convention on Human Rights, since there are a number of existing ML approaches for
comparison and inspiration, and also the use of more traditional techniques produced an ADF
for Article 6 [20] and a very detailed ADF for deciding questions of admissibility of applications
[21]. We will begin by attempting to identify factors to construct an explanation using the
pre-existing ADF of [20]. Next we will attempt to extend the explanation beyond what is found
in CATO inspired approaches, such as [28], to offer an explanation of why the particular factors
were ascribed to the case in terms of the case facts. If we are successful in these two objectives,
we will then explore the possibility of learning the domain structure.
   These explorations will in turn raise a number of questions, both with regard to ascription in
individual cases and to understanding the domain structure, including:
    • To what extent is the ML process scalable in terms of cost in time and space resources?
    • How close in terms of fidelity are elements identified by ML to those produced by tradi-
      tional analysis methods?
    • The domain will evolve over time: how can changes in social preferences and the identifi-
      cation of new factors to consider be accommodated?

We will also wish to perform evaluation with users to explore questions of how the explanations
are received by different audiences, such as:

    • To what extent do relevant audiences trust different explanation techniques?
    • How well do relevant users perform their different tasks when interpreting and applying
      the explanations to new instances?

   The development of an effective ML system will be underpinned by three core aspects:
training on a small annotated data set; leveraging domain knowledge as prior constraints for the
learning; and reinforcement learning to allow for more focused application of expert annotation.
We will examine the interplay between these three aspects, with the assistance of legal expertise,
with the intention of crafting an ML system that can ascribe factors with suitable fidelity and
justification. If successful, we then expect to apply the same three core aspects to the larger
problem of learning the domain structure, in order to produce high fidelity and justifiable case
outcomes directly from the facts of the case.


References
 [1] Latifa Al-Abdulkarim, Katie Atkinson, and Trevor Bench-Capon. A methodology for
     designing systems to reason with legal cases using ADFs. Artificial Intelligence and Law,
     24(1):1–49, 2016.
 [2] Latifa Al-Abdulkarim, Katie Atkinson, and Trevor Bench-Capon. Statement types in legal
     argument. In Proceedings of JURIX 2016, pages 3–12. IOS Press, 2016.
 [3] Latifa Al-Abdulkarim, Katie Atkinson, Trevor Bench-Capon, Stuart Whittle, Rob Williams,
     and Catriona Wolfenden. Noise induced hearing loss: Building an application using the
     ANGELIC methodology. Argument & Computation, 10(1):5–22, 2019.
 [4] Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preoţiuc-Pietro, and Vasileios Lampos.
     Predicting judicial decisions of the European Court of Human Rights: A natural language
     processing perspective. PeerJ Computer Science, 2:e93, 2016.
 [5] Vincent Aleven. Teaching case-based argumentation through a model and examples. PhD
     thesis, University of Pittsburgh, 1997.
 [6] Trevor Bench-Capon. Neural networks and open texture. In Proceedings of the 4th ICAIL,
     pages 292–297, 1993.
 [7] Trevor Bench-Capon. HYPO’s legacy: introduction to the virtual special issue. AI and
     Law, 25(2):205–250, 2017.
 [8] Trevor Bench-Capon. The need for good old fashioned AI and Law. In Walter Hötzendorfer,
     Christof Tschohl, and Franz Kummer, editors, International trends in legal informatics: a
     Festschrift for Erich Schweighofer, pages 23–36. Weblaw, Bern, 2020.
 [9] Trevor Bench-Capon. Using issues to explain legal decisions. CoRR abs/2106.14688, 2021.
[10] Trevor Bench-Capon and Katie Atkinson. Precedential constraint: The role of issues. In
     Proceedings of the 18th ICAIL, pages 12–21. ACM Press, 2021.
[11] Floris J Bex. Arguments, stories and criminal evidence: A formal hybrid theory, volume 92.
     Springer Science & Business Media, 2011.
[12] Floris J Bex and Henry Prakken. On the relevance of algorithmic decision predictors for
     judicial decision making. In Proceedings of the 18th ICAIL, pages 175–179. ACM Press, 2021.
[13] Floris J Bex, Peter J Van Koppen, Henry Prakken, and Bart Verheij. A hybrid formal theory
     of arguments, stories and criminal evidence. Artificial Intelligence and Law, 18(2):123–152,
     2010.
[14] Paheli Bhattacharya, Kaustubh Hiware, Subham Rajgaria, Nilay Pochhi, Kripabandhu
     Ghosh, and Saptarshi Ghosh. A comparative study of summarization algorithms applied
     to legal case judgments. In European Conference on Information Retrieval, pages 413–428.
     Springer, 2019.
[15] L Karl Branting, Craig Pfeifer, Bradford Brown, Lisa Ferro, John Aberdeen, Brandy Weiss,
     Mark Pfaff, and Bill Liao. Scalable and explainable legal prediction. Artificial Intelligence
     and Law, 29(2):213–238, 2021.
[16] Gerhard Brewka, Stefan Ellmauthaler, Hannes Strass, Johannes Peter Wallner, and Stefan
     Woltran. Abstract dialectical frameworks revisited. In Proceedings of the Twenty-Third
     IJCAI, pages 803–809. AAAI Press, 2013.
[17] Stefanie Brüninghaus and Kevin D Ashley. Predicting outcomes of case based legal
     arguments. In Proceedings of the 9th ICAIL, pages 233–242. ACM, 2003.
[18] Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. Neural legal judgment predic-
     tion in English. arXiv preprint arXiv:1906.02059, 2019.
[19] Daniel L Chen. Judicial analytics and the great transformation of American law. AI and
     Law, 27(1):15–42, 2019.
[20] Joe Collenette, Katie Atkinson, and Trevor Bench-Capon. An explainable approach to
     deducing outcomes in European Court of Human Rights cases using ADFs. In Proceedings
     of COMMA 2020, pages 21–32. IOS Press, 2020.
[21] Joe Collenette, Katie Atkinson, and Trevor Bench-Capon. Practical tools from formal
     models: The ECHR as a case study. In Proceedings of the 18th ICAIL, pages 170–174. ACM
     Press, 2021.
[22] Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam Gershman, David O’Brien,
     Stuart Schieber, James Waldo, David Weinberger, and Alexandra Wood. Accountability of
     AI under the law: The role of explanation. arXiv preprint arXiv:1711.01134, 2017.
[23] Łukasz Górski and Shashishekar Ramakrishna. Explainable artificial intelligence, lawyer’s
     perspective. In Proceedings of the 18th ICAIL, pages 60–68. ACM Press, 2021.
[24] Arshdeep Kaur and Bojan Bozic. Convolutional neural network-based automatic prediction
     of judgments of the European Court of Human Rights. In 27th AIAI Irish Conference on AI
     and Cognitive Science, pages 458–469. CEUR 2563, 2019.
[25] L Thorne McCarty. Reflections on taxman: An experiment in artificial intelligence and
     legal reasoning. Harvard Law Review, 90:837, 1976.
[26] Masha Medvedeva, Michel Vols, and Martijn Wieling. Using machine learning to predict
     decisions of the European Court of Human Rights. AI and Law, pages 1–30, 2019.
[27] Martin Možina, Jure Žabkar, Trevor Bench-Capon, and Ivan Bratko. Argument based
     machine learning applied to law. Artificial Intelligence and Law, 13(1):53–73, 2005.
[28] Henry Prakken and Rosa Ratsma. A top-level model of case-based argumentation for
     explanation: formalisation and experiments. Argument & Computation, 2021. Available
     online.
[29] Henry Prakken, Adam Wyner, Trevor Bench-Capon, and Katie Atkinson. A formalization
     of argumentation schemes for legal case-based reasoning in ASPIC+. Journal of Logic and
     Computation, 25(5):1141–1166, 2015.
[30] Adam Rigoni. An improved factor based approach to precedential constraint. AI and Law,
     23(2):133–160, 2015.
[31] Edwina L Rissland and Kevin D Ashley. A case-based system for Trade Secrets law. In
     Proceedings of the 1st ICAIL, pages 60–66. ACM, 1987.
[32] Marek J Sergot, Fariba Sadri, Robert A Kowalski, Frank Kriwaczek, Peter Hammond, and
     H Cory. The British Nationality Act as a logic program. Communications of the ACM,
     29(5):370–386, 1986.
[33] Yunqiu Shao, Jiaxin Mao, Yiqun Liu, Weizhi Ma, Ken Satoh, Min Zhang, and Shaoping Ma.
     BERT-PLI: Modeling paragraph-level interactions for legal case retrieval. In Proceedings of
     IJCAI-20, pages 3501–3507, 2020.
[34] David B Skalak and Edwina L Rissland. Arguments and cases: An inevitable intertwining.
     AI and Law, 1(1):3–44, 1992.
[35] Cornelis Cor Steging, Silja Renooij, and Bart Verheij. Discovering the rationale of decisions:
     Towards a method for aligning learning and reasoning. In Proceedings of the 18th ICAIL,
     pages 235–239. ACM Press, 2021.
[36] Nina Varsava. How to realize the value of stare decisis: Options for following precedent.
     Yale Journal of Law and the Humanities, 30:62–120, 2018.
[37] Charlotte S Vlek, Henry Prakken, Silja Renooij, Bart Verheij, and R Hoekstra. Extracting
     scenarios from a Bayesian network as explanations for legal evidence. In Proceedings of
     JURIX 2014, pages 103–112, 2014.
[38] Maya Wardeh, Frans Coenen, and Trevor Bench-Capon. Multi-agent based classification
     using argumentation from experience. Autonomous Agents and Multi-Agent Systems,
     25(3):447–474, 2012.
[39] Huihui Xu, Jaromír Šavelka, and Kevin D Ashley. Using argument mining for legal text
     summarization. In Proceedings of JURIX 2020, pages 184–193. IOS Press, 2020.