Logic and Theory Repair in Legal Modification

Yiwei Lu2, Yuhui Lin1,3, Xue Li1, Alan Bundy1, Burkhard Schafer2 and Andrew Ireland3

1 School of Informatics, University of Edinburgh
2 School of Law, University of Edinburgh
3 School of Mathematical & Computer Sciences, Heriot-Watt University

Abstract

The complexities involved in adapting current traffic regulations to accommodate autonomous vehicles (AVs) arise from the intricate legal rules and the changes in the traditional roles of drivers and legal responsibility. We argue that revising the law during the legal reform process can be seen as an effort to rectify legal reasoning that is no longer suitable for the new circumstances. In this paper, we propose to apply an automatic theory repair system, called ABC [1, 2, 3], to assist legal professionals in making changes to legal rules more efficiently.

Keywords: legal ontology, abduction, automated theory repair, belief revision, conceptual change, reformation

1. Legal Reform and AI Technologies

The rapid innovation of emerging technology products, especially artificial intelligence (AI) products, has put great pressure on the reform of laws. This pressure is reflected not only in the reconstruction of abstract legal theories, but also in the lack of effective and clear standards for the practical reasoning of AI products under existing laws. The first thing an autonomous vehicle (AV) or its designers would have to figure out is the legal responsibility they must assume in different scenarios, so that tailored solutions can be incorporated into their design. However, this process is hampered by the limited compatibility of existing legal systems with AI products. For example, in the past, a car manufacturer could not be held liable for a car accident if the safety features of the car itself, such as the brakes, met production standards. Nowadays, it is ambiguous who is legally responsible for the consequences of the actions of an AV.
From a legal perspective, the problem of adapting AI systems to human-oriented legal systems lies in the fact that AI blurs the distinction between actors and tools, breaking some basic assumptions of the law, such as the capacity of actors. This leads to a wider range of legal inconsistencies brought about by innovations in AI, which need to be detected and corrected from a combination of perspectives, including legal intent, specific norms, the characteristics of AI, and the ways humans are involved.

Cognitive AI 2023, 13th-15th November, 2023, Bari, Italy.
y.lu-104@sms.ed.ac.uk (Y. Lu); y.lin4@ed.ac.uk (Y. Lin); xue.shirley.li@ed.ac.uk (X. Li); bundy@ed.ac.uk (A. Bundy); b.schafer@ed.ac.uk (B. Schafer); a.ireland@hw.ac.uk (A. Ireland)
© 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073).

Figure 1: Legal changing process in ABC

Reforming the legal system entirely manually in response to the above features would involve a great deal of human labour and would naturally be prone to error. Given the rapid evolution of AI technologies, such manual, dynamic modification is also inefficient. Therefore, we need an automated system to help reform the law in response to artificially intelligent products. This is a cognitive task, and the required system needs to 1) understand the existing law; 2) represent the law in a way that can be used in its internal automated processes; and 3) make recommendations for law reform that can be used by human users. Arguably, one can consider the adjustment of law in the legal reform process as a repair of a legal ontology that is no longer adapted to the new situation.
Here, we propose to utilise the advantages of automatic reasoning systems in terms of computational power and reasoning accuracy to assist legal professionals in making changes to legal rules more efficiently. In particular, we would like to apply a theory repair system called ABC [1, 2, 3]. In the proposed process, ABC takes a logical encoding of the targeted regulations under revision, e.g. legal ontologies, as well as additional logical encodings of scenarios and expected observations from legal experts, to detect and suggest changes to the conflicts of legal responsibility caused by AVs, as shown in Figure 1. The ABC system is chosen because, to the best of our knowledge, it is the only repair system that addresses conceptual changes, including splitting or merging a predicate, e.g., splitting the concept 'driver' into 'human driver' and 'AI driver'. The running example presented in this paper is available at the paper resource webpage [4].

2. ABC and Legal Modification

ABC [1, 2, 3] is a repair system that detects and fixes conflicts in a theory encoded as logic clauses. Given a benchmark, called the preferred structure (PS), ABC detects whether an input theory violates the benchmark and then applies representational changes to repair any violations. Briefly, an input theory (the original ABC only accepts Datalog theories, but it has recently been extended to first-order logic) consists of:

1) a collection of implication rules and assertions, e.g.

    driver(p, v).  %% Asserting that p is the driver of car v.
    check_injuries(A, B) :- accident(X, B), driver(A, X).
        %% Reading right to left: A should check victim B in an accident.

2) a PS containing sets of positive and negative observed propositions (T(S) and F(S)), where the positive ones are expected and the negative ones should not appear.

ABC repairs a theory by deleting/adding axiom(s) or by language changes, i.e., renaming a predicate/constant or changing the arity of a predicate.
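To make the violation-detection step concrete, the check of a theory against its PS can be sketched as naive forward chaining over the rules. The following Python sketch is purely illustrative (ABC itself is not implemented this way); the facts and rules are those of the running example below.

```python
# Illustrative sketch only (not the ABC implementation): a naive
# forward-chaining evaluator for Datalog-style rules, used to check a
# theory against a preferred structure (PS).  Atoms are tuples such as
# ('driver', 'a', 'x'); strings starting with an upper-case letter are
# treated as variables.

def is_var(term):
    return isinstance(term, str) and term[:1].isupper()

def substitute(atom, binding):
    return tuple(binding.get(t, t) for t in atom)

def unify(atom, fact, binding):
    """Extend binding so that atom matches fact, or return None."""
    if len(atom) != len(fact) or atom[0] != fact[0]:
        return None
    b = dict(binding)
    for t, v in zip(atom[1:], fact[1:]):
        if is_var(t):
            if b.setdefault(t, v) != v:
                return None
        elif t != v:
            return None
    return b

def matches(body, known, binding):
    """Yield every binding under which all body atoms are known facts."""
    if not body:
        yield binding
        return
    for fact in known:
        b = unify(body[0], fact, binding)
        if b is not None:
            yield from matches(body[1:], known, b)

def forward_chain(facts, rules):
    """Derive the closure of facts under rules given as (head, body)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            new = {substitute(head, b) for b in matches(body, known, {})}
            if not new <= known:
                known |= new
                changed = True
    return known

# Scenario and rules from the paper's running example.
facts = {('human', 'a'), ('human', 'b'), ('car', 'x'),
         ('driver', 'a', 'x'), ('accident', 'x', 'b')}
rules = [(('check_injuries', 'A', 'B'),
          [('accident', 'X', 'B'), ('driver', 'A', 'X')]),
         (('legal_liable', 'A', 'X'), [('driver', 'A', 'X')])]

derived = forward_chain(facts, rules)
T_S = {('check_injuries', 'a', 'b'), ('legal_liable', 'a', 'x')}
F_S = set()
print(T_S <= derived)       # True: every expected observation is derivable
print(not (derived & F_S))  # True: no unwanted observation is derivable
```

Under this encoding, a theory violates the PS when some member of T(S) is not derivable or some member of F(S) is; ABC's repair operations aim at removing exactly such violations.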
As a motivating example of using ABC to detect conflicts raised by the introduction of AVs, consider a case of a car accident with an injured victim. The current rules dictate that drivers must get out of their cars to help the injured at the scene of an accident, i.e.,

    %% --- Input: Encoding of scenario ---
    human(a). human(b). car(x). driver(a, x).
    accident(x, b). %% a car accident with victim b

    %% --- Input: Preferred Structure (PS) ---
    T(S) = {check_injuries(a, b), legal_liable(a, x)}, F(S) = {}

    %% --- Input: Encoding of legal responsibility as rules ---
    %% a driver must check the injuries when an accident happens
    check_injuries(A, B) :- accident(X, B), driver(A, X).
    legal_liable(A, X) :- driver(A, X).

where the first part (encoding of scenario) describes a scenario, and the PS provides constraints on the expected and unwanted observations under the rules encoded in the third part. However, this does not work for AVs, because the current legal knowledge base cannot fully explain the new situation with AVs, and the relationship between the concept of 'driver' and the individual 'driving system' is uncertain. ABC can then be used to generate repaired theories. One possible repair can be:

    %% --- Input: New encoding of scenario with AV ---
    human(a1). human(b1).
    car(x1, autonomous). producer(x1, p1).
    driver(a1, x1). accident(x1, b1).

    human(a2). human(b2).
    car(x2, non-autonomous). producer(x2, p2).
    driver(a2, x2). accident(x2, b2).

    %% --- Input: New Preferred Structure (PS) ---
    T(S) = {check_injuries(a1, b1), check_injuries(a2, b2),
            legal_liable(a2, x2), legal_liable(p1, x1)}
    F(S) = {legal_liable(a1, x1), legal_liable(p2, x2)}

    %% --- Output: ABC repairs the encoding of legal responsibility
    %% by adding new preconditions to the rules.
    legal_liable(A, X) :- driver(A, X), car(X, non-autonomous).
    legal_liable(A, X) :- producer(A, X), car(X, autonomous).
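To see that the repaired rules satisfy the new PS, the two output rules can be encoded directly as an ordinary predicate. The following Python sketch is illustrative only (it is not ABC output; the relation names simply mirror the listing above):

```python
# Illustrative check of the repaired legal_liable rules against the
# new preferred structure; the relations mirror the scenario encoding.
car_type = {'x1': 'autonomous', 'x2': 'non-autonomous'}  # car(X, Type)
driver   = {('a1', 'x1'), ('a2', 'x2')}                  # driver(A, X)
producer = {('p1', 'x1'), ('p2', 'x2')}                  # producer(P, X)

def legal_liable(agent, car):
    # Repaired rule 1: the driver is liable only for a non-autonomous car.
    if (agent, car) in driver and car_type[car] == 'non-autonomous':
        return True
    # Repaired rule 2: the producer is liable only for an autonomous car.
    if (agent, car) in producer and car_type[car] == 'autonomous':
        return True
    return False

# T(S): both expected liability claims hold under the repaired rules.
print(legal_liable('a2', 'x2'), legal_liable('p1', 'x1'))  # True True
# F(S): neither unwanted liability claim is derivable.
print(legal_liable('a1', 'x1'), legal_liable('p2', 'x2'))  # False False
```

The added preconditions are what make the difference: without the car(X, Type) tests, the old driver rule would wrongly make a1 liable for the autonomous car x1.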
where ABC repairs the rules for legal_liable by adding new preconditions, namely the highlighted car(X, ...) conditions. As can be seen, in this example ABC repaired the inference rules of a law that could not otherwise fully explain the new situation. This allows reasoning with practical consequences about the autonomous vehicle in this case: the repair distinguishes between autonomous and non-autonomous driving situations, giving different rules for assigning legal responsibility, e.g. by adding a new function legal_representative(). More importantly, these repairs are obtained in the context of established legal goals to be derived and legal outcomes to be avoided.

This process is an important aid to the innovation of the legal system. Firstly, it operates within the context of established goals and maintains internal consistency within the semantics of first-order logic, which guarantees that the new legal system can achieve self-consistency without compromising the original legislative intentions. The same applies to the introduction of various legal values to be protected. Secondly, regarding the problem of the law being unable to complete the reasoning about a new situation, ABC gives a solution that ensures a definite and logical result. Although this repair solution is currently not directly applicable to the law and requires selection and modification by experts, it is still an efficient means of producing a global solution. Finally, for the previously mentioned problem of assigning legal responsibility, ABC also demonstrates in this example the ability to give clear possible criteria for assigning responsibility, together with the reasoning logic behind the assignment. However, it is also important to note that, as stated above, there are still some differences between the solution given by ABC and the way the law is adjusted in reality, so a second revision by experts is required.
For example, when adding new concepts or functions, ABC introduces, as required by first-order logic, symbols that are in themselves meaningless; their legal names and meanings still need to be matched back to the legal text. In addition, the legal interpretation of the same legal concept may change in different contexts, which is not yet reflected in the current ABC. But as an assisting system to help legal experts innovate the law, it is effective in providing feasible solutions for reference.

Figure 2: A possible mapping of ABC repair operations to a legal modification account

Also, in order to filter out irrelevant repairs, additional heuristics will be helpful to make ABC more efficient in the domain of legal modification for AVs. For example, ABC can protect certain predicates and constants from being changed, i.e. constrain the possible repair plans to add new axioms rather than change foundational facts about an individual, e.g., car(x1, autonomous), producer(x1, p1), etc.

It is worth noting that the various possible repair operations and heuristics in ABC, e.g. renaming a constant in an axiom, can be mapped to legislative modifications, e.g. textual modification by substitution. A general mapping between ABC repairs and the legal modification taxonomy modelled in the legislative change management system Akoma-Ntoso is shown in Figure 2. In future, we would like to investigate legal versioning data from Akoma-Ntoso to further establish this mapping and to develop the proposed ABC-based legal revision process into an expert system.

3. Future work & Conclusion

We have discussed the challenges that AVs bring to reforming the existing traffic regulations to accommodate the change in the traditional roles of drivers and legal responsibility. As a solution, we proposed to apply the ABC theory repair system to capture such representational change and suggest possible revision plans.
This approach can arguably aid legal professionals in making adjustments to legal regulations with increased efficiency. Our approach currently requires human experts with a logic background to encode/decode the inputs/outputs, as they are in the form of logic formulas rather than natural language. One possible direction for future work, therefore, is to bridge this gap with natural language processing (NLP) technologies, such as large language models (LLMs), to help legal experts apply our approach to legal drafting and revision. Also, as ABC's preferred structure (PS) is currently restricted to containing only facts, we would like to extend it to support logical rules representing general requirements, e.g., that a driver should have a driving licence. In addition, the search space for repairs can be reduced by designing heuristics that are effective in the application of legal modification.

References

[1] X. Li, A. Bundy, An overview of the ABC repair system for datalog-like theories, in: International Workshop on Human-Like Computing (HLC 2022), volume 28, 2022, p. 30.
[2] A. Bundy, X. Li, Representational change is integral to reasoning, Philosophical Transactions of the Royal Society A 381 (2023) 20220052.
[3] X. Li, Automating the repair of faulty logical theories, 2021.
[4] ABC CogAI paper resource webpage, available at https://colab.research.google.com/drive/111e_HCJbjUshR09QLOL1f8M8vNl6Pk-w?usp=sharing (accessed 2023/09/01), 2023.