<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Legal Logic Programming Framework for Autonomous Vehicles</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Galileo Sartor</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Adam Wyner</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Swansea University</institution>
          ,
          <addr-line>Swansea</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>In this paper, we present a framework for representing and reasoning with traffic rules in Autonomous Vehicles (AVs). We base the work on a fragment of the United Kingdom's Highway Code (HC), which outlines legal requirements and good practices to fulfil a reasonable duty of care. Humans and AVs will have to interact on shared roads, so we propose a unitary, high-level computational model to be used by both types of users, which would represent shared knowledge and practice of road use, road users, and legal rules as they appear in the HC. The road use of AVs should be consistent with the expectations of human actors. We abstract from the specifics of data acquisition, actuator use, and vehicle control to focus on reasoning with the state of the world visible to the vehicle, its intended action, and the reason (i.e., justification) for that action. To provide such a model, we represent portions of the HC in Logical English (LE), a controlled natural language that translates into Prolog, which is then used by the vehicle; it also provides a human-readable interface. The system is composed of multiple agents that have different goals: the vehicle, violation detectors, and a validator that evaluates the violations, taking into consideration mitigating factors. These systems cooperate to obtain an environment where vehicles can reason with rules in a complex, defeasible way, while maintaining safety and responsibility when determining the validity of an action.</p>
      </abstract>
      <kwd-group>
        <kwd>logic programming</kwd>
        <kwd>automated vehicle</kwd>
        <kwd>highway code</kwd>
        <kwd>controlled natural language</kwd>
        <kwd>legal reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Autonomous vehicles (AVs) are expected to be a transformative technology, with the potential to
improve road safety, reduce traffic jams, and in general enhance mobility. We argue that for this to
become reality, AVs should be able to operate in a shared space with human drivers (HVs) and other
road users, making decisions on the basis of the same set of legal (and behavioural) rules that govern
human driving. This is a challenging task, as AVs must be able to interpret and apply these rules in
real-time, while also taking into account the actions of other road users. In this paper, we propose a
framework for the legal reasoning of AVs that is based on the principles of legal reasoning and the use
of formal models.</p>
      <p>The inclusion of a strict legal framework may render AVs less flexible and adaptable to the dynamic
nature of real-world driving. This is particularly important in circumstances where the law may not
provide clear guidance or where strict adherence to the law may lead to dangerous situations. These
circumstances are often easy for human drivers to understand, such as mounting the pavement, or
moving past a red traffic light to leave room for an emergency vehicle.</p>
      <p>In the application of traffic rules there is a distinction between hard constraints, explicit exceptions
(e.g., special rules for safety vehicles), and reasons for violating a rule. While the HC has explicit
constraints, not every scenario is encoded, so a more adaptive reasoning process is needed. To address
this, our framework also allows reasoning with rule breaking (violations). In this context we deal
specifically with those violations that may be seen as mitigated by the driver (in this case the AV).</p>
      <p>Section 2 describes the current state of the art with a short selection of previous works, focusing on
the representation of traffic rules with different formalisations and the issue of rule breaking in driving
scenarios. In Section 3 the general framework is presented, identifying the different actors involved
and their responsibilities and roles. This section also discusses the issue of rule breaking and dealing
with violations. Section 4 delves into the technical details of the logic representation, describing how
the knowledge base is structured, and how traffic rules are combined within the violation reasoning.
Section 5 presents the final element in the framework, the simulations, and describes how they may be
used to validate and analyse the running rulebase. Finally, in Section 6 we summarise the current
state of this ongoing development and plot a direction for future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Autonomous vehicles (AVs) have been a topic of interest for decades, and recent advancements in
technology have made the development of fully autonomous vehicles a realistic goal. There are still
limits to the capabilities of AVs, especially when interacting with other agents. Most examples of
“self-driving cars” (as opposed to vehicles with advanced driving assistance capabilities such as lane changing)
still require human intervention to navigate more complex situations, especially on
roads for which the vehicles were not specifically trained, or on shared roads.</p>
      <p>The focus in development has been mainly on the implementation and improvement of automated
sensors (the world perception) and actuators (the implementation of actions by the AV), and in validating
vehicle behaviour and reasoning in specific complex scenarios such as lane changing. There is also a
separate research direction that aims to more generally understand the complex decision-making that
is required of driving, and how it can be implemented for AVs to navigate real-world traffic scenarios
safely and efficiently.</p>
      <p>
        One of the major hurdles remaining in the development of autonomous vehicles is enabling these
systems to make informed decisions in complex scenarios where traffic regulations may be
context-dependent. As discussed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], many traffic rules function more as situational guidelines than rigid
mandates. For this reason, the adaptation of the existing rulebase (e.g., the UK Highway Code) for AVs is
a complex task, but one that is necessary to ensure that the behaviour of AVs is aligned with the expectations
of the other (human) road users.
      </p>
      <p>The representation of traffic rules, as encoded in documents such as the HC, can follow two main
approaches: (i) a limited set of rules that deal with specific verifiable situations, or (ii) a more
comprehensive set of rules that includes all the provisions relevant to the vehicle.</p>
      <p>
        Instances of the first approach relate to intersections [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ] as well as overtaking and safe
distance calculations [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], which often use languages for formal verification such as Isabelle/HOL [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
This approach is used in trajectory monitoring and planning to ensure the mathematical reasoning is
safe and consistent.
      </p>
      <p>
        The second approach involves a broader set of rules and may introduce more issues related to
exceptions, vagueness, and ambiguity in the source text. Relevant instances of this approach use
defeasible deontic logic (DDL) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] to handle rule exceptions and the resolution of vague terms in
rules. Furthermore, we could consider Prolog representations of intentions and actions [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], focusing on
modelling the rules as combinations of beliefs, intentions, and context.
      </p>
      <p>
        Navigating trafic rules often requires drawing on common sense and situational understanding. For
AVs, enhancing their ability to reason in nuanced contexts may require encoding additional background
knowledge or commonsense reasoning, as suggested in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This enhancement of the reasoning in
complex situations has been addressed in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] by using case-based reasoning, with the knowledge
presented to the vehicle as a set of situations, the execution by the AV, and the assessment (e.g.,
violation, accident, ...).
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], the authors note that human drivers may bend traffic rules without incurring penalties,
adjusting their behaviour according to social expectations and contextual cues. They propose a “Moral
ATA” which incorporates a multi-tiered rule system allowing ethical reasoning to guide or prioritize
possible actions.
      </p>
      <p>Many systems investigate the possibility of applying the rules in a BDI (Belief-Desire-Intention)
agent [10], in order to trace the behaviour of the autonomous vehicle and validate the rules [11].</p>
      <p>Ethical dimensions of AV behaviour have been explored in works like [12], where the presence of an
“ethical knob” allows users to set the vehicle’s moral priorities. This effectively transfers liability from
the manufacturer to the user by empowering passengers to select ethical parameters with regard to
extreme driving scenarios.</p>
      <p>The use of dilemmas like the trolley problem in AV ethics remains contentious. Critics argue that
such binary decision frameworks fail to represent the complexity and variability of real-world driving
[13]. More realistic solutions may emerge from commonsense reasoning approaches, which better
reflect the kinds of trade-offs humans make regularly [14].</p>
      <p>Adaptive frameworks, such as the one discussed in [15], propose integrating reinforcement learning
with monitoring systems that assess in advance whether a legal breach might be justifiable. These
systems attempt to proactively manage rule violations while balancing safety and legality.</p>
      <p>
        The presented work is influenced by the previously mentioned ones, with some differences in the
focus or implementation. With regard to the two main models of representation this work fits in the
second group, i.e., the more comprehensive rulebases, similarly to [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Our goal is to have a version of
the HC that can be shared and understood by both human and machine agents, and for this we strive
for a higher level of isomorphism. We did not consider the issue of enhancing the detection of traffic
cues (lights, signs, ...) as in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], as we are focusing on the reasoning after the detection has occurred.
The work done in perception would happen before our component enters the process. Regarding the
issue of rule breaking/bending expressed in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], we focus less on conflicting rules and more on a generic
determination of rule violation that can be traced and reasoned upon after the fact. The idea of multiple
layers of reasoning is similar to our division of labour and the distinction we keep between traffic rules
and behavioural rules (as will be described in the following sections). Furthermore, the issue of liability
or responsibility highlighted in [12] is left for future determination, focusing at the moment on the
possibility of rule breaking and considering the decisions to come from the AV (not the passenger).
While the proposed system could be used to train an RL model, through feedback coming from the
decision making in long-term scenarios, we propose that the evaluation happen at runtime, with the
“Logic” component being integrated in the AV itself.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. The Framework</title>
      <p>The proposed framework is based on three main components: (i) a formal representation of the legal
rules that govern driving, (ii) a set of reasoning mechanisms that allow AVs to reason with these rules
in real-time, and (iii) a set of mechanisms for handling rule violations, detecting them, and checking for
potentially mitigating circumstances.</p>
      <p>The framework also includes three main agents: (i) the AV, (ii) the violation detectors (e.g., cameras),
and (iii) the validator, which is responsible for assessing whether the detected violations merit a penalty
(to one degree or another) in a given circumstance.</p>
      <sec id="sec-3-1">
        <title>3.1. Division of labour</title>
        <p>We structure the system as a multi-agent simulation, where the autonomous agents are driven in part
by Prolog rules. This makes it possible to have a clear division of labour between the different agents
and the tools used. In particular, the Prolog rules only reflect the traffic rules (Section 4.1) and some
behavioural reasoning with regard to violations (Section 3.3). The physical constraints of the world (not
part of the legal rules) are left to the simulators. An example of this is the physicality of agents: two
agents cannot occupy the same space, or they would cause an accident. This means that we can
have the different vehicles abiding (or not) by the HC rules, and even vehicles acting on the basis of
different rulebases, while still maintaining the same physical constraints.</p>
        <p>This structure enables us to abstract the view that the different components have at runtime, making
it possible to swap components. An example of this is the Simulator. At the current state of development
there are two Simulators, as described in Section 5, but the logic rules can remain the same.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. System Components</title>
        <p>As previously discussed, the system is made of different components and agents. In this section we
briefly describe these components and their purpose. The model is designed to emulate how a similar
system currently works with human drivers, so two agents (vehicles and detectors) only act with the
information they can perceive of the world, while the validator reasons with information it receives
from the other two.</p>
        <sec id="sec-3-2-1">
          <title>3.2.1. Vehicle</title>
          <p>The vehicle is designed in an abstract way, as the specifics of the vehicle’s design are not relevant for
the legal reasoning. We assume the vehicle will have machine learning (ML) components, both for
the detection of its surroundings (through image recognition) and for the act of driving itself. The
proposed Logic component is an addition that validates the actions proposed by the ML model. The
Logic component is responsible for ensuring that the vehicle’s actions are in compliance with the legal
rules and for providing explanations for its decisions. The proposed structure is built to be modular,
so that each component can be replaced, e.g., to allow vehicles to switch between different legal rules
depending on the country they are in. The vehicle is designed with a set of properties that are used
to determine its state, the actions it can take, and their bearing on the validation mechanism. These
properties include (see the sketch after this list):
• The type of vehicle (car, bus, truck, emergency vehicle, etc.): This is important for determining
the rules that apply to the vehicle, as different types of vehicles may be subject to different rules.
• The status of the vehicle (normal, emergency, etc.): This is used to distinguish vehicles that are in
a state that may allow rule breaking (e.g., emergency vehicles heading to an accident, normal
vehicles with potentially mitigating circumstances).
• The Risk status (also called “Behaviour”): This is used to determine the risk level of the vehicle
and to adjust its behaviour accordingly. This is the property that determines a vehicle’s likelihood
of breaking the rules.</p>
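          <p>As a purely illustrative sketch (the predicate names below are our own assumptions, not the exact encoding used by the running system), these properties could be asserted as Prolog facts for the ego vehicle:</p>
          <p>% Illustrative facts describing the ego vehicle (assumed predicate names).
vehicle_type(ego, car).         % car, bus, truck, emergency vehicle, ...
vehicle_status(ego, normal).    % normal, emergency, ...
risk_behaviour(ego, cautious).  % the "Behaviour" / risk status</p>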
        <sec id="sec-3-2-1">
          <title>While driving, the AVs act according to three main components:</title>
          <p>• Belief: The state of the world surrounding the vehicle, as it can perceive it with its sensors (signs, lights,
other road users).
• Justification: The abstract goals the vehicle has (time constraints, preferred plan, etc).
• Intention: The action (or set of actions) the vehicle wants to perform in a given time.</p>
          <p>The first part, the “Belief”, contains all the information the vehicle can detect about its surroundings.
This information will be used when querying the system on the validity of an action. The vehicle in
question will continuously collect the information from its sensors, which could also be used by an ML
component (though not developed here). In this research we are not interested in how the information
is collected, i.e., about the specifics of the sensors or what type they are. The first assumption we
make about the system is that the output of the sensors can be trusted and that by collecting all the
information it is possible to abstract high-level statements about the environment, such as “there is
a traffic light and its colour is red”, or “there is a vehicle oncoming from the right”, regardless of the
specific technical aspects. The implementation or validation of this part is out of the scope for the logic
component, as it relies on the abstract, “human accessible” representation.</p>
          <p>The main difference with existing models is the “Justification” component, which is used to represent
the overarching goal of the vehicle in a given situation. The justification is the high level goal that the
vehicle has (time to reach its destination, emergency needs, etc.) and it will be used to determine the
risk the vehicle is willing to take in safe situations with regard to breaking traffic rules. As an example,
the vehicle might be willing to speed if the other vehicles are going at a similar speed, thus its action
would help it reach its goal without causing accidents. The Justification and how the vehicle reasons
with it is discussed in more detail in Section 4.2.</p>
          <p>Finally, the third part, the “Intention”, in our case is the action the vehicle intends to take in the
current situation “S”. This is usually a single action, such as “entering the junction”, “crossing the traffic
light”, etc. The reasoning component of the vehicle will use this as the query, to determine if this action
is logically permissible in the given context.</p>
          <p>As a simple example consider the following scenario:
1 the vehicle is at a junction.
2 the vehicle sees a traffic light.
3 the traffic light is red.</p>
          <p>In this case, if the vehicle asks whether it can enter the junction, it will pose the query “ego can enter
the junction”, and the predicted response should be false. This is a very simple scenario, but the same
reasoning should be applicable to more complex situations, e.g., with other vehicles, traffic signs, etc.
Note that the listing above represents the scenario in natural language, and the same structure is used
when modelling the rules in Logical English in Section 4.3.</p>
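          <p>To make this concrete, the following is a minimal Prolog-level sketch of the scenario and the permissibility query, with illustrative predicate names rather than the code actually generated from Logical English:</p>
          <p>% Illustrative scenario facts, mirroring the natural-language listing above.
is_at(ego, 'the junction').
sees(ego, 'traffic light').
colour('traffic light', red).

% Illustrative rule: entering the junction is not permitted while the
% traffic light the vehicle sees is red.
can(A, 'enter the junction') :-
    is_at(A, 'the junction'),
    \+ ( sees(A, 'traffic light'), colour('traffic light', red) ).

% The vehicle would pose the query:  ?- can(ego, 'enter the junction').
% With the facts above this query fails, i.e., the answer is false.</p>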
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Violation Detection</title>
        <p>In the proposed system, the violation detection is a static, reactive process, with no real difference from
the current systems in use for human drivers. The detectors are not part of the vehicle, but are external
devices that monitor the road and detect violations, such as speed cameras, red light cameras, and
other types of sensors. The detectors are not able to detect the intention of the vehicle nor auxiliary
information in the circumstances, but only the action that the vehicle is taking. This is a very important
distinction, as it means that the detectors are not able to reason about the violation, but only to detect
it. The “detected violation” is passed on to the validator for further reasoning.</p>
        <p>The violation detectors can be made to detect different types of violations, such as speeding, running
a red light, or entering a junction without stopping, just as current systems do. In this case we are
considering detectors as cameras for speeding and red light detection. We are not considering here
what would happen if the violation is not detected. In this case, the vehicle would still reason with the
rule violation, but the violation would not be detected by the system. It may be that the vehicle is not
aware of the presence of the detectors, so it cannot take them into account when reasoning about the
violation.</p>
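        <p>As a sketch of this division, a detector can be modelled as a rule that maps an observation to a detected violation, without any access to the vehicle's intention or justification (all predicate names here are illustrative assumptions):</p>
        <p>:- dynamic observed_speed/2, observed_action/2, observed_state/2.

% Illustrative speed camera: it only sees the action, not the reasoning.
detected_violation(speeding, Vehicle) :-
    observed_speed(Vehicle, Speed),
    speed_limit_here(Limit),
    Speed &gt; Limit.

% Illustrative red light camera.
detected_violation(red_light_running, Vehicle) :-
    observed_action(Vehicle, 'enter the junction'),
    observed_state('traffic light', red).

speed_limit_here(30).   % example limit for the monitored stretch of road</p>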
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Validator</title>
        <p>The “Validator” is the component responsible for assessing the detected violations, admitting any
mitigating information, and determining whether to apply a penalty. The analogue in the current
environment would be, for example, a police officer or judge. This agent also has the same set of rules as
the vehicle and is able to reason about the same scenario. The validator receives from the detector the
detected violation, and from the vehicle the details of the situation in which the violation occurred. In
this case the scenario is composed of the vehicle’s type and status, the view of its surroundings, and the
reasoning (i.e., the trace of the program execution) in which it happened. Thus each action analysed by
the validator is assigned a penalty or a mitigated status. In the current implementation there is no effect
of the action on further ones by the validator. While the vehicle may alter its behaviour on the basis of
the previous penalties and violations, the validator determination is specific to that action. It would be
possible to construct a more complex validator to keep track of the behaviour over time, and to reason
with multiple concurrent violations. The validator can then use this information to determine if the
violation is exempted or not. Specifically there might be valid exceptions contained in the HC or other
legal provisions (e.g., emergency vehicles, etc.) that may have allowed the vehicle in question to break
the rule. There might also be “Mitigating Circumstances” that may have justified the violation, such as
a sudden change in the environment (e.g., a pedestrian crossing the road), a safety reason (e.g., the need
to let an emergency vehicle pass), or others. While it is possible to model a list of the explicit exceptions
or abnormal situations, it is not possible to have an exhaustive list. To address this we can introduce
a specific term to express abnormal situations that can be satisfied by either the explicit exceptions
or implicit/runtime ones. This can be leveraged by the vehicle to include its own justification in the
execution process, and pass this information to the Validator.</p>
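        <p>A minimal sketch of this determination, under our own simplifying assumption that a detected violation is penalised unless the situation can be shown to be abnormal, either through an explicit exception or through a justification passed on by the vehicle:</p>
        <p>:- dynamic detected_violation/2, vehicle_status/2, has_justification/2.

% Illustrative validator logic: penalise unless the situation was abnormal.
determination(Vehicle, Violation, mitigated) :-
    detected_violation(Violation, Vehicle),
    abnormal_situation(Vehicle).
determination(Vehicle, Violation, penalty) :-
    detected_violation(Violation, Vehicle),
    \+ abnormal_situation(Vehicle).

% Explicit exceptions contained in the HC or other legal provisions ...
abnormal_situation(Vehicle) :- vehicle_status(Vehicle, emergency).
% ... or implicit/runtime mitigating circumstances reported by the vehicle.
abnormal_situation(Vehicle) :- has_justification(Vehicle, 'avoiding an accident').</p>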
        <p>The validator would potentially be able to reason also with case-based scenarios to determine if a
mitigating factor is present or not. This is a very complex task, and it is not the focus of this paper, but
it is a possible future direction for the research.</p>
        <p>If the vehicle is caught violating a rule, and if the validator determines that a penalty should be
applied, the vehicle will be notified of this, and it will consider the penalty as a factor in its future
violation decisions. This will be discussed more in section 4.2.</p>
        <p>While an in depth discussion of the penalty system is beyond the scope of this paper, we can mention
that the penalty system is based on a points system, similar to the one used for human drivers. Vehicles
start with a certain number of points, and each confirmed violation will result in a deduction of points.
At each potential violation the vehicle will check its remaining points, and will thus be less willing to
break rules the fewer points it has.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Knowledge Representation</title>
      <p>In this section, we discuss the knowledge representation used in the system, which is based on the UK
Highway Code (HC) and modelled in Logical English (LE). We will then discuss how the knowledge is
used by the vehicle to reason about its actions and how it deals with the possibility of rule violations.</p>
      <sec id="sec-4-1">
        <title>4.1. Knowledge base</title>
        <p>The Knowledge base is derived from the UK Highway Code (HC), which is a set of rules and guidelines
for road users in the UK. In the HC there are different types of rules. Some describe obligations (e.g.,
you must stop at a red light) and may be linked to a legal provision (e.g., the Road Traffic Act) as well as
a penalty. Others are indications for good driving behaviour, usually using the term “should”. Most
rules in the HC are not legally binding on their own, but they are considered best practices for road
users, and they may be used as a reference by the police and the courts in case of accidents or disputes
so as to allocate liability or severity of penalty (aggravating circumstances). The HC is also used as
a reference by the DVLA (Driver and Vehicle Licensing Agency) for the driving test, which implies
the knowledge that human drivers bring to the road. For this paper, we are focusing on a subset of
the HC, namely the rules used when navigating junctions. Most rules in this section are expressed as
recommendations with a few explicit obligations. This gives us a good starting point to test the system,
as it allows us to reason with the different rules, and how to integrate them in the system.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Behaviour</title>
        <p>The previously mentioned “Justification” of the vehicle is its high level goal, representing the reason
the vehicle has for violating rules, e.g., rushing a patient to the hospital. The vehicle can use this to
determine the best action to take in a given situation, i.e., the “Intention” in that situation. Putting
them together, if the vehicle is in a hurry (thus wishes to arrive at its destination in as little time as
possible), the vehicle may choose to speed up and take more risks and potentially incur more violations.
The system uses this higher level goal to attempt to accomplish each action in as little time as possible
if it is safe to do so, i.e., the immediate action would not cause an accident. In the current state, the
relation between the Justification and Intention is not formally defined, and the implementation is a
proof of concept. But the relation is still a key part of the system that will be developed further. The
determination of this high-level goal is out of the scope of this research, as is the issue of
liability in this determination. We assume that a decision has been made and that the vehicle will try
to alter its behaviour accordingly.</p>
        <p>In the running system, the justification is used to set a risk behaviour for the vehicle, which is a
measure of how likely the vehicle is to break the rules. This is a simple measure done by the vehicle,
which tries to combine the information about the environment with its specific justification. What this
means is that the vehicle will determine that it is more likely to break the rules if it is in a hurry. To
maintain a high degree of safety, the vehicle is aware of the different rules (explicit obligations and
suggestions) and identifies which rules are “safety critical” and which might allow for more variation
depending on context. This distinction is made as simple as possible and is based on the following
criteria: If the vehicle sees another road user that it may collide with in case of a rule violation, it will
stick to the rules that would prevent such accident. If, conversely, the vehicle does not perceive any
other road user, it will be more flexible in its decision making, for instance deciding to speed or not
stop at a junction. This is a very simple Proof of Concept approach, but it is a first step towards a more
complex model.</p>
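        <p>The following sketch captures this proof-of-concept criterion with illustrative predicates (they are assumptions made for exposition, not the system's actual code): a decision is treated as safety critical whenever another road user that could be collided with is perceived, and only otherwise may a rule be bent:</p>
        <p>:- dynamic sees/2, has_justification/2.

% Illustrative distinction between safety-critical and flexible situations.
safety_critical(Vehicle) :-
    sees(Vehicle, 'other road user').

flexible_rule('do not exceed the speed limit').   % example of a rule allowing variation
flexible_rule('stop at the junction').

% The vehicle only considers bending a rule if it has a justification and
% no other road user that it might collide with is perceived.
may_bend_rule(Vehicle, Rule) :-
    has_justification(Vehicle, _Reason),
    flexible_rule(Rule),
    \+ safety_critical(Vehicle).</p>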
        <p>In our framework we distinguish between explicit exception states mentioned in the norms, e.g., the
different behaviour for emergency vehicles, and the violations that may be caused by a normal vehicle.
Furthermore, we use the distinction in the HC between different kinds of norms (a sketch of how this
distinction might be marked follows the list):
• legal requirements, where disobeying such rules means committing a criminal offence. You may
be fined, given penalty points on your licence, or be disqualified from driving;
• advisory norms, which, while not binding, may be used in evidence in any court proceedings under
the Traffic Acts.</p>
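        <p>For illustration only, this distinction could be tagged in the knowledge base roughly as follows (assumed predicate names; the actual encoding is discussed in Section 4.3):</p>
        <p>% Illustrative tagging of the two kinds of HC norms.
norm_type('stop behind the line at a junction', legal_requirement).   % "MUST" rules
norm_type('give way to pedestrians crossing', advisory).              % "should" rules

% Only legal requirements are directly linked to a penalty.
carries_penalty(Rule) :- norm_type(Rule, legal_requirement).</p>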
        <p>In this context the Validator may determine that a penalty should be set, or that there were mitigating
circumstances that made the vehicle’s action admissible, such as avoiding an accident.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Logic</title>
        <p>The traffic rules are encoded in Logical English (LE) [16], a controlled natural language built on Prolog.
LE enables us to write rules and interact with the program in natural language. The LE rules are
automatically converted into Prolog code that is evaluated by a Prolog interpreter, and we can use this
Prolog code in the autonomous vehicle¹.</p>
        <p>The vehicles can query the Prolog code to determine if an action they intend to take is permitted or
if there is a specific obligation/prohibition in that case. The system will provide a simple response and log
the scenario, query, and result for future reference. This may be used in the validation process, when
evaluating violations as mentioned in Section 3.4.</p>
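        <p>A small sketch of such a query-and-log step, assuming an illustrative helper that records the scenario, the query, and the outcome for later use by the validator (none of these names come from the actual system):</p>
        <p>:- dynamic can/2, scenario_fact/1, logged_decision/4.

% Ask whether an intended action is permitted and log the decision.
check_and_log(Vehicle, Action, Result) :-
    ( can(Vehicle, Action) -&gt; Result = permitted ; Result = not_permitted ),
    findall(Fact, scenario_fact(Fact), Scenario),
    assertz(logged_decision(Vehicle, Action, Scenario, Result)).</p>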
        <p>As an example, we can consider the following rules from the HC:
• Rule 170: You should [...] give way to pedestrians crossing or waiting to cross a road into which
or from which you are turning. If they have started to cross they have priority, so give way (see
Rule H2) [...]
• Rule 171: You MUST stop behind the line at a junction with a ‘Stop’ sign and a solid white line
across the road. Wait for a safe gap in the traffic before you move off.</p>
        <p>These rules can be seen as examples of the two types of rules we mentioned before. The first one is
a recommendation, while the second one is an obligation². The second rule is a good example of a
strict rule in the HC; such rules are generally more straightforward to represent in a logical form, whereas
the recommendations are often more open-textured. The recommendations may be violated without
a direct penalty, but if the violation causes an accident or injury to a pedestrian, it will be taken into
account due to failure of duty of care in legal proceedings.</p>
        <p>A possible representation of the two rules in Logical English can be seen in Listing 1.</p>
        <p>To this representation we can add a further condition that the vehicle is not in an abnormal situation,
and further qualify this as follows:
¹ An in-depth description of Logical English or of its use in this context is beyond the scope of this paper, but we provide a
few examples of how it can be used to represent the rules of the road [17].
² In this case the rule is in fact linked to Sec. 36 of the Road Traffic Act 1988 (Drivers to comply with traffic signs).</p>
        <p>Listing 2: Example of Logical English code with abnormal situation</p>
        <p>Listing 3: Example of Prolog code generated from Logical English
1 should(A, 'give way', B) :-
2     is_at(A, 'the junction'),
3     ( '_is'(B, crossing)
4     ; '_is'(B, 'waiting to cross')
5     ).
6 must(A, 'stop behind the line at a junction') :-
7     is_at(A, 'the junction'),
8     sees(A, 'stop sign'),
9     sees(A, 'solid white line across the road').</p>
          <p>Listing 2 shows an example of the dynamic inclusion of the Justification (i.e., an abnormal situation)
in the knowledge base. When the vehicle is created it will incorporate a fact asserting its Justification,
of the form “ego has the justification that ...”. This information would be used by the reasoning system
to determine that the rule has an exception, thus it can be broken. The same information would be
passed by the vehicle to the Validator when reasoning on the validity of the violation and the need for
a penalty. Currently the representation of the violation is a work in progress, as it existed in the above
mentioned form in a previous implementation, and it is being incorporated in the new system.</p>
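        <p>Since the Logical English source of Listing 2 is not reproduced above, the following is only a rough Prolog-level sketch of the idea it describes: the strict rule is conditioned on the absence of an abnormal situation, which in turn can be established by the justification fact asserted when the vehicle is created (all names are illustrative):</p>
        <p>:- dynamic is_at/2, sees/2, has_justification/2.

% Sketch: the obligation applies only in the absence of an abnormal situation.
must(A, 'stop behind the line at a junction') :-
    is_at(A, 'the junction'),
    sees(A, 'stop sign'),
    sees(A, 'solid white line across the road'),
    \+ abnormal_situation(A).

% The abnormal situation can follow from the justification asserted at creation,
% e.g., "ego has the justification that it is rushing a patient to the hospital".
abnormal_situation(A) :- has_justification(A, 'rushing a patient to the hospital').</p>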
          <p>In Listing 3 we can see an example of the Prolog code generated from Logical English and used by
the simulated vehicle. Lines 6-9 are a translation of Rule 171.</p>
          <p>The implementation of the rule-breaking decision system is shown in Listing 4. The first two lines
are only needed in the simulator to determine if the vehicle will attempt to break the rule. At runtime,
this is where the vehicle would determine its own abnormal situation (as described in Section 4.2),
determining if it has a justification for breaking the rule. Lines 4-5 check that vehicle A has obligation
B, rule B has a penalty for violations, and the vehicle has enough tokens to (eventually) pay the fine.
Lines 6-7 check that the vehicle has enough tokens to pay the penalty. Lines 8-11 determine the new
“Behaviour”, and the new set of tokens, after paying the penalty. This will impact the future decision
making of the vehicle, as it will have fewer tokens to pay for future penalties, and thus will be less likely
to break the rules in the future.</p>
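        <p>Listing 4 itself is not reproduced here; the sketch below only mirrors the steps just described (justification, penalised obligation, token check, token and behaviour update), and every predicate name in it is our own illustrative assumption:</p>
        <p>:- dynamic tokens/2, risk_behaviour/2, has_justification/2.

% Sketch of the rule-breaking decision described for Listing 4.
decide_to_break(Vehicle, Obligation) :-
    has_justification(Vehicle, _Reason),        % the vehicle has a justification (abnormal situation)
    must(Vehicle, Obligation),                  % the obligation that would be violated (must/2 as in Listing 3)
    penalty_for(Obligation, Cost),              % the rule carries a penalty for violations
    tokens(Vehicle, Tokens),
    Tokens &gt;= Cost,                          % enough tokens to (eventually) pay the fine
    NewTokens is Tokens - Cost,                 % pay the penalty ...
    retract(tokens(Vehicle, Tokens)),
    assertz(tokens(Vehicle, NewTokens)),
    update_behaviour(Vehicle, NewTokens).       % ... and adjust future decision making

penalty_for('stop behind the line at a junction', 3).   % example penalty cost

% Fewer remaining tokens make the vehicle more cautious in the future.
update_behaviour(Vehicle, Tokens) :-
    ( Tokens &lt; 3 -&gt; Behaviour = cautious ; Behaviour = normal ),
    retractall(risk_behaviour(Vehicle, _)),
    assertz(risk_behaviour(Vehicle, Behaviour)).</p>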
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Simulation</title>
      <p>The behaviour of the system can be visualised in two main modes: (i) the scenario sequence, and (ii)
the simulation. The scenario sequence is a simple list of independent scenarios the simulated vehicle is
going to encounter. For each of these, the vehicle is given the relevant facts and the intended action. The
vehicle will then query the system, act accordingly, log the result, and then move to the next scenario.</p>
      <p>The simulation is a more complex environment, where the vehicle is placed in a simulated environment
(NetLogo or CARLA), and it has to navigate through it, encountering different situations and other
simulated agents. The vehicle is able to detect its surroundings using the simulated sensors, and reasons
only with the high-level facts.</p>
      <p>In both cases the different scenarios are mostly independent, with the only link being the variation
in behaviour of the vehicle.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this paper we presented a framework to enable legal reasoning with traffic rules in autonomous
vehicles. The framework is designed to allow AVs to interpret and apply traffic rules in real-time in
a way that is understandable to other road users. The goal of this work is to model how AVs should
behave in complex shared environments (e.g., in busy city streets). In this scenario, AVs must be able to
reason and behave in a way that is consistent with the expectations of human drivers and other road
users, while also being able to adapt to the dynamic nature of real-world driving without increasing
the burden on the other road users. The framework also includes a set of mechanisms for handling
rule violations, and a set of reasoning mechanisms that allow AVs to interpret and apply these rules in
real-time.</p>
      <p>The proposed framework will be expanded in future work in a twofold way. First, the rulebase will
be expanded, increasing the number of rules and violation conditions. In addition, we will improve the
tractability of the reasoning process. Further, we could consider how rule breaking can be integrated into
constructing plans by simulating the potential scenarios encountered. Secondly, the integration with
simulation systems will be further developed with the goal of extracting metrics on the system while
running, to enable more in depth comparisons of the various approaches and changes to the model.
This could help in investigating the interaction of rules that express obligations (those marked with
“must” or “shall”, and linked with penalties) and the other rules. Finally the validator system could be
made more complex, analysing also the historic behaviour of a vehicle over time, and determining if
the feedback loop onboard the vehicle requires analysing also undetected violations.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <sec id="sec-7-1">
        <title>The author(s) have not employed any Generative AI tools.</title>
        <p>[10] M. Bratman, Intention, Plans, and Practical Reason, Cambridge, MA: Harvard University Press,</p>
        <p>Cambridge, 1987.
[11] G. V. Alves, L. Dennis, M. Fisher, A Double-Level Model Checking Approach for an Agent-Based
Autonomous Vehicle and Road Junction Regulations, Journal of Sensor and Actuator Networks 10
(2021) 41.
[12] G. Contissa, F. Lagioia, G. Sartor, The ethical knob: ethically-customisable automated vehicles and
the law, Artificial Intelligence and Law 25 (2017) 365–378.
[13] D. Cecchini, S. Brantley, V. Dubljević, Moral judgment in realistic trafic scenarios: moving
beyond the trolley paradigm for ethics of autonomous vehicles 40 (2025) 1037–1048. URL: https:
//doi.org/10.1007/s00146-023-01813-y. doi:10.1007/s00146-023-01813-y.
[14] J. De Freitas, A. Censi, B. Walker Smith, L. Di Lillo, S. E. Anthony, E. Frazzoli, From driverless
dilemmas to more practical commonsense tests for automated vehicles 118 (2021) e2010202118.</p>
        <p>URL: https://www.pnas.org/doi/10.1073/pnas.2010202118. doi:10.1073/pnas.2010202118.
[15] J. Liu, W. Zhou, H. Wang, Z. Cao, W. Yu, C. Zhao, D. Zhao, D. Yang, J. Li, Road Trafic Law
Adaptive Decision-making for Self-Driving Vehicles, in: 2022 IEEE 25th International Conference
on Intelligent Transportation Systems (ITSC), 2022, pp. 2034–2041. URL: http://arxiv.org/abs/2204.
11411. doi:10.1109/ITSC55140.2022.9922208, arXiv:2204.11411 [cs].
[16] R. Kowalski, J. Dávila, M. Calejo, Logical english for legal applications, Conference: XAIF, Virtual</p>
        <p>Workshop on XAI in Finance (2021).
[17] G. Sartor, A. Wyner, G. Contissa, Mind the gaps: Logical english, prolog, and multi-agent systems
for autonomous vehicles, in: P. Cabalar, F. Fabiano, M. Gebser, G. Gupta, T. Swift (Eds.), Proceedings
40th International Conference on Logic Programming, ICLP 2024, University of Texas at Dallas,
Dallas Texas, USA, October 14-17 2024, volume 416 of EPTCS, 2024, pp. 111–124. URL: https:
//doi.org/10.4204/EPTCS.416.9. doi:10.4204/EPTCS.416.9.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.</given-names>
            <surname>Prakken</surname>
          </string-name>
          ,
          <article-title>On the problem of making autonomous vehicles conform to traffic law</article-title>
          ,
          <source>Artif. Intell. Law</source>
          <volume>25</volume>
          (
          <year>2017</year>
          )
          <fpage>341</fpage>
          -
          <lpage>363</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Maierhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Moosbrugger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Althoff</surname>
          </string-name>
          ,
          <article-title>Formalization of intersection traffic rules in temporal logic</article-title>
          ,
          <source>in: 2022 IEEE Intelligent Vehicles Symposium (IV)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1135</fpage>
          -
          <lpage>1144</lpage>
          . URL: https://ieeexplore.ieee.org/document/9827153. doi:10.1109/IV51971.2022.9827153.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rizaldi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Keinholz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Huber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Feldle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Immler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Althoff</surname>
          </string-name>
          , E. Hilgendorf, T. Nipkow,
          <article-title>Formalising and monitoring traffic rules for autonomous vehicles in Isabelle/HOL</article-title>
          , in: N.
          <string-name>
            <surname>Polikarpova</surname>
          </string-name>
          , S. Schneider (Eds.),
          <source>Integrated Formal Methods, Cham</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>66</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Costescu</surname>
          </string-name>
          ,
          <article-title>Autonomous vehicles' safety in mixed traffic: Accounting for incoming vehicles when overtaking</article-title>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . URL: https://ieeexplore.ieee.org/document/8893110. doi:10.1109/EV.2019.8893110.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>Bhuiyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Governatori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rakotonirainy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mahajan</surname>
          </string-name>
          ,
          <article-title>Driving decision making of autonomous vehicle according to Queensland overtaking traffic rules</article-title>
          ,
          <source>The Review of Socionetwork Strategies</source>
          <volume>17</volume>
          (
          <year>2023</year>
          )
          <fpage>233</fpage>
          -
          <lpage>254</lpage>
          . URL: https://doi.org/10.1007/s12626-023-00147-x. doi:10.1007/s12626-023-00147-x.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Collenette</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Dennis</surname>
          </string-name>
          , M. Fisher,
          <article-title>Advising autonomous cars about the rules of the road</article-title>
          ,
          <source>Electronic Proceedings in Theoretical Computer Science</source>
          <volume>371</volume>
          (
          <year>2022</year>
          )
          <fpage>62</fpage>
          -
          <lpage>76</lpage>
          . URL: http://arxiv.org/abs/2209.14035. doi:10.4204/EPTCS.371.5. arXiv:2209.14035 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kothawade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Khandelwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Basu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          , G. Gupta, AUTO-DISCERN:
          <article-title>Autonomous driving using common sense reasoning</article-title>
          , in: J.
          <string-name>
            <surname>Arias</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>A. D'Asaro</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Dyoub</surname>
            , G. Gupta,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Hecher</surname>
            , E. LeBlanc,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Peñaloza</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Salazar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Saptawijaya</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Weitkämper</surname>
          </string-name>
          , J. Zangari (Eds.),
          <source>International Conference on Logic Programming 2021 Workshops</source>
          , volume
          <volume>2970</volume>
          <source>of CEUR Workshop Proceedings</source>
          , CEUR,
          <year>2021</year>
          . URL: https://ceur-ws.org/Vol-2970/#gdepaper7.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Vacek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gindele</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Zollner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dillmann</surname>
          </string-name>
          ,
          <article-title>Using case-based reasoning for autonomous vehicle guidance</article-title>
          ,
          <year>2007</year>
          , pp.
          <fpage>4271</fpage>
          -
          <lpage>4276</lpage>
          . URL: https://ieeexplore.ieee.org/document/4399298. doi:10.1109/IROS.2007.4399298. ISSN: 2153-0866.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rakow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schwammberger</surname>
          </string-name>
          , Brake or Drive:
          <article-title>On the Relation Between Morality and Traffic Rules when Driving Autonomously</article-title>
          , in: Software Engineering 2023 Workshops,
          <year>2023</year>
          , p.
          <fpage>104</fpage>
          . URL: https://publikationen.bibliothek.kit.edu/1000169851. doi:10.18420/se2023-ws-12.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] M. Bratman, Intention, Plans, and Practical Reason, Harvard University Press, Cambridge, MA, 1987.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] G. V. Alves, L. Dennis, M. Fisher, A Double-Level Model Checking Approach for an Agent-Based Autonomous Vehicle and Road Junction Regulations, Journal of Sensor and Actuator Networks 10 (2021) 41.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] G. Contissa, F. Lagioia, G. Sartor, The ethical knob: ethically-customisable automated vehicles and the law, Artificial Intelligence and Law 25 (2017) 365–378.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] D. Cecchini, S. Brantley, V. Dubljević, Moral judgment in realistic traffic scenarios: moving beyond the trolley paradigm for ethics of autonomous vehicles 40 (2025) 1037–1048. URL: https://doi.org/10.1007/s00146-023-01813-y. doi:10.1007/s00146-023-01813-y.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] J. De Freitas, A. Censi, B. Walker Smith, L. Di Lillo, S. E. Anthony, E. Frazzoli, From driverless dilemmas to more practical commonsense tests for automated vehicles 118 (2021) e2010202118. URL: https://www.pnas.org/doi/10.1073/pnas.2010202118. doi:10.1073/pnas.2010202118.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] J. Liu, W. Zhou, H. Wang, Z. Cao, W. Yu, C. Zhao, D. Zhao, D. Yang, J. Li, Road Traffic Law Adaptive Decision-making for Self-Driving Vehicles, in: 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), 2022, pp. 2034–2041. URL: http://arxiv.org/abs/2204.11411. doi:10.1109/ITSC55140.2022.9922208. arXiv:2204.11411 [cs].</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] R. Kowalski, J. Dávila, M. Calejo, Logical English for legal applications, in: XAIF: Virtual Workshop on XAI in Finance, 2021.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] G. Sartor, A. Wyner, G. Contissa, Mind the gaps: Logical English, Prolog, and multi-agent systems for autonomous vehicles, in: P. Cabalar, F. Fabiano, M. Gebser, G. Gupta, T. Swift (Eds.), Proceedings 40th International Conference on Logic Programming, ICLP 2024, University of Texas at Dallas, Dallas, Texas, USA, October 14-17 2024, volume 416 of EPTCS, 2024, pp. 111–124. URL: https://doi.org/10.4204/EPTCS.416.9. doi:10.4204/EPTCS.416.9.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>