<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>LDAC</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Representing Normative Regulations in OWL DL for Automated Compliance Checking Supported by Text Annotation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ildar Baimuratov</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Denis Turygin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ITMO University</institution>
          ,
          <addr-line>St. Petersburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>L3S Research Center, Leibniz University Hannover</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>TIB - Leibniz Information Centre for Science and Technology</institution>
          ,
          <addr-line>Hannover</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>13</volume>
      <fpage>09</fpage>
      <lpage>11</lpage>
      <abstract>
        <p>Compliance checking involves determining whether a regulated entity adheres to applicable normative regulations. Currently, this process is largely manual, requiring considerable time and expert knowledge, while remaining prone to human error. To address these challenges, various approaches to automating compliance checking have been explored, including Machine Learning. However, the reliability of ML-based methods remains questionable due to issues such as hallucinations, lack of transparency, and limited reproducibility. In contrast, symbolic reasoning is inherently accurate, reproducible, and explainable. While recent work has predominantly relied on SHACL, this study advocates for modeling regulations using OWL DL. It provides several advantages, such as a more human-readable syntax, semantics grounded in decidable Description Logics, the ability to detect redundant or inconsistent regulations through reasoning and generating explanations that ensure traceability. To support this approach, we introduce an annotation schema and a corresponding algorithm that transforms regulatory text annotations into OWL DL code. We validate our method through a proof-of-concept implementation applied to examples from the building construction domain.</p>
      </abstract>
      <kwd-group>
        <kwd>Normative Regulations</kwd>
        <kwd>Text Annotation</kwd>
        <kwd>OWL DL</kwd>
        <kwd>Reasoning</kwd>
        <kwd>Compliance checking</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Normative regulations govern business processes, industry, law, and various other domains. The
process of verifying whether a regulated entity meets these regulations is known as compliance
checking. Currently, this process is predominantly manual, requiring significant time and expertise
while remaining prone to human error. For instance, in building construction, a single compliance review
cycle can take several weeks, and multiple review cycles may be necessary due to design modifications.
Non-compliance with building regulations can result in fines, penalties, or even criminal prosecution.
Moreover, studies have shown significant discrepancies in manual code reviews, with different plan
review departments often reaching inconsistent conclusions when evaluating the same set of plans [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
Additionally, the redundancy of building codes contributes to inefficiencies and increases the likelihood
of errors during the compliance checking process [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>Thus, there is a clear need to automate compliance checking. However, normative regulations are
typically presented in human-readable formats, making them incompatible with software processing.
Compared to manual compliance checking, automated compliance checking (ACC) is expected to
improve efficiency by reducing time, costs, and errors. However, existing compliance checking systems,
such as the Solibri1 building model checker, rely on manually created, hard-coded, proprietary rules
to represent normative regulations. While effective for a specific set of regulations within a given
timeframe, this rigid approach requires significant effort to adapt to different regulatory codes and
maintain over time. Machine learning (ML), particularly large language models (LLMs), offers potential
support for ACC. However, the trustworthiness of ML models remains questionable due to issues such
as hallucinations, lack of transparency, and limited reproducibility — critical factors in responsible
domains like building construction.</p>
      <p>
In contrast, symbolic reasoning is inherently accurate, reproducible, and explainable. In recent years,
researchers have investigated the use of Semantic Web technologies, such as RDF, OWL, SPARQL, SWRL,
and SHACL for compliance checking. However, to the best of our knowledge, no study has applied
OWL DL2 reasoning for ACC. Among the most relevant works, Fitkau and Hartmann [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] modeled part
of building regulations using OWL DL but processed them solely with DL querying. Nuyts et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
employed OWL DL to check information availability, while compliance constraints were modeled using
SPARQL, SWRL, and SHACL. However, query-based approaches, such as SHACL and DL Query, have
several limitations, including the lack of established semantics; the absence of consistency checking;
the inability to trace violations back to their sources; not fully human-readable syntax; and
dependence on the RDF graph structure. In contrast, OWL DL reasoning offers several advantages, such
as a standardized, human-readable (Manchester) syntax; semantics grounded in decidable description
logics (DLs); independence from data complexity; explanations that ensure traceability; and the
identification of redundant or inconsistent regulations. The primary challenge that hinders researchers from using
OWL is the open-world assumption (OWA). However, we argue that OWL DL is expressive enough to
produce the same results as closed-world reasoning, provided that the data is modeled correctly.
      </p>
      <p>
        Converting textual regulations into machine-readable formats such as OWL DL remains a challenging
task. Various approaches have been explored to facilitate the formalization of regulations, including
NLP techniques to generate Prolog clauses [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] or SHACL shapes [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], deep learning for LegalRuleML [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ],
and Large Language Models (LLMs) for SPARQL [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. However, since modeling regulations in OWL
DL has been largely unexplored, there are no studies on translating regulation texts into OWL DL. To
address these gaps, we propose a text annotation schema and an algorithm for automatically converting
annotated regulations into OWL DL code. The annotation schema facilitates the alignment of text with
the regulations’ semantics, making it accessible to domain experts. It also leverages existing annotation
tools, removing the need for custom formalization interfaces, and opens the way for future integration
with machine learning models to support the annotation process.
      </p>
      <p>The contributions of this research include: 1) An approach for representing normative regulations in
OWL DL that enables ACC through OWL reasoning. 2) A text annotation schema and an algorithm for
automatically converting annotated regulations into OWL DL code. The proposed approach is validated
through a proof-of-concept implementation applied to examples from the building construction domain.</p>
      <p>The paper is organized as follows: section 2 reviews the relevant research, section 3 describes the
proposed text annotation schema, section 4 presents the algorithm for converting regulations into OWL
DL code, section 5 demonstrates the proof of concept, section 6 discusses the advantages of using OWL
DL, and section 7 concludes the work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>In this section, we review research relevant to our work, focusing on approaches for machine-readable
representation of regulations and the streamlining of their formalization.</p>
      <sec id="sec-2-1">
        <title>2.1. Regulation representation</title>
        <p>Normative regulations are studied across various disciplines, including law, legal reasoning, deontic
logic, and artificial intelligence. Researchers have explored the formalization of these regulations to
facilitate ACC. In this subsection, we focus specifically on the manual conversion of the regulations.</p>
        <p>
          Hashmi et al. [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] proposed an abstract regulatory compliance framework for business processes
based on deontic logic. Among machine-readable representation, LegalRuleML [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] extended the syntax
of RuleML3 with concepts and features specific to legal norms. However, LegalRuleML provides only
mechanisms to capture and represent different interpretations of legal norms, without relying on any
specific logical framework. Gandon et al. [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] explored the application of Semantic Web frameworks to
the formalization and processing of normative regulations. They built upon the LegalRuleML model,
incorporating notions of regulatory compliance from [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Their approach modeled states of affairs as
named graphs and utilized SPARQL for querying. Francesconi [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] also utilized SPARQL to reason
over legislation within a DL framework. Lam and Hashmi [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] presented a method for transforming
legal norms from LegalRuleML into a variant of Modal Defeasible Logic. This transformation was
implemented as an extension to the DL reasoner SPINdle.
        </p>
        <p>
          In the building construction domain, the use of Semantic Web technologies has also been evaluated.
Yurchyshyna and Zarli [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] represented constraints with SPARQL queries. Pauwels et al. [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] utilized
N3 logic. The SWRL language was used to encode rules in [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] and [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Fitkau and Hartmann
[
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] developed a Fire Safety Ontology using SPARQL, DL Query and SWRL. The use of SHACL for
compliance checking has been evaluated in [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], [19], [20], [21], [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] and [22]. The use of OWL was
previously demonstrated in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], however, the authors applied it solely to model information availability
constraints, while compliance constraints were modeled using SPARQL, SWRL, or SHACL.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Streamlining regulation formalization</title>
        <p>
          Converting textual rules into machine-readable formats is a challenging task. Researchers have explored
the potential of NLP and ML techniques to translate natural language regulatory texts into specific
representation formats. Hjelseth and Nisbet [23] introduced an annotation schema for normative texts
called RASE, although the study does not address the conversion of these annotations into an executable
format. Zhang and El-Gohary [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] proposed an ACC system that uses NLP techniques to automatically
extract normative information from documents and convert it into logical rules, opting for first-order
logic and its implementation in B-Prolog. Donkers and Petrova [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] leveraged the linguistic structure of
sentences to automatically generate SHACL representations using predefined templates. Recent studies
also explore automating regulation formalization with LegalRuleML via deep learning [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] or with
SPARQL using LLMs [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. To address the lack of resources for ML, Hettiarachchi et al. [24] introduced
CODE-ACCORD, a dataset of sentences from the building regulations of England and Finland, annotated
with a custom set of entities and relations not grounded in any formal framework.
        </p>
        <p>In contrast to these approaches, we propose an annotation schema that directly aligns with OWL
DL syntax, ensuring that the translation of annotated regulations is both transparent and
human-controllable. The intermediate annotation step facilitates the alignment of the text with the regulations’
semantics, making it more accessible to domain experts, and leverages existing annotation tools,
eliminating the need for custom formalization interfaces. Finally, machine learning models can be
trained on these annotations to further assist in the annotation process.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Annotation Schema</title>
      <p>In this section, we introduce a schema for annotating regulatory text to streamline its translation into
OWL DL code. The annotation schema includes three layers: Terms, Semantic Types, and Semantic
Roles. Each layer includes both span-based tags and arrows connecting them. For clarity, annotation
tags are written in italics and OWL expressions are written in teletype.</p>
      <p>Terms. The first annotation layer focuses on domain terms. This layer is essential for aligning the
formalized regulations with the domain data for validation and for enabling efficient search and filtering
of regulations. This layer utilizes domain vocabularies or ontologies as tags. For instance, in the building
construction domain, Building Information Models (BIM) [25] are defined using the Industry Foundation
Classes (IFC)4. Its OWL serialization, ifcOWL [26], can be employed as values within the Term layer.
Semantic Types. The Semantic Types layer includes tags that correspond to syntactic elements of the
OWL DL language, enabling the construction of class restrictions. We primarily use Manchester syntax5
for tag names, though some are slightly modified for better readability by domain experts. For example,
owl:ObjectProperty is represented by the tag Relation, and owl:DatatypeProperty corresponds to
Property. The mapping of these tags to OWL is provided in Table 1. Additionally, the Semantic Types
layer includes the specific arrows Domain, Range and Of. Their possible starting and ending tags are
provided in Table 2.</p>
      <p>If a Term tag appears on the same tokens as a Semantic Type tag, the latter might seem redundant.
However, our goal is to provide a comprehensive representation of the regulation’s semantics at the
Semantic Type layer, independent of the specific vocabulary used in the Term layer. This approach
ensures that Semantic Types can be reused across different domain models (or even without any). As a
consequence, raw OWL code may contain multiple instances of the same concept, reflecting its various
spellings and synonyms. However, since our aim is not to create a single ontology for all regulations
but rather a set of individual OWL DL programs, each representing a regulation, we do not need to
align these variations.</p>
      <p>Semantic Roles. Semantic Roles are essential for constructing axioms in the OWL DL language.
OWL axioms are statements that are asserted to be true within the domain of interest. In OWL, two
types of assertions are possible: one about the relation between individuals, and the other about the
membership of an individual or class in a class. In the context of normative regulations, we focus on
asserting relationships between restricted classes (annotated with Semantic Types), so only the relation
owl:SubClassOf, or General Class Inclusion (GCI), is needed. To annotate this relation, two semantic
roles are introduced: 1) Subject, which represents the subclass, and 2) Requirement, which represents
the superclass. In other words, the Subject role denotes the OWL class to which the regulation applies,
while the Requirement role represents the class containing the imposed regulation. The arrow between
Subject and Requirement, corresponding to the owl:SubClassOf relation is annotated with the tag To,
resulting in the triple &lt;Subject, To, Requirement&gt;.</p>
      <sec id="sec-3-1">
        <title>Linguistic arrows</title>
        <p>(Footnotes: 4. https://technical.buildingsmart.org/standards/ifc; 5. https://www.w3.org/TR/owl2-manchester-syntax/)</p>
        <p>In addition to arrows that represent the semantic relationships between different
tags, the proposed annotation schema also includes linguistic arrows, which only connect separate
pieces of text related to the same tag. There are three linguistic arrows with distinct properties, which
are described as follows:
• Concatenation: a → b ⇒ ab,
• Distribution: a → b, a → c ⇒ ab, ac,
• Self-Distribution: a → b ⇒ a, ab,
where a, b, and c are pieces of text and ab or ac represent a and b or a and c, respectively, concatenated.
Self-distribution can be considered as Distribution in which one of the strings is empty:
a → ∅, a → b ⇒ a, ab.</p>
        <p>However, since not all tools allow annotating an empty line in the text, we introduce Self-distribution as
a separate arrow.</p>
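        <p>The three linguistic arrows can be sketched as plain string operations. This is our own minimal reading of the schema, not the authors’ code; function names and the space-joining convention are illustrative assumptions.</p>
        <p>
```python
# Illustrative sketch (ours): the three linguistic arrows of the
# annotation schema modeled as operations on text fragments.

def concatenation(a, b):
    # a -> b  yields the concatenated fragment ab
    return [a + " " + b]

def distribution(a, b, c):
    # a -> b, a -> c  yields ab and ac
    return [a + " " + b, a + " " + c]

def self_distribution(a, b):
    # a -> b  yields a itself and ab (Distribution with one empty string)
    return [a, a + " " + b]

print(self_distribution("the beams", "in it"))
```
        </p>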
        <p>Annotation Interface. To annotate regulations following the proposed schema, we use the
INCEpTION tool [27]. One of the key advantages of this tool is its ability to import ontologies, which can
then be used as annotation tags. In our use-case, the imported ifcOWL ontology serves as the tagset for
the Term layer, and tagsets for other layers are created manually. Additionally, this tool supports the
integration of ML-based recommenders, which can streamline the annotation process in future work.
The original regulations are imported into INCEpTION in TXT format, and the annotations are exported
in WebAnno TSV v3.3. Example 1 and Example 2 illustrate annotated regulations, one qualitative and
the other quantitative.</p>
        <p>Example 1. As an example, consider a qualitative regulation from the building construction domain
with the text “If the degree of fire resistance of a building is III, then the fire resistance limit of the
beams in it should be R15”. In this regulation, we annotate the terms ifcBuilding, ifcBeam, FIRESAFETY,
and FIREPROTECTION. The first two are annotated with the Semantic Type Class, while the latter two
are annotated as Property. Additionally, “III” and “R15” are annotated as Literal, the preposition “in” is
labeled as a Relation, and “should be” is annotated with the tag Only. Finally, the phrases “If the degree
of fire resistance of a building is III” and “the beams in it” are connected with a Concatenation arrow
and form together a subject of the regulation. Similarly, the phrases “then the fire resistance limit”
and “should be R15” are also connected with a Concatenation arrow and identified as the regulation’s
requirement. Figure 1 shows the annotation of this regulation within the INCEpTION interface.
Example 2. As a quantitative requirement, consider the following: “For buildings with a capacity of
not more than 300 students the height of classrooms must be at least 3.0 m”. Compared to Example 1, the
subject of this regulation contains a chain consisting of the relation “for” and the property “capacity”,
as well as the constrained data type “not more than 300”. Its requirement also includes the constrained
data type “at least 3.0”. Figure 2 illustrates the annotation of this regulation.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Transforming Annotations into OWL DL</title>
      <p>In this section, we present our algorithm for transforming annotated regulations into OWL DL code,
which enables the classification of domain objects into regulation subjects and the subsequent checking
of their compliance with the imposed requirements with OWL reasoning. The algorithm is divided
into several steps: 1) data preprocessing, 2) generating elementary OWL entities, 3) aligning domain
terms with semantic types, 4) vocabulary-based mapping of numbers and comparisons, 5) constructing
class restrictions, and finally, 6) constructing axioms. The algorithm is recursive, allowing it to process
annotations of any complexity.</p>
      <p>Preprocessing. The algorithm takes as input a TSV file exported from INCEpTION, which contains
the annotated regulation in a tabular format. In this table, each row corresponds to a word-based token,
indexed by its order of appearance in the text, and each column represents a distinct annotation layer.
Before generating OWL code, the data undergoes preprocessing in two stages. First, linguistic arrows
are processed, so that each table row corresponds to exactly one tag. New tags derived from these
operations are added to the table, while the old ones are removed. Second, tokens are grouped and
extracted into three separate tables, each corresponding to an annotation layer: the table of Terms T_T,
the table of Semantic Types T_ST, and the table of Semantic Roles T_SR. Additionally, we use lowercase
letters to denote single elements, while uppercase denotes sets; for example, t refers to a single row
from the table of Terms T_T. Table 3a, Table 3c, and Table 3b present the layer tables for Example 1 and
Table 4a, Table 4c, and Table 4b correspond to Example 2.</p>
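      <p>The layer-splitting step can be sketched as follows. This is a hedged simplification: a real WebAnno TSV v3.3 export carries header lines and character offsets, whereas the columns assumed here (index, token, term, semantic type, semantic role) are our own reduction for illustration.</p>
      <p>
```python
import csv
import io

# Hedged sketch of the preprocessing step: split a (simplified) annotated
# export into one table per annotation layer. The column layout is an
# assumption, not the actual WebAnno TSV v3.3 schema.

tsv = (
    "1\tbeams\tifcBeam\tClass\tSubject\n"
    "2\tin\t_\tRelation\tSubject\n"
    "3\tbuilding\tifcBuilding\tClass\tSubject\n"
)

terms, sem_types, sem_roles = [], [], []
for index, token, term, stype, role in csv.reader(io.StringIO(tsv), delimiter="\t"):
    if term != "_":                      # "_" marks a token without a Term tag
        terms.append((index, token, term))
    sem_types.append((index, token, stype))
    sem_roles.append((index, token, role))

print(len(terms), len(sem_types), len(sem_roles))
```
      </p>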
      <p>Generating entities. To transform an annotated regulation into OWL DL code, we start with an
ontology O that includes entities imported from a domain ontology corresponding to T_T (in our use-case,
ifcOWL). The first step involves generating elementary entities in O based on the tags Class, Relation,
and Property from T_ST, according to the mappings in Table 1. Additionally, OWL classes are created
based on the tags Subject and Requirement from T_SR. The original tokens from the regulation text are
assigned as labels for the created entities. Figure 3a demonstrates the OWL entities generated based on
Example 1 and Figure 3b for Example 2.
Entity alignment. Once the elementary entities are created, we establish connections between them
across different annotation layers based on token inclusion. Specifically, for each Term tag, we check
whether its tokens also belong to any Class tag. If so, we add the owl:equivalentTo relation. These
connections ensure that domain objects align with regulation classes and are further classified through
class restrictions into regulation subjects. In Example 1, the equivalence between the ifc:IfcBeam
class and the beams OWL class is generated. This guarantees that any object identified as belonging to
the ifc:IfcBeam class in a BIM model also belongs to the beams class in the ontology. The same for the
ifc:IfcBuilding and buildings classes.</p>
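      <p>The token-inclusion check behind the alignment can be sketched as follows; the token-index sets and the subset criterion are our own illustrative reading of the step, not the authors’ implementation.</p>
      <p>
```python
# Sketch (hypothetical data shapes): a Term tag whose token span falls
# inside a Class tag's span yields an equivalence between the
# domain-ontology class and the regulation class.

terms = {"ifcBeam": {4}, "ifcBuilding": {8}}            # token indices per Term tag
classes = {"beams": {4, 5}, "buildings": {7, 8, 9}}     # token indices per Class tag

equivalences = [
    (term, cls)
    for term, t_tokens in terms.items()
    for cls, c_tokens in classes.items()
    if t_tokens.issubset(c_tokens)      # Term tokens contained in Class tokens
]
print(equivalences)
```
      </p>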
      <p>Vocabulary mapping. To balance annotation complexity with ontology generation accuracy, we
employ vocabulary-based mappings for certain tokens from T_ST to OWL operators. Specifically, we
define two mappings: a number mapping, which matches strings with integers for constructing cardinality
restrictions, and a comparison mapping from strings to {≤, =, ≥}, which is used to define constrained
data types. For instance, in Example 2, the token “not more than”, labeled with the tag Comparison, is
mapped through the comparison mapping to xsd:maxInclusive. For the number mapping, no relevant
cases appear in our examples; however, it could be used to map, for instance, the word “two” to the integer 2.</p>
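      <p>A minimal sketch of the two vocabularies follows; the entries are illustrative examples drawn from the text, not the paper’s full mapping tables.</p>
      <p>
```python
# Sketch of the two vocabulary-based mappings (entries are illustrative).
# Comparison phrases map to XSD facets for constrained datatypes; number
# words map to integers for cardinality restrictions.

COMPARISON = {
    "not more than": "xsd:maxInclusive",   # Example 2: capacity at most 300
    "at least": "xsd:minInclusive",        # Example 2: height at least 3.0 m
}
NUMBER = {"one": 1, "two": 2, "three": 3}

print(COMPARISON["not more than"], NUMBER["two"])
```
      </p>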
      <p>Constructing class restrictions. Next, we use the remaining tags from T_ST to construct restricted
classes based on the previously generated elementary OWL entities. For brevity, we refer to the
combined set of ObjectProperties and DataProperties as predicates. First, we identify the set of
range classes and literals, denoted as R, that appear at the end of predicate chains, i.e. those that
are not domains for other predicates. We then apply backward induction to iteratively construct
intermediate restricted classes utilizing the T_ST table. In each iteration, we check if any Not or Or tags
are associated with elements in R and apply owl:complementOf or owl:unionOf respectively. For
each r ∈ R (a class if it corresponds to a Relation or a literal if it corresponds to a Property), we identify
its incoming predicates P through T_ST and determine for every predicate its associated restriction.
Predicate restrictions are generated using the tags Some, Only or Number. If a predicate has no annotated
restriction, we assign Some by default if it belongs to the subject of the regulation and Only if it belongs
to the requirement. The number mapping is used to define cardinality restrictions. If a Literal is accompanied
by a Comparison tag, the comparison mapping is used to generate a constrained data type. Each intermediate
restricted class c is added to the resulting set C, while the original r is removed from R. These
intermediate restricted classes are then passed to the next iteration of the algorithm. Once all predicate
chains are processed, the terminal restricted classes along with any remaining classes in R, i.e. those
that are not ranges of any predicates but possibly with complements or unions, form the complete set
of building blocks for defining the regulation’s subject and requirement. Algorithm 1 outlines the
recursive construction of a set of intermediate restricted classes.</p>
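      <p>The core backward-induction idea can be sketched as folding a predicate chain into a nested restriction, innermost range first. Data shapes and names here are ours, for illustration only, not the authors’ implementation.</p>
      <p>
```python
# Hedged sketch: fold a predicate chain into a nested Manchester-style
# restriction, starting from the terminal range and working outward.

def fold_chain(chain, terminal):
    """chain: list of (predicate, quantifier) pairs, outermost first."""
    expr = terminal
    for predicate, quantifier in reversed(chain):
        expr = f"({predicate} {quantifier} {expr})"   # wrap one restriction per step
    return expr

# Example 1's subject chain: beams "in" some buildings whose FIRESAFETY is "III".
expr = fold_chain([("in", "some"), ("FIRESAFETY", "only")], '{"III"}')
print(expr)
```
      </p>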
      <p>Constructing axioms. Finally, using all intermediate classes C obtained from Algorithm 1, we
construct class restrictions that accurately capture the semantics of regulation subjects and requirements.
This enables domain object classification and compliance checking with OWL reasoning. To achieve this,
for each semantic role s in T_SR, we identify the corresponding class restrictions C_s ⊂ C based on textual
token inclusion. The selected classes from C_s are then combined using the owl:intersectionOf relation
to form a single complex class c_s, which is declared equivalent (owl:equivalentTo) to the given
s. Finally, we utilize the To arrow between semantic roles in T_SR to establish an owl:subClassOf
relation between the class c_sub, which represents the subject of the regulation, and the class c_req, which
represents the requirement of the regulation. Algorithm 2 details this final step. Figure 4a and Figure 4b
demonstrate the resulting axioms, which represent the semantics of the regulations from Example 1
and Example 2, respectively.</p>
      <sec id="sec-4-1">
        <title>Algorithm 1 Constructing class restrictions</title>
        <p>Require: T_ST, R
C ← ∅
for r ∈ R do
    apply owl:complementOf or owl:unionOf for any Not or Or tags on r
    P ← incoming_predicates(r, T_ST)
    for p ∈ P do
        q ← restriction_tag(p) (Some, Only, Number, or the default)
        c ← restrict(p, q, r)
        C ← C ∪ {c}
    end for
    R ← R ∖ {r}
end for
if C contains ranges of further predicates then
    apply Algorithm 1 to T_ST, C
end if
return C</p>
      </sec>
      <sec id="sec-4-2">
        <title>Algorithm 2 Constructing axioms</title>
        <p>Require: C, T_SR
for s ∈ T_SR do
    C_s ← restrictions(C, s)
    c_s ← ⋂ C_s
    s ≡ c_s
end for
for to ∈ T_SR do
    (c_sub, c_req) ← roles(to)
    c_sub ⊆ c_req
end for</p>
        <p>As a result, given an annotated regulation, the proposed method, with certain limitations, generates
an ontology in machine-interpretable OWL DL code. In this ontology, domain terms are connected
to restricted classes representing the subject and requirement of the regulation. Finally, subjects
and requirements are connected using the owl:subClassOf relation. Consequently, the ontology
enables the classification of objects described by domain terms into regulation subjects and facilitates
trustworthy compliance checking through OWL reasoning.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Proof of Concept</title>
      <p>To validate the proposed approach, we developed a proof of concept that converts annotated regulations,
exported from INCEpTION, into OWL DL code using the described algorithm. We applied this prototype
to the regulations from Example 1 and Example 2 and validated the resulting OWL DL representations
through a compliance-checking scenario with modeled data.</p>
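      <p>The validation scenario described below can be sketched with the reasoner stubbed out. The prototype itself runs Pellet through Owlready2; here check_compliance is a hypothetical stand-in of ours, and the dictionary-based individuals are illustrative only.</p>
      <p>
```python
# Hedged sketch of the compliance-checking scenario with the reasoner
# replaced by a stand-in. Names and data shapes are illustrative, not
# the prototype's actual Owlready2/Pellet code.

def check_compliance(individual):
    # Stand-in for the reasoner: for Example 1, a beam in a building of
    # fire resistance degree III complies iff its limit is "R15".
    return individual.get("FIREPROTECTION") == "R15"

beam = {"type": "BEAM", "in": "building_III", "FIREPROTECTION": "R15"}
assert check_compliance(beam)        # compliant data: the check succeeds

beam["FIREPROTECTION"] = "R14"       # modify so the beam no longer complies
print(check_compliance(beam))        # the violation is now detected
```
      </p>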
      <p>Prototype. The prototype receives annotated regulations exported from INCEpTION in WebAnno
TSV v3.3 format and generates OWL DL code with machine-actionable regulations following the
proposed algorithm. The prototype is implemented in Python using the Owlready2 library [28].</p>
      <p>Data. To validate the OWL code generated based on Example 1, we create six “closed” individuals:
building (Listing 1), building with the degree of fire resistance III (Listing 3), beam (Listing 2), beam in
the building (Listing 4), beam in the building of the degree of fire resistance III (Listing 5), and beam in
the building of the degree of fire resistance III with fire resistance limit R15 (Listing 6). According to
the complexity levels defined in [29], the last one is classified as level L3, as evaluating this individual
involves three triples: (beam, FIREPROTECTION, R15), (beam, in, building_III) and (building, FIRESAFETY,
III).</p>
      <p>Individual: building
Types:
    BUILDING,
    in only owl:Nothing</p>
      <p>Listing 1: Building</p>
      <p>Individual: beam
Types:
    BEAM,
    in only owl:Nothing</p>
      <p>Listing 2: Beam</p>
      <p>(a) From Example 1 (b) From Example 2</p>
      <p>Individual: building_III
Types:
    BUILDING,
    in only owl:Nothing,
    FIRESAFETY only {"III"^^xsd:string}
Facts:
    FIRESAFETY "III"^^xsd:string</p>
      <p>Listing 3: Building of the degree of fire resistance III</p>
      <sec id="sec-5-1">
        <title>Listing 4: Beam in the building</title>
        <p>Individual: beam_in
Types:
    BEAM,
    in only ({building})
Facts:
    in building</p>
      </sec>
      <p>Individual: beam_in_III
Types:
    BEAM,
    in only ({building_III})
Facts:
    in building_III</p>
      <p>Listing 5: Beam in the building of the degree of fire resistance III</p>
      <sec id="sec-5-2">
        <title>Listing 6: Beam in the building of the degree of fire resistance III with fire resistance limit R15</title>
        <p>Individual: beam_in_III_R15
Types:
    BEAM,
    in only ({building_III})
Facts:
    in building_III,
    FIREPROTECTION "R15"^^xsd:string</p>
        <p>Validation script. To validate the generated OWL DL code, a user scenario was developed. It consists of the following steps:
1. Choose a regulation.
2. Generate an ontology from the regulation using the prototype.
3. Import into the ontology individuals representing domain objects that comply with the regulation.
4. Run reasoner.
5. Modify the individuals so that they do not comply with the regulation.</p>
        <p>6. Run reasoner.</p>
        <p>If the generated OWL DL code is correct, the reasoning in Step 4 should successfully classify the individuals as subjects of the regulation. After Step 6, however, the reasoner is expected to detect noncompliance, as the ontology becomes inconsistent. This scenario was implemented in Python using the Owlready2 library and the Pellet reasoner [30].</p>
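        <p>The pass/fail scenario above can be mimicked with plain Python stand-ins for steps 3–6; the dictionary kg, classify_and_check(), and the hard-coded subject condition below are illustrative assumptions, not the Owlready2/Pellet API used by the prototype.</p>

```python
# Toy stand-in for the validation scenario (steps 3-6). A real run uses
# Owlready2 and the Pellet reasoner; the dictionary `kg`,
# `classify_and_check()`, and the hard-coded subject test below are
# illustrative assumptions, not their API.

REQUIRED_LIMIT = "R15"  # requirement from Example 1

def classify_and_check(individuals):
    """Mimic the reasoner: collect regulation subjects, flag violations."""
    subjects, consistent = [], True
    for name, facts in individuals.items():
        # Subject condition: a beam located in a building of degree III.
        if facts.get("type") == "BEAM" and facts.get("in") == "building_III":
            subjects.append(name)
            # Requirement: the fire resistance limit must be R15.
            if facts.get("FIREPROTECTION", REQUIRED_LIMIT) != REQUIRED_LIMIT:
                consistent = False  # corresponds to ontology inconsistency
    return subjects, consistent

# Steps 3-4: a compliant individual is classified without errors.
kg = {"beam_in_III_R15": {"type": "BEAM", "in": "building_III",
                          "FIREPROTECTION": "R15"}}
print(classify_and_check(kg))  # (['beam_in_III_R15'], True)

# Steps 5-6: changing the limit to R14 makes the check fail.
kg["beam_in_III_R15"]["FIREPROTECTION"] = "R14"
print(classify_and_check(kg))  # (['beam_in_III_R15'], False)
```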
        <p>Results. We applied the described scenario to the knowledge graph obtained by importing the individuals listed above into the ontology generated from Example 1. The logs from the initial reasoner run are provided in Listing 7. As expected, the process completed successfully in 3.28 sec. Among other results, it correctly classified the instances beam, beam_in_III, and beam_in_III_R15 as subjects of the regulation. Finally, we modified the fire resistance limit of beam_in_III_R15 from “R15” to “R14” and reran the reasoner. As expected, it raised an error due to ontology inconsistency. Listing 8 provides Pellet’s explanation for this inconsistency. These results confirm the validity of the generated OWL DL code.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>* Owlready2 * Running Pellet...
* Owlready2 * Pellet took 3.277837038040161 seconds
* Owlready2 * Pellet output:
...
* Owlready * Reparenting reg.beam: {reg.BEAM} =&gt; {reg.Subject}
* Owlready * Reparenting reg.beam_in: {reg.BEAM} =&gt; {reg.Subject, reg.BEAM}
* Owlready * Reparenting reg.beam_in_III: {reg.BEAM} =&gt; {reg.Subject}
* Owlready * Reparenting reg.beam_in_III_R15: {reg.BEAM} =&gt; {reg.Subject}
...</p>
      <p>Listing 7: Successful reasoning</p>
      <p>Explanation for: owl:Thing SubClassOf owl:Nothing
1) EquivalentProperties: FIREPROTECTION, ’fire resistance limit’
2) ’beam_in_III_R15’ Type ’If the degree of fire resistance of a building is III the beams in it’
3) ’the fire resistance limit should be R15’ EquivalentTo ’fire resistance limit’ only {"R15"^^xsd:string}
4) ’If the degree of fire resistance of a building is III the beams in it’ SubClassOf ’the fire resistance limit should be R15’
5) beam_in_III FIREPROTECTION "R14"^^xsd:string</p>
      <p>Listing 8: Explanation of inconsistency</p>
      <p>Researchers have raised concerns that OWL, based on the Open World Assumption (OWA), may not effectively handle constraints for compliance checking. As a result, recent studies have predominantly used SHACL, which relies on the Closed World Assumption (CWA). In this section, we advocate for using OWL DL to model regulations.</p>
      <p>First, we compare OWL DL and SHACL in terms of their expressive power and computational
decidability by relating them to the common framework of description logics (DLs). Leinberger et al.
[31] map SHACL shapes to DLs, enabling shape containment to be addressed through DL reasoning. They
conclude that the corresponding description logic for SHACL shapes is ALCOIQ(∘), which is undecidable
for infinite models. A restricted fragment of SHACL, limiting path concatenation, corresponds to the
description logic SROIQ, which underpins OWL DL. Bogaerts et al. [32] explore the relationship between
SHACL and OWL, arguing that SHACL is, in fact, a description logic. However, they point out that
other tasks typically studied in DLs, such as consistency checking, are undecidable in SHACL, except
for finite model checking.</p>
      <p>Second, although OWL is based on the OWA, it is still expressive enough to support closed-world
reasoning. To achieve this, missing information and disjointness must be explicitly defined. For
instance, if an individual a has only one relation R(a, b) (in DL notation), this fact must be explicitly
declared in the ontology at the time of compliance checking as a ∈ ∀R.{b}. Alternatively, if a has
no R-relations at all, and the relation R exists in the ontology, this must be formulated as a ∈ ∀R.⊥.
Software tools like Owlready2 [28] provide methods to algorithmically apply such local closure to specific
individuals, classes, or even entire ontologies. Once closed, these ontologies can be processed by OWL
reasoners to perform constraint checking similarly to closed-world reasoning. Therefore, the OWA is
more relevant to data modeling than to regulation modeling, and the local closure does not require
additional effort from users.</p>
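      <p>The local-closure pattern above can be sketched as a small generator of Manchester-syntax closure axioms. This is an illustrative stand-in, not Owlready2’s built-in closing functionality; the helper name and the property list (including the hypothetical partOf) are assumptions.</p>

```python
# Sketch of local closure: generate Manchester-syntax closure axioms
# for one individual from its asserted object-property facts. The
# helper name and the property list (including the hypothetical
# `partOf`) are illustrative; Owlready2 ships its own closing methods.

def closure_axioms(individual, facts, properties):
    """facts maps each property to the list of asserted values."""
    axioms = []
    for prop in properties:
        values = facts.get(prop, [])
        if values:
            # Known successors only: a Type R only ({b1, b2, ...})
            joined = ", ".join(values)
            axioms.append(f"{individual} Type: {prop} only ({{{joined}}})")
        else:
            # No successors at all: a Type R only owl:Nothing
            axioms.append(f"{individual} Type: {prop} only owl:Nothing")
    return axioms

for axiom in closure_axioms("beam_in_III", {"in": ["building_III"]},
                            ["in", "partOf"]):
    print(axiom)
# beam_in_III Type: in only ({building_III})
# beam_in_III Type: partOf only owl:Nothing
```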
      <p>Finally, OWL offers better human-computer interaction. Ahmetaj et al. [33] emphasize that in SHACL,
validation reports provide limited information, mainly identifying the node and indicating constraint
violations. In contrast, OWL reasoners offer detailed explanations of classifications and inconsistencies,
allowing users to trace them back to their sources. Additionally, OWL DL is supported by the Manchester
syntax, which is more human-readable than the serialization formats available for SHACL.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>In this study, we addressed the challenge of converting normative regulations into
machine-interpretable OWL DL code to enable compliance checking using automated reasoning. To facilitate the
formalization, we proposed an annotation schema for regulation texts that includes three tag layers:
domain Terms, Semantic Types and Semantic Roles, and a number of arrows between the tags. Additionally,
we developed an algorithm to convert annotated regulations into OWL DL code. In the resulting OWL
DL representations, domain terms are connected to restricted OWL classes that represent regulation
subjects, enabling the automated identification of applicable regulations. Regulatory requirements are
similarly modeled as restricted OWL classes and are connected to their respective subjects using general
class inclusion axioms. As a result, if an object is classified as a regulation subject but fails to satisfy the
corresponding requirement, an inconsistency is detected during reasoning. To validate the proposed
method, we implemented a proof of concept and a validation script. The prototype was successfully
applied to examples from the building construction domain.</p>
      <p>Limitations. The proposed approach has several limitations. First, it relies on vocabulary mappings
to generate cardinality restrictions and restricted data types. While the set of words representing
natural numbers or comparisons is countable, any new ones or typos in regulatory texts require manual
processing. Second, not all relational restrictions or logical connectives are explicitly stated in natural
language text. Our approach assigns default values in such cases, but this can potentially lead to
inaccuracies in regulation modeling. For instance, the regulation in Example 1 may actually mean that the fire
resistance limit of the beams should be at least R15, rather than exactly R15. If this is the case, the automated
generation of the corresponding OWL code from the annotation becomes infeasible, as the comparison
is not explicitly mentioned in the text. However, with human intervention, this interpretation can still
be formalized in OWL DL as ’fire resistance limit’ only xsd:float[&gt;= 15.0f]. Finally,
no ontology is entirely comprehensive for any application domain. For instance, the regulation in
Example 2 applies to classrooms, yet the ifcOWL ontology lacks a dedicated class for classrooms. This
gap between domain data and regulations must be addressed to enable automated compliance checking.
Future work. In the future, we will focus on automating regulation annotation using machine
learning models. We believe that approaches relying on automatically generating code for ACC or
performing direct compliance checking with ML will never be trustworthy enough for full automation.
In contrast, our approach will leverage machine learning solely to suggest annotation tags, while the
semantic modeling of regulations remains under human control. This ensures that human autonomy
and decision-making are preserved, aligning with ethical principles for AI development and deployment,
such as the Artificial Intelligence Act and the EU Ethics Guidelines for Trustworthy AI.</p>
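      <p>The vocabulary-mapping limitation discussed above can be illustrated with a toy table of comparison words mapped to XSD facet restrictions; COMPARISONS, facet_restriction(), and the "R"-prefix parsing are hypothetical illustrations, not the paper's implementation.</p>

```python
# Toy version of the vocabulary mapping discussed above: comparison
# words mapped to XSD facet restrictions. The COMPARISONS table,
# facet_restriction(), and the "R"-prefix parsing are hypothetical
# illustrations, not the paper's implementation.

COMPARISONS = {"at least": ">=", "at most": "<=", "exactly": "="}

def facet_restriction(phrase, value):
    """Map ('at least', 'R15') to the facet 'xsd:float[>= 15.0f]'."""
    op = COMPARISONS.get(phrase)
    if op is None:
        # Unmapped comparison words require manual processing.
        raise ValueError(f"unmapped comparison word: {phrase!r}")
    number = float(value.lstrip("R"))  # 'R15' -> 15.0
    return f"xsd:float[{op} {number}f]"

print(facet_restriction("at least", "R15"))  # xsd:float[>= 15.0f]
```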
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>We would like to acknowledge the funding by the German Federal Ministry of Education and Research (BMBF)
for the project KISSKI AI Service Center (01IS22093C) and the Deutsche Forschungsgemeinschaft (DFG,
German Research Foundation) under Germany's Excellence Strategy – EXC 2163/1 – Sustainable and
Energy Efficient Aviation – Project-ID 390881007.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used ChatGPT for grammar and spelling checking. After using this tool, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the publication’s content.</p>
      <p>[19] A. T. Kovacs, A. Micsik, Bim quality control based on requirement linked data, International Journal of Architectural Computing 19 (2021) 431–448.
[20] S. Zentgraf, P. Hagedorn, M. König, Multi-requirements ontology engineering for automated processing of document-based building codes to linked building data properties, in: IOP Conference Series: Earth and Environmental Science, volume 1101, IOP Publishing, 2022, p. 092007.
[21] E. Nuyts, J. Werbrouck, R. Verstraeten, L. Deprez, Validation of building models against legislation using shacl, in: LDAC2023: Linked Data in Architecture and Construction Week, volume 3633, CEUR, 2023, pp. 164–175.
[22] P. Patlakas, I. Christovasilis, L. Riparbelli, F. K. Cheung, E. Vakaj, Semantic web-based automated compliance checking with integration of finite element analysis, Advanced Engineering Informatics 61 (2024) 102448.
[23] E. Hjelseth, N. N. Nisbet, Capturing normative constraints by use of the semantic mark-up rase methodology, in: Proceedings of CIB W78-W102 Conference, 2011, pp. 1–10.
[24] H. Hettiarachchi, A. Dridi, M. M. Gaber, P. Parsafard, N. Bocaneala, K. Breitenfelder, G. Costa, M. Hedblom, M. Juganaru-Mathieu, T. Mecharnia, et al., Code-accord: A corpus of building regulatory data for rule generation towards automatic compliance checking, Scientific Data 12 (2025) 170.
[25] C. M. Eastman, C. Eastman, P. Teicholz, R. Sacks, K. Liston, BIM handbook: A guide to building information modeling for owners, managers, designers, engineers and contractors, John Wiley &amp; Sons, 2011.
[26] P. Pauwels, W. Terkaj, Express to owl for construction industry: Towards a recommendable and usable ifcowl ontology, Automation in Construction 63 (2016) 100–133. doi:10.1016/j.autcon.2015.12.003.
[27] J.-C. Klie, M. Bugert, B. Boullosa, R. E. de Castilho, I. Gurevych, The inception platform: Machine-assisted and knowledge-oriented interactive annotation, in: Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, Association for Computational Linguistics, 2018, pp. 5–9.
[28] J.-B. Lamy, Owlready: Ontology-oriented programming in python with automatic classification and high level constructs for biomedical ontologies, Artificial Intelligence in Medicine 80 (2017) 11–28. doi:10.1016/j.artmed.2017.07.002.
[29] M. Bonduel, J. Oraskari, P. Pauwels, M. Vergauwen, R. Klein, The ifc to linked building data converter: current status, in: 6th International Workshop on Linked Data in Architecture and Construction, CEUR-WS.org, 2018, pp. 34–43.
[30] E. Sirin, B. Parsia, B. C. Grau, A. Kalyanpur, Y. Katz, Pellet: A practical owl-dl reasoner, Web Semant. 5 (2007) 51–53. doi:10.1016/j.websem.2007.03.004.
[31] M. Leinberger, P. Seifer, T. Rienstra, R. Lämmel, S. Staab, Deciding shacl shape containment through description logics reasoning, in: The Semantic Web–ISWC 2020: 19th International Semantic Web Conference, Athens, Greece, November 2–6, 2020, Proceedings, Part I 19, Springer, 2020, pp. 366–383.
[32] B. Bogaerts, M. Jakubowski, J. Van den Bussche, Shacl: A description logic in disguise, in: International Conference on Logic Programming and Nonmonotonic Reasoning, Springer, 2022, pp. 75–88.
[33] S. Ahmetaj, R. David, M. Ortiz, A. Polleres, B. Shehu, M. Simkus, Reasoning about explanations for non-validation in shacl, in: Description Logics, 2021.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. M.</given-names>
            <surname>El-Gohary</surname>
          </string-name>
          ,
          <article-title>Integrating semantic NLP and logic reasoning into a unified system for fully-automated code checking</article-title>
          ,
          <source>Autom. Constr</source>
          .
          <volume>73</volume>
          (
          <year>2017</year>
          )
          <fpage>45</fpage>
          -
          <lpage>57</lpage>
          . doi:10.1016/j.autcon.2016.08.027.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I.</given-names>
            <surname>Fitkau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hartmann</surname>
          </string-name>
          ,
          <article-title>An ontology-based approach of automatic compliance checking for structural fire safety requirements</article-title>
          ,
          <source>Advanced Engineering Informatics</source>
          <volume>59</volume>
          (
          <year>2024</year>
          )
          <fpage>102314</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>van Berlo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Costa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Klooster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Breitenfelder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lavikka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Paasiala</surname>
          </string-name>
          ,
          <article-title>Bim information reliability consequences for digital permit checking</article-title>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Nuyts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bonduel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Verstraeten</surname>
          </string-name>
          ,
          <article-title>Comparative analysis of approaches for automated compliance checking of construction data</article-title>
          ,
          <source>Advanced Engineering Informatics</source>
          <volume>60</volume>
          (
          <year>2024</year>
          )
          <fpage>102443</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Donkers</surname>
          </string-name>
          , E. Petrova,
          <article-title>Converting fire safety regulations to shacl shapes using natural language processing</article-title>
          ,
          <source>in: Proceedings of the 3rd NLP4KGC: Natural Language Processing for Knowledge Graph Construction co-located with the 20th International Conference on Semantic Systems (SEMANTiCS 2024), CEUR-WS.org</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Fuchs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dimyadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Witbrock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Amor</surname>
          </string-name>
          ,
          <article-title>Transformer-based semantic parsing of building regulations: Towards supporting regulators in drafting machine-readable rules</article-title>
          ,
          <source>in: Digital Building Permit Conference</source>
          <year>2024</year>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>An</surname>
          </string-name>
          ,
          <article-title>Automated building information modeling compliance check through a large language model combined with deep learning and ontology</article-title>
          ,
          <source>Buildings</source>
          <volume>14</volume>
          (
          <year>2024</year>
          )
          <fpage>1983</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hashmi</surname>
          </string-name>
          , G. Governatori,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wynn</surname>
          </string-name>
          ,
          <article-title>Normative requirements for regulatory compliance: An abstract formal framework</article-title>
          ,
          <source>Information Systems Frontiers</source>
          <volume>18</volume>
          (
          <year>2015</year>
          ).
          doi:10.1007/s10796-015-9558-1.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Athan</surname>
          </string-name>
          , G. Governatori,
          <string-name>
            <given-names>M.</given-names>
            <surname>Palmirani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Paschke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wyner</surname>
          </string-name>
          ,
          <source>LegalRuleML: Design Principles and Foundations</source>
          , Springer International Publishing, Cham,
          <year>2015</year>
          , pp.
          <fpage>151</fpage>
          -
          <lpage>188</lpage>
          . doi:10.1007/978-3-319-21768-0_6.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>F.</given-names>
            <surname>Gandon</surname>
          </string-name>
          , G. Governatori,
          <string-name>
            <given-names>S.</given-names>
            <surname>Villata</surname>
          </string-name>
          ,
          <article-title>Normative requirements as linked data</article-title>
          ,
          <source>in: JURIX 2017- The 30th international conference on Legal Knowledge and Information Systems</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          . doi:10.3233/978-1-61499-838-9-1.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>E.</given-names>
            <surname>Francesconi</surname>
          </string-name>
          ,
          <article-title>Semantic model for legal resources: Annotation and reasoning over normative provisions</article-title>
          ,
          <source>Semantic Web</source>
          <volume>7</volume>
          (
          <year>2016</year>
          )
          <fpage>255</fpage>
          -
          <lpage>265</lpage>
          . doi:10.3233/SW-140150.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>H.-P.</given-names>
            <surname>Lam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hashmi</surname>
          </string-name>
          ,
          <article-title>Enabling reasoning with legalruleml</article-title>
          ,
          <source>Theory and Practice of Logic Programming</source>
          <volume>19</volume>
          (
          <year>2018</year>
          )
          <fpage>1</fpage>
          -
          <lpage>26</lpage>
          . doi:10.1017/S1471068418000339.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Yurchyshyna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zarli</surname>
          </string-name>
          ,
          <article-title>An ontology-based approach for formalisation and semantic organisation of conformance requirements in construction</article-title>
          ,
          <source>Automation in Construction</source>
          <volume>18</volume>
          (
          <year>2009</year>
          )
          <fpage>1084</fpage>
          -
          <lpage>1098</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P.</given-names>
            <surname>Pauwels</surname>
          </string-name>
          ,
          <string-name>
            <surname>D. Van Deursen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Verstraeten</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. De Roo</surname>
          </string-name>
          , R. De Meyer, R. Van de Walle, J. Van Campenhout
          ,
          <article-title>A semantic rule checking environment for building performance checking</article-title>
          ,
          <source>Automation in construction 20</source>
          (
          <year>2011</year>
          )
          <fpage>506</fpage>
          -
          <lpage>518</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T. H.</given-names>
            <surname>Beach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rezgui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kasim</surname>
          </string-name>
          ,
          <article-title>A rule-based semantic approach for automated regulatory compliance in the construction sector</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>42</volume>
          (
          <year>2015</year>
          )
          <fpage>5219</fpage>
          -
          <lpage>5231</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fahad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Andrieux</surname>
          </string-name>
          ,
          <article-title>Towards mapping certification rules over bim</article-title>
          ,
          <source>in: CIB W78 Conference</source>
          , volume
          <volume>3</volume>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>L.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Roman</surname>
          </string-name>
          ,
          <article-title>From standards and regulations to executable rules: A case study in the building accessibility domain</article-title>
          , in: N. Bassiliades, A. Bikakis, S. Costantini, E. Franconi, A. Giurca, R. Kontchakov, T. Patkos, F. Sadri, W. V. Woensel (Eds.),
          <source>Proceedings of the Doctoral Consortium</source>
          , Challenge, Industry Track, Tutorials and Posters @ RuleML+
          <article-title>RR 2017 hosted by International Joint Conference on Rules and Reasoning 2017 (RuleML+RR</article-title>
          <year>2017</year>
          ), London, UK,
          <source>July 11-15</source>
          ,
          <year>2017</year>
          , volume
          <volume>1875</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Stolk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>McGlinn</surname>
          </string-name>
          ,
          <article-title>Validation of ifcowl datasets using shacl</article-title>
          ,
          <source>in: Proceedings of the 8th Linked Data in Architecture and Construction Workshop</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>91</fpage>
          -
          <lpage>104</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>