<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Explainability in enterprise architecture models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Elena Romanenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Free University of Bozen-Bolzano</institution>
          ,
          <addr-line>Universitätsplatz 1 - piazza Università, 1, Bozen-Bolzano, Italy - 39100</addr-line>
        </aff>
      </contrib-group>
      <fpage>61</fpage>
      <lpage>65</lpage>
      <abstract>
        <p>Providing explainability for enterprise architecture models is an important task. This paper argues that user-neutrality is a strong limitation of existing approaches. The work presents the initial stage of the research.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainability</kwd>
        <kwd>enterprise architecture models</kwd>
        <kwd>complexity management</kwd>
        <kwd>user modelling</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Explanations can play an important role in building trust in an information system’s decisions.
Thus, there is a need to understand how well a system’s decisions are grounded, especially
in cases when those decisions may significantly affect human lives, e.g., in the domains
of medicine or law [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        According to Arrieta et al., explainability should be considered an interface between
humans and the system that is comprehensible to humans [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The authors suggested
placing the audience at the centre when explaining the model and considering different categories
of users. It can be claimed that there is no such thing as a universal explanation; rather, there
is a need for an explanation personalised to a given user. One approach to personalising information
management is user profiling. According to [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], ontologies can be used for
modelling user context. Enterprise architecture modelling is a domain where reaching the right
level of information granularity is very important for several reasons. First, such models
can grow very fast, especially for large enterprises. Second, these models
are always used by different types of users, from stakeholders and managers to programmers.
One of The Open Group®1 standards is the ArchiMate® Specification2. The ArchiMate modelling
language for enterprise architectures makes it possible to describe, analyze, and visualize the relationships
among business domains in an unambiguous way. However, according to [3, p. 59], in ArchiMate,
“semantics was explicitly left out”. This situation has led to initiatives to use ontologies for
enterprise architecture analysis, e.g., [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Hence, the problem of providing explainability to
enterprise architecture models for different users can be reduced to personalised complexity
management in large models according to the user profile.
      </p>
      <p>The rest of the paper is organized as follows. Section 2 defines the notion of explainability.
We describe an enterprise architecture model and personalization in such models in Section 3.
The state of the art of complexity management in large models is reviewed in Section 4. Finally,
Section 5 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Defining Explainability</title>
      <p>
        In the last few years, the domain of Explainable Artificial Intelligence (XAI) has become a subject
of intense study. Large organizations have announced work in this direction. IBM proclaimed
explainability, together with fairness and robustness, one of three pillars for building trustworthy AI
pipelines3. Also, the US Defense Advanced Research Projects Agency (DARPA) assigned the
highest priority to its XAI project, planning to complete it in 2021 [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. However, most of the
research so far has been done in the field of explainable Machine Learning, with a focus on Deep
Neural Networks; see, e.g., [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        There is still some terminological variation in what is meant by explainability in an information
system. As mentioned previously, according to Arrieta et al., it is an interface between
humans and the system, while interpretability is defined as the ability to explain a phenomenon
in terms understandable to a human [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Erasmus et al. go even further and define interpretation
as “something one does to an explanation with the aim of producing another, more
understandable one” [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], i.e., interpretation is understood as an operator leading to another, preferable
explanation. The authors claimed that a complex explanation is an explanation no less, and that
the main issue is the user’s capacity to understand such an explanation. Another approach is
considered in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], where interpretability should lead to the explainability of the system, while the
latter is referred to as “the understanding the human user has achieved from the explanation”.
      </p>
      <p>
        Consequently, explainability strictly depends on the user and the user’s competencies; thus, the
user profile should be taken into account when providing explanations, and this idea is reflected
in the literature. Gunning and Aha considered the “user’s mental model” that directly affects
user comprehension [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Arrieta et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] suggested placing the audience at the centre when
explaining the model and distinguished the following categories of users: (1) domain experts,
users of the model; (2) regulatory entities/agencies; (3) managers and executive board members;
(4) data scientists, developers; (5) users affected by the model’s decisions.
      </p>
      <p>
        However, there is no common agreement on which groups of users should be selected, and
while some authors follow the suggested profiles, e.g., in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], others lay emphasis on other
groups (see [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]). Although these groups could be adapted to different information systems,
such an approach is still not very convenient, because (i) the groups are preselected and fixed,
(ii) users can have different competencies even within the same group, and (iii) some user
characteristics may change over time [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        Another approach to adaptivity and personalization support found in the literature
is ontology-based user profiling [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Instead of extracting user stereotypes, one can
develop a user profile ontology that is able to capture users’ characteristics. Hence, an
explainable information system should provide different users with explanations of the system’s
results matching the level of the user’s competencies and goals reflected in the user’s ontology.
3: https://developer.ibm.com/technologies/artificial-intelligence/articles/the-ai-360-toolkit-ai-models-explained/
      </p>
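As an illustration of the idea (a minimal sketch, not an implementation from the cited works), a user profile in the spirit of an ontology can be modelled as subject-predicate-object triples; all names, properties, and values below are invented for illustration:

```python
# Hypothetical user profile as RDF-style triples (plain Python stands in for OWL/RDF).
PROFILE = {
    ("alice", "rdf:type",          "ex:User"),
    ("alice", "ex:expertiseLevel", "developer"),            # long-term characteristic
    ("alice", "ex:areaOfInterest", "application layer"),    # long-term characteristic
    ("alice", "ex:currentGoal",    "debug rates service"),  # short-term characteristic
}

def query(triples, subject=None, predicate=None):
    """Return the objects of all triples matching the given subject/predicate."""
    return [o for (s, p, o) in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)]

print(query(PROFILE, "alice", "ex:currentGoal"))  # ['debug rates service']
```

An explanation component could then read both the stable properties and the current goal from the same profile when deciding what to show.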
    </sec>
    <sec id="sec-3">
      <title>3. Importance of Explainability in Enterprise Architecture Models</title>
      <p>Enterprise Architecture (EA) models are used by different categories of users and should provide
the right level of information granularity for all of them. According to Lankhorst et al., EA is
“a coherent whole of principles, methods, and models that are used in the design and
realisation of an enterprise’s organisational structure, business processes, information systems, and
infrastructure” [3, p. 3]. EA models are used by professionals with different backgrounds, goals,
and competencies, namely stakeholders, architects, developers, and sometimes also representatives
of regulatory agencies; hence, they should inherently reflect these views of the given enterprise,
including guiding managers in designing business processes and developers in building applications
according to business objectives and policies [ibid. p. 4].</p>
      <p>As previously mentioned, the ArchiMate language is an international standard for EA
modelling. Despite the fact that (i) the resulting architecture should be coherent by definition
and (ii) the number of elements in such models may grow fast for large organizations, the
language lacks formal semantics [3, p. 59], and complexity management in such models is
usually done manually by architects with so-called views over the model. These views are aimed
at explaining the content of the model according to different predefined roles of users.</p>
      <p>Imagine a company working on a flight-aggregation service. Assume that on the
motivation layer the enterprise architect formulated the following principle: “Search results should
contain only relevant information”. At the application layer this principle may lead to two
services: (1) a rates comparison service; (2) a travel conditions information service. Given two
programmers working separately on each of these services, e.g., following a microservices
development approach, who are interested in the information and decisions regarding one
service only, the architect would have to manually create two separate views. This can be done by
means of modern modelling tools and proper viewpoints, but the approach does not scale.</p>
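The scenario can be made concrete with a toy sketch: if each model element were tagged with the service(s) it concerns, the two programmer views could be derived automatically rather than drawn by hand. Element names and tags below are invented for illustration:

```python
# Hypothetical EA model fragment: (element, services it is relevant to).
MODEL = [
    ("relevance-principle", {"rates", "conditions"}),  # motivation layer
    ("rates-comparison",    {"rates"}),                # application layer
    ("travel-conditions",   {"conditions"}),           # application layer
]

def view_for(model, service):
    """Keep only the elements relevant to one service (shared principles included)."""
    return [name for name, services in model if service in services]

print(view_for(MODEL, "rates"))       # ['relevance-principle', 'rates-comparison']
print(view_for(MODEL, "conditions"))  # ['relevance-principle', 'travel-conditions']
```

Each programmer would then see the shared motivation element plus only their own service, which is exactly what the two manual views provide.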
      <p>
        Kang et al. claimed that the lack of semantics in EA models is a source of communication problems,
because EA components are defined in natural language and can be misunderstood [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Since
there is a need to support enterprise architects in the development process, there are several
initiatives for bringing formal semantics into EA models. Gampfer et al. even noted that “the
focus of EA research has shifted from understanding EA in the early years to managing EA
today” [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Some of these initiatives are based on providing (meta-)ontologies, e.g., [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], while
others attempt to verify the models with methods of formal logic, e.g., [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>4. Complexity Management Approaches</title>
      <p>To the best of our knowledge, no approach to complexity management
of EA models has yet been suggested that could provide viewpoints according to the user profile. However, EA models
can be considered a special class of conceptual models.</p>
      <p>
        For quite some time, both ontological complexity management and complexity management
in conceptual modelling have been areas of intensive research. Investigations mostly address
the following tasks:
1. modularization: the process of fragmenting a model into several parts;
2. view extraction: the process of computing a self-contained portion (closure) of a model
that results from a particular traversal of links starting at a central concept or concepts
defined by the user (see, e.g., [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]);
3. model abstraction or summarization: the process of producing a reduced version of the
original model by omitting details and concentrating on the semantic context [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
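A minimal sketch of the second task, traversal-based view extraction, under the simplifying assumption that the model is a directed graph of links: the closure is collected from user-chosen seed concepts up to a depth bound. All element names are illustrative:

```python
from collections import deque

# Toy model: directed links between elements (names invented for illustration).
LINKS = {
    "relevance-principle": ["rates-service", "conditions-service"],
    "rates-service":       ["rates-db"],
    "conditions-service":  ["conditions-api"],
}

def extract_view(links, seeds, max_depth=2):
    """Traversal-based view extraction: breadth-first closure from seed concepts."""
    view = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # depth bound reached; do not expand further
        for nxt in links.get(node, []):
            if nxt not in view:
                view.add(nxt)
                frontier.append((nxt, depth + 1))
    return view

print(sorted(extract_view(LINKS, ["rates-service"])))  # ['rates-db', 'rates-service']
```

The result depends only on the seeds and the depth bound, which illustrates the determinism discussed below: two users choosing the same seeds always get the same view.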
      <p>
        The techniques used to solve the above tasks can be roughly classified into
traversal-based and logic-based approaches. Some approaches from the first group, e.g., [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], assume
that the engineer has a deep understanding of the given ontology, while others are able to deal
with the task automatically [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], yet yield deterministic results with no
obvious way of adapting them to the user profile.
      </p>
      <p>
        Logic-based approaches are usually grounded in the notion of forgetting. According to
Yizheng Zhao [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], forgetting is a non-standard reasoning service that creates views by
eliminating concepts and roles from description logic-based ontologies while preserving all logical
consequences over the remaining symbols. Compared to traversal-based
approaches, these can be considered more ‘user-centric’, since the user decides what exactly
she wants to forget; however, such approaches also (i) require an understanding of the ontology
and (ii) are deterministic up to the given seeds.
      </p>
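To give a flavour of the idea, here is the propositional analogue of forgetting (description-logic forgetting as in [16] is far more involved): forgetting a symbol p amounts to disjoining the two instantiations of p, which preserves exactly the consequences expressible over the remaining symbols. The formula is a toy example:

```python
from itertools import product

def phi(p, q, r):
    # Toy constraints: (p or q) and (not p or r).
    return (p or q) and (not p or r)

def forget_p(f, n_rest):
    """Models over the remaining symbols of forget(f, p) = f[p=True] or f[p=False]."""
    return {vals for vals in product([False, True], repeat=n_rest)
            if f(True, *vals) or f(False, *vals)}

# After forgetting p, the surviving constraint over (q, r) is exactly "q or r":
# every assignment except q=False, r=False is a model.
print(sorted(forget_p(phi, 2)))
```

The same determinism noted for traversal-based approaches shows up here: the result is fixed once the symbols to forget are chosen, with no parameter reflecting the user's profile.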
      <p>In the above-mentioned example, our architect could be interested not only in the automatic
generation of two viewpoint-based models but also in adapting those models to the knowledge
level of each programmer, including providing additional information about the system if needed.</p>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusions</title>
      <p>
        According to Erasmus et al., increasing the complexity of a phenomenon does not make it
less explainable [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Given a complicated EA model, there is a need for automatic complexity
management, but the widely used visual language cannot provide it. A “better system” should
present results tailored to the characteristics of each user rather than being intent on some
“typical” person [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This research is at its initial stage; however, it is expected that the user
profile can be described with the help of an ontology, which should be able to reflect not only
long-term user characteristics, such as area of interest or level of expertise, but also relatively
short-term ones, e.g., the current goal.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This paper and the research behind it would not have been possible without the exceptional
support and valuable comments of my supervisors, Prof. Diego Calvanese and Prof. Giancarlo
Guizzardi.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] A. B. Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, F. Herrera, Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, Information Fusion 58 (2020) 82–115. doi:10.1016/j.inffus.2019.12.012.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] A. Katifori, M. Golemati, C. Vassilakis, G. Lepouras, C. Halatsis, Creating an ontology for the user profile: Method and applications, in: Proceedings of the First IEEE International Conference on Research Challenges in Information Science (RCIS), 2007, pp. 407–412.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] M. Lankhorst, Enterprise Architecture at Work: Modelling, Communication and Analysis, 3rd ed., 2013. doi:10.1007/978-3-642-29651-2.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] D. Kang, J. Lee, S. Choi, K. Kim, An ontology-based enterprise architecture, Expert Systems with Applications 37 (2010) 1456–1464. URL: https://www.sciencedirect.com/science/article/pii/S0957417409006368. doi:10.1016/j.eswa.2009.06.073.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] D. Gunning, D. Aha, DARPA's explainable artificial intelligence (XAI) program, AI Magazine 40 (2019) 44–58. URL: https://ojs.aaai.org/index.php/aimagazine/article/view/2850. doi:10.1609/aimag.v40i2.2850.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] G. Ras, M. van Gerven, P. Haselager, Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges, Springer International Publishing, 2018, pp. 19–36. doi:10.1007/978-3-319-98131-4_2.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] A. Erasmus, T. Brunet, E. Fisher, What is interpretability?, Philosophy &amp; Technology (2020). doi:10.1007/s13347-020-00435-2.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] A. Rosenfeld, A. Richardson, Explainability in human-agent systems, Autonomous Agents and Multi-Agent Systems 33 (2019) 673–705. doi:10.1007/s10458-019-09408-y.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] A. Heuillet, F. Couthouis, N. Díaz-Rodríguez, Explainability in deep reinforcement learning, Knowledge-Based Systems 214 (2021) 106685. URL: https://www.sciencedirect.com/science/article/pii/S0950705120308145. doi:10.1016/j.knosys.2020.106685.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] E. Rich, Users are individuals: individualizing user models, International Journal of Human-Computer Studies 51 (1999) 323–338. URL: https://www.sciencedirect.com/science/article/pii/S1071581981603122. doi:10.1006/ijhc.1981.0312.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] A. Sieg, B. Mobasher, R. Burke, Learning ontology-based user profiles: A semantic approach to personalized web search, IEEE Intelligent Informatics Bulletin 8 (2007) 7–18.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] F. Gampfer, A. Jürgens, M. Müller, R. Buchkremer, Past, current and future trends in enterprise architecture - a view beyond the horizon, Computers in Industry 100 (2018) 70–84. URL: https://www.sciencedirect.com/science/article/pii/S0166361517306723. doi:10.1016/j.compind.2018.03.006.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] P. Szwed, Evaluating Efficiency of ArchiMate Business Processes Verification with NuSMV, volume 243 of Lecture Notes in Business Information Processing, 2015, pp. 179–196. doi:10.1007/978-3-319-30528-8_11.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] N. F. Noy, M. A. Musen, Traversing Ontologies to Extract Views, 2009, pp. 245–260. doi:10.1007/978-3-642-01907-4_11.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] G. Guizzardi, G. Figueiredo, M. M. Hedblom, G. Poels, Ontology-based model abstraction, in: Proceedings of the 13th International Conference on Research Challenges in Information Science (RCIS), 2019, pp. 1–13. doi:10.1109/RCIS.2019.8876971.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] Y. Zhao, Automated Semantic Forgetting for Expressive Description Logics, Ph.D. thesis, University of Manchester, 2018.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>