<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards XMAS: eXplainability through Multi-Agent Systems?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Giovanni Ciatto</string-name>
          <email>giovanni.ciatto@unibo.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberta Calegari</string-name>
          <email>roberta.calegari@unibo.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Omicini</string-name>
          <email>andrea.omicini@unibo.it</email>
        </contrib>
        <aff>
          <institution>University of Bologna</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Applied Sciences and Arts Western Switzerland</institution>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <abstract>
        <p>In the context of the Internet of Things (IoT), intelligent systems (IS) are increasingly relying on Machine Learning (ML) techniques. Given the opaqueness of most ML techniques, however, humans have to rely on their intuition to fully understand the IS outcomes: helping them is the target of eXplainable Artificial Intelligence (XAI). Current solutions – mostly too specific, and simply aimed at making ML easier to interpret – cannot satisfy the needs of IoT, characterised by heterogeneous stimuli, devices, and data-types concurring in the composition of complex information structures. Moreover, Multi-Agent Systems (MAS) achievements and advancements are most often ignored, even when they could bring about key features like explainability and trustworthiness. Accordingly, in this paper we (i) elicit and discuss the most significant issues affecting modern IS, and (ii) devise the main elements and related interconnections paving the way towards reconciling interpretable and explainable IS using MAS.</p>
      </abstract>
      <kwd-group>
        <kwd>MAS</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>XAI explainability road map</title>
      <p>In the current decade, Internet of Things (IoT) systems, devices, and frameworks boomed, demanding contributions from both industry and academia. People's daily lives are getting entangled with uncountable cyber-physical devices capable of sensing, acting, and reasoning about the surrounding environment and whoever populates it. This leads to an intriguing set of socio-technical challenges for Artificial Intelligence (AI) researchers and practitioners. The complexity of IoT systems is increasing at a fast pace, as they employ underlying AI techniques such as Machine Learning (ML) in their core mechanisms to analyse, combine, and profile heterogeneous sets of data.</p>
      <p>Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>
        For instance, virtual assistants such as Alexa, Cortana, Siri, or Google Home [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] exploit ML to improve seamless vocal interaction and refined recommendation systems; Nest, the smart thermostat from Google, uses ML to learn from the user's habits. Moreover, such devices can interact with each other, thus increasing the input data-types and the overall complexity that the system has to deal with. The effect of such a deep and pervasive adoption of IoT within the productive and service fabrics of human societies is that AI and ML are going to control – or at least affect – an ever increasing number of aspects of people's lives.
      </p>
      <p>
        The benefits of such an AI-powered evolution are remarkable. Industries can now reach a novel degree of automation, whereas customers can enjoy a plethora of new services mediated by their devices, as data and services can now fuel unprecedented business opportunities and markets. However, such a transition is unlikely to occur without costs. Besides ethical and sociological issues, the current usage of AI is far from being socially acceptable. In particular, the recent hype on ML, Deep Learning (DL), and other numeric AI methods – commonly referred to as the "third AI spring" – has led to a situation where several decisions are delegated to subsymbolic predictors out of human control and understanding, as demonstrated by the many cases where they blatantly misbehaved [
        <xref ref-type="bibr" rid="ref11 ref15 ref9">11,15,9</xref>
        ].
      </p>
      <p>Furthermore, as broadly acknowledged by many research communities, we argue that the development process of current intelligent systems is flawed by the following issues:</p>
      <p>Lack of generalisation – Most tasks in AI require a very specific modelling, design, and development/training process. As a result, the integration, combination, and comparison of different – yet similar – AI methods is either impossible or achieved through highly human-intensive, ad hoc (thus neither scalable nor extendable) system design.</p>
      <p>Opaqueness – When numeric (data-driven) methods are exploited, predictions come without an understandable motivation – or, more generally, without a model. Unfortunately, in most applications, data scientists only care about predictive performance and generalisation capability. However, the adoption of opaque methods or predictors reduces the scope of application of intelligent systems, possibly due to either practical or legal constraints and, more concerning, to the lack of trust manifested by people and organisations.</p>
      <p>
        Lack of automation (in the development process) – Although AI is a tool ultimately adopted to seek automation, the training process of most numeric predictors is far from being automatic. The experience and background of the data scientist are still heavy discriminants for the overall predictive performance. Methodologies and guidelines do not ensure any success in the general case, and a lot of trial and error is typically unavoidable.
      </p>
      <p>
        Centralisation of both data and computation – It is often required or preferred, given the centralised nature of most algorithms or the legal constraints data is subject to. Centralisation poses severe technical limitations to the way data is managed, and to the design of computing systems.
      </p>
      <p>
        Both industry and academia tend to tackle such problems individually, without looking at the whole picture. As a result, most research activities focus on: (i) creating ad hoc integrations of AI sub-systems tailored to specific problems; (ii) developing techniques easing the interpretation of specific numeric predictors/predictions, exploiting results from the eXplainable AI (XAI) research area [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]; (iii) improving AI/ML performance on specific problems; (iv) setting up custom parallel or distributed implementations of specific AI methods, which may easily result in overly complicated solutions if legal constraints on data location have to be enacted. Accordingly, we believe that such trends actually slow down the identification of a general and comprehensive solution.
      </p>
      <p>
        In this paper we claim that Multi-Agent Systems (MAS) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] have the potential to provide a general – both conceptual and technical – framework to address most of the aforementioned issues. MAS are composed of several (possibly distributed) intelligent software (or cyber-physical) agents amenable to automatic reasoning and symbolic manipulation. We argue that such agents can be employed to: (i) dynamically provide interpretations and explanations for opaque systems; (ii) ease the integration among different solutions/components for similar tasks or predictors; (iii) increase the degree of automation characterising the development of intelligent systems; and (iv) support the provisioning of machine intelligence even in those (possibly distributed and decentralised) contexts where data cannot be moved due to technical or legal limitations. Moreover, MAS can be fruitfully combined with methods for symbolic knowledge extraction (out of numeric predictors) [
        <xref ref-type="bibr" rid="ref12 ref2">2,12</xref>
        ] and Distributed Ledger Technologies (DLT, a.k.a. blockchains [
        <xref ref-type="bibr" rid="ref10 ref4">4,10</xref>
        ]).
      </p>
      <p>
        Contribution. This paper drafts a long-term research line supporting our claim. In particular, we provide an i* [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ] formalisation of the foreseeable research activities and the dependencies among them.
      </p>
      <p>The remainder of this paper is organised as follows. Section 2 recalls the main research topics to be involved and combined in our research activity. Section 3 presents and discusses the aforementioned i* modelling. Finally, Section 4 provides some remarks and concludes the paper.</p>
      <sec id="sec-2-1">
        <title>State of the Art</title>
        <p>
          eXplainable Artificial Intelligence (XAI). Modern intelligent systems (IS) often leverage black-box predictive models trained through ML, DL, or other numeric approaches. The "black-box" expression refers to models where knowledge is not explicitly represented [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. As a consequence, humans have difficulties (or no way at all) understanding that knowledge, and why it led to suggest or undertake a given decision. Obviously, trouble in understanding the content and functioning of black boxes prevents people from fully trusting – therefore accepting – them. In contexts such as the medical or financial ones, having IS merely suggesting/taking decisions is not acceptable anymore, e.g., due to ethical and legal concerns. For example, current regulations such as the GDPR [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ], the ACM Statement on Algorithmic Transparency and Accountability [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], and the European Union's "Right to Explanation" [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] demand IS to become understandable in the near future. In fact, understanding IS is essential to guarantee algorithmic fairness, to identify potential bias or problems in the training data, and to ensure that IS perform as designed and expected. However, the notion of understandability is neither standardised nor systematically assessed yet. The recently-emerged XAI research field is targeting such issues – e.g., DARPA has proposed a comprehensive research road map [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. Research efforts in XAI focus on achieving key properties of AI such as interpretability, algorithmic transparency, explainability, accountability, and trustworthiness [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Unfortunately, such goals are still far from reach. One of the main reasons is the lack of a formal and agreed-upon definition of such concepts [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. Moreover, most works only target classification problems, and they rarely take wider properties – such as accountability and trustworthiness – into account [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ].
        </p>
        <p>
          Interpretability vs. Explainability. In the context of XAI, the terms "explainability" and "interpretability" are too often used carelessly [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. In particular, they are interchanged or just conveniently associated with in-house – misleading, often erroneous – definitions. Although they are closely related and both contribute to the ultimate goal of understandability, it is worth pointing out their differences in order to better comprehend our XMAS vision, where XMAS stands for eXplainability through Multi-Agent Systems.
        </p>
        <p>On the one hand, we borrow the definition of "interpretation" from logic, where the word essentially describes the operation of binding objects to their actual meaning in some context. Thus, as far as numeric models are concerned, the goal of interpretability is to convey to humans the meaning hidden in the data and in the mechanisms/decisions characterising the predictors.</p>
        <p>On the other hand, we define "explanation" as the act of making someone understand the information conveyed in a given discourse. It is worth highlighting that the act of explaining is an activity involving at least two interacting parties: one explaining (the explainer) and the other(s) willing to understand (the explainee).</p>
        <p>
          In the context of IS, the goal of explainability is to transfer given information (e.g., awareness of the reasons leading the system to act in a certain way) to the receiver (possibly a human) on a semantic level, aligning the States of Mind (SoM) [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] of the explainer and the explainee. The practice of explaining involves unveiling some background knowledge, or some latent information, that the explainee may not have "noticed" or explicitly required.
        </p>
        <p>Such a distinction between interpretability and explainability is crucial, since it shows how most XAI approaches proposed in the recent literature mostly focus on interpretability. Thus, while research on interpretable ML is widely recognised as important, a joint understanding of the concept of explainability still needs to evolve.</p>
        <p>Symbolic AI for Explainability. XMAS targets both the aspects characterising understandability. However, differently from other research lines branching from XAI, our vision puts a remarkable emphasis on explainability. In particular, in XMAS, we commit to symbolic AI as the main means for explainability.</p>
        <p>
          By "symbolic AI" we mean the branch of AI focusing on symbolic knowledge representation, automatic reasoning, constraint satisfaction, and logic programming [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]. Such areas are deeply entangled with results from computational logic [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], making their applications either inherently interpretable or easy to explain, given their lack of ambiguity and their underlying sound reasoning.
        </p>
        <p>The reasons behind this commitment are threefold. Firstly, we let XMAS build upon the wide gamut of results, methods, algorithms, and toolkits developed under the umbrella of symbolic AI. Secondly, as further discussed in the following sections, we believe the adoption of symbolic AI to be an enabling choice for the full exploitation of MAS. Finally, we argue that symbolic representations (e.g., the language of first-order logic formulas) may act as a lingua franca for knowledge representation and exchange among heterogeneous IS.</p>
        <p>
          In particular, the potential of logic-based models and their extensions is mainly due to their declarativeness and explicit knowledge representation – enabling knowledge sharing at an adequate level of abstraction – as well as their modularity and separation of concerns [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ], which are especially valuable in open and dynamic distributed systems.
        </p>
        <p>
          Symbolic knowledge extraction. The generality of symbolic approaches is also due to the many research works recently pointing out that the explanation capability of numeric predictors can be achieved via symbolic knowledge extraction (SKE) [
          <xref ref-type="bibr" rid="ref12 ref6">12,6</xref>
          ].
        </p>
        <p>SKE groups methods and techniques for extracting symbolic representations of the numeric knowledge buried in data and captured through ML during predictor training. Indeed, one of the main issues in symbolic AI is that human experts must often handcraft symbolic knowledge relying on their background and experience. This is not what happens in ML, where useful – yet hard to interpret – numeric knowledge is mined from data. Therefore, SKE can enable the exploitation of both symbolic and numeric AI without their respective shortcomings. In turn, XMAS aims at leveraging the symbolic knowledge extracted from ML-powered predictors as a basis for providing explanations of their predictions and functioning.</p>
        <p>
          Many SKE techniques have been proposed in the literature. Some of them focus on specific sorts of ML predictors, such as neural networks – and are therefore called "decompositional" – whereas others are more general and may virtually target any predictor – and are thus called "pedagogical". Several relevant contributions to the topic are outlined in surveys such as [
          <xref ref-type="bibr" rid="ref2 ref28">28,2</xref>
          ].
        </p>
        <p>
          Multi-agent systems (MAS). Multi-agent systems (MAS) represent an extensive research area placed at the intersection of computer science, AI, and psychology, studying systems composed of interactive, autonomous, and usually intelligent entities called agents [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
        </p>
        <p>The agent abstraction has been described in many ways. However, most definitions agree on the following traits. (i) Agents are entities operating in an environment, which they possibly perceive through sensors and affect through actuators. (ii) Agents are autonomous, in the sense that they have the capability of deciding on their own which actions are to be performed in order to achieve (or maintain) the goals they have been provided with, which in most cases are explicitly represented through some symbolic language. (iii) Agents are social, meaning that they can (and usually need to) interact (e.g., communicate, cooperate, and/or compete) with each other or with human users in their attempt to achieve/maintain their goals. (iv) Agents have a mind consisting of a belief base (BB) storing symbolic data representing the (possibly wrong, stale, or biased) information each agent believes to know about the environment or other agents. The content of a given agent's BB is affected by its perceptions and interactions, and it may influence its actions. The general notion of agent is so wide that both software entities and human beings may fit it. Such formal laxity is deliberate and useful. In fact, it allows human-machine and machine-machine interactions to be captured at the same level of abstraction and to be described through a coherent framework.</p>
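        <p>The four traits above can be made concrete with a toy sketch. The class below is purely illustrative (not the API of any real agent platform): an agent with an explicitly represented goal, a symbolic belief base updated by perception, autonomous action selection, and a minimal social capability.</p>

```python
# Minimal, hypothetical agent sketch: goal (ii's target), belief base (iv),
# perception (i), autonomous decision (ii), and sociality (iii).

class Agent:
    def __init__(self, goal):
        self.goal = goal          # explicitly-represented goal
        self.beliefs = set()      # belief base: symbolic facts (possibly wrong)

    def perceive(self, percepts):
        """Sensors update the belief base with facts about the environment."""
        self.beliefs |= set(percepts)

    def act(self):
        """Autonomy: the agent decides its own action from beliefs and goal."""
        if self.goal in self.beliefs:
            return "maintain"
        return f"pursue({self.goal})"

    def tell(self, other, fact):
        """Sociality: communicate a belief to another agent."""
        other.perceive([fact])

a, b = Agent(goal="room_warm"), Agent(goal="room_warm")
a.perceive(["temperature(18)"])
print(a.act())                     # goal not yet achieved, so it is pursued
a.tell(b, "temperature(18)")
print("temperature(18)" in b.beliefs)
```

        <p>Note how the same interface could, in principle, describe a human participant as well, which is exactly the formal laxity discussed above.</p>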
        <p>
          Dialogical argumentation. A central role in agent sociality is played by argumentation [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]: there, the emphasis is on the exchange of arguments and counterarguments between agents, commonly aimed at making them capable of reasoning and acting intelligently even in the presence of incomplete, inconsistent, biased, or partially wrong belief bases. Of course, the activity of argumentation involves a number of capabilities – ranging from argument mining or building, to argument exchange in multi-party dialogues, stepping through acceptability semantics – and as many research lines.
        </p>
        <p>
          Dialogical argumentation, in particular, is the activity performed by a number of agents dynamically discussing some topic they are concerned with from different perspectives, in an attempt to agree on some shared truth about that topic. Thus, dialogical argumentation accounts for how arguments and counterarguments are generated and evaluated, how the agents interact – i.e., what kinds of dialogical moves they can make – and how agents can retract arguments, update beliefs, etc. Usually, it is set against monological argumentation [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], where the goal is to provide algorithms for computing which arguments are winning in a given setting, and which conclusions can therefore be drawn.
        </p>
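        <p>The monological side lends itself to a compact sketch. Assuming the standard abstract-argumentation setting (a set of arguments plus an attack relation), the function below computes the grounded extension – the set of "winning" arguments – as the least fixed point of the defence operator; it is a minimal illustration, not a production argumentation engine.</p>

```python
# Hypothetical sketch: grounded semantics over an abstract argumentation
# framework, i.e. which arguments survive the attack relation.

def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    def defended(arg, inside):
        # arg is acceptable w.r.t. `inside` if every attacker of arg
        # is itself attacked by some argument in `inside`
        return all(any((d, a) in attacks for d in inside)
                   for (a, t) in attacks if t == arg)
    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

args = {"A", "B", "C"}          # A attacks B, B attacks C
atk = {("A", "B"), ("B", "C")}
print(sorted(grounded_extension(args, atk)))  # A is unattacked and defends C
```

        <p>Dialogical argumentation layers interaction protocols on top of such semantics: which agent may advance which argument, when, and how belief bases are revised as a result.</p>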
        <p>The i* modelling language</p>
        <p>Modelling a domain is a human-intensive, non-automated task. The domain of IoT-powered IS is characterised, by itself, by complex requirements, theories, and methods converging from several scientific fields. Also, the XMAS vision intertwines results from disparate research areas, which have historically been kept mostly disjoint.</p>
        <p>
          To generate a clear and structured understanding of our vision, we adopt the Goal-Oriented Requirement Engineering (GORE) approach [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ], and in particular we exploit i* as a modelling language [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. i* is a graphical language usually employed to model requirements for a single system. Nevertheless, it has been successfully employed to explore and map user needs and requirements for extensive application domains [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
        <p>Here, an i* model consists of a graph whose vertices are elements of four kinds: Goals (ranged over by Gi), Soft Goals (ranged over by SGj), Tasks (ranged over by Tk), and Resources (ranged over by Rl); edges (a.k.a. links) represent relations of various sorts among the aforementioned elements.</p>
        <p>Figure 1 depicts how elements and links are graphically represented. A more complete description of the i* graphical formalism can be found on the i* wiki web page (http://istar.rwth-aachen.de/tiki-index.php?page=iStarQuickGuide). Informally, goals are desired properties or objectives whose achievement can either be satisfied or not, in a discrete fashion. Conversely, soft goals are (not necessarily measurable) desirable properties or objectives which can be satisfied qualitatively, or up to some degree. Tasks represent activities to be performed as an attempt to satisfy (resp. positively affect) one or more goals (resp. soft goals). Resources represent entities to be produced or consumed by tasks, and whose availability may favour or hinder the satisfaction of goals.</p>
        <p>As far as links are concerned, soft goals are usually connected to each other via "contribution links", which specify their contribution to fulfilling the needs – e.g., positive (Some+) or negative (Some-); goals are connected via "means-end" arrows; tasks are connected via "decomposition" links. Other sorts of links mostly define generic "dependencies".</p>
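        <p>An i* model of this kind is, computationally, just a typed graph. The snippet below is an illustrative data-structure sketch (not a real i* toolkit): elements of the four kinds as vertices, and typed links – contribution, means-end, decomposition – as edges, over which simple queries can be run. The element names are hypothetical examples.</p>

```python
# Minimal sketch of an i* model as a typed graph (illustrative only).
from collections import namedtuple

Element = namedtuple("Element", "kind name")      # kind: goal/softgoal/task/resource
Link = namedtuple("Link", "source target type")   # type: Some+/Some-/means-end/...

model = {
    "elements": [
        Element("softgoal", "SG1: understandability"),
        Element("softgoal", "SG2: trustworthiness"),
        Element("task", "T4: symbolic knowledge extraction"),
    ],
    "links": [
        Link("SG1: understandability", "SG2: trustworthiness", "Some+"),
        Link("T4: symbolic knowledge extraction", "SG1: understandability", "Some+"),
    ],
}

# e.g. list which elements positively contribute to a given soft goal
target = "SG2: trustworthiness"
contributors = [l.source for l in model["links"]
                if l.target == target and l.type == "Some+"]
print(contributors)
```

        <p>Such a representation makes queries over the model – e.g., "what does this soft goal depend on?" – mechanical, which is the point of adopting a formal modelling language in the first place.</p>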
      </sec>
      <sec id="sec-2-2">
        <title>XMAS Vision: eXplainability through MAS</title>
        <p>Figure 2 graphically represents the i* modelling of the XMAS vision, and highlights its main aspects. The representation aims at providing an intuitive graphical assessment of the various elements, and of their interconnections, concurring in advancing the state of the art of XAI in IS. We argue that having an overall mapping of XMAS requirements and objectives, as well as of their mutual interdependencies, can also facilitate the assessment, presentation, design, and implementation of a coherent research activity aimed at supporting our vision.</p>
        <p>About the main soft goals. Our vision stems from the recognition that the success of IoT-powered IS is due to the general, increasing demand for analytical and predictive performance – corresponding to the SG0 soft goal in Figure 2 – transversally pervading the productive fabric of most developed societies. However, we also acknowledge several other desiderata – mostly targeting the issues highlighted in Section 1, and corresponding to as many soft goals in our i* model – which are required to some extent by modern AI, mostly due to the increasingly pervasive adoption of AI techniques.</p>
        <p>For instance:
– IS need to be understandable (SG1), meaning both interpretable and (above all) explainable, in the sense outlined in Section 2.
– Understandability, in turn, is only one of the aspects concurring in making modern cyber-physical systems perceived as trustworthy (SG2). Other aspects are important as well, like, e.g., having some degree of control over the behaviour of autonomous agents, and over the data and knowledge they are relying upon. In particular, when it comes to data and software, integrity and tampering-resistance are properties of paramount importance, strongly affecting the trustworthiness of IS.
– The IoT landscape also stresses the need to widen researchers' focus towards the "system of systems" dimension. The vertical specialisation (G1) of ML-based solutions w.r.t. specific tasks is not the only concern any longer. Indeed, the horizontal integration (SG3) of heterogeneous systems is important as well. There, we expect IS to acquire the capability of dynamically and autonomously integrating, complementing, and extending their respective knowledge, in a similar way to what human beings do when talking to each other.
– At the same time, the need for a higher degree of automation (SG4) in IS development is pressing, as the current bottleneck in the development of IS is due to the deep dependence of the process on human intuition.
– Finally, in spite of the many legal and ethical constraints affecting data and their usage, IoT-powered IS eventually need to overcome the current tendency towards data centralisation (SG5), as it imposes severe limitations over their effectiveness, efficiency, and adoption.</p>
        <p>The XMAS vision pursues the goal of tackling – or at least mitigating – all such issues, and, ultimately, of making intelligent systems more trustworthy.</p>
        <p>To do so, we analysed the current trends in this area and understood that the combination of numeric methods with more classical symbolic AI approaches, possibly mediated by MAS, may provide beneficial effects at several levels.</p>
        <p>On predictive capability. However, before describing how and why our proposal may provide an advantage, we need to briefly recall the most relevant aspects of IS development, as well as their mutual interdependencies.</p>
        <p>In their quest for higher predictive performance (SG0), IS designers simply apply numeric ML (T1) methods to the data (R1) they have collected through IoT devices. Such a process is far from being automatic, as it requires the experience and the trial-and-error work of well-prepared data scientists. It is aimed at the creation of predictors (R2), i.e., mathematical models capable of providing numerical predictions (G2) in situations analogous to the ones described by the data they have been trained with.</p>
        <p>Nevertheless, even after the deployment of the IS, its vertical specialisation (G1) on the specific problem described by the data remains an ever-lasting process, as new data keeps being produced/captured by the systems in production. Most researchers and data scientists prefer to focus on such sorts of tasks, as they are very valuable and numerically quantifiable in terms of predictive performance.</p>
        <p>On understandability. As far as the soft goal of understandability (SG1) is concerned, we argue that it can be tackled either by easing predictor/data interpretability (SG6) or by providing means for their explainability (SG7).</p>
        <p>When it comes to letting humans interpret the predictions provided by ML-powered predictors, the most common way to do so is to employ the most adequate technique for the problem at hand, possibly mediated by some analytical or visualisation toolkit (T2), and let the human intuition of experts (T3) do the magic. Thus, despite being very effective in specific cases, such an approach lacks generality and hinders automation.</p>
        <p>Conversely, when it comes to providing explanations to the users, the XMAS vision recognises the prominent role of interaction in letting knowledge be transferred from IS to humans (and possibly vice versa). In particular, we envision a scenario where intelligent agents exchange symbolic knowledge with humans through various channels, interfaces, and languages – i.e., we envision several possible means for Human-Computer Interaction (HCI, G3).</p>
        <p>As a first step in this direction, XMAS leverages symbolic knowledge extraction (SKE, T4) as a means for attaining logic-based, symbolic rules and facts (R3) out of numeric predictors (R2) and raw data (R1).</p>
        <p>The next step consists of employing such symbolic rules and facts as knowledge bases for cognitive, distributed agents (G4). More precisely, we state a one-to-one correspondence between numeric predictors and the agents to be deployed. Thus, we say that each predictor is wrapped by an agent.</p>
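        <p>The "predictor wrapped by an agent" idea can be sketched as follows. All class, method, and rule names here are hypothetical: the agent's belief base is seeded with the rules extracted from its own predictor via SKE, so that it can answer explanation queries symbolically and complement another agent's beliefs.</p>

```python
# Illustrative sketch: one agent per predictor, belief base = extracted rules.

class WrapperAgent:
    def __init__(self, name, extracted_rules):
        self.name = name
        # rules obtained via SKE, here as (premise, conclusion) pairs
        self.beliefs = list(extracted_rules)

    def explain(self, conclusion):
        """Return the symbolic premises supporting a conclusion, if any."""
        return [p for (p, c) in self.beliefs if c == conclusion]

    def share(self, other):
        """Horizontal integration: complement another agent's beliefs."""
        for rule in self.beliefs:
            if rule not in other.beliefs:
                other.beliefs.append(rule)

# rules as extracted from two differently-trained predictors (made-up examples)
a1 = WrapperAgent("a1", [("glucose > 125", "diabetic")])
a2 = WrapperAgent("a2", [("bmi > 35", "diabetic")])
a1.share(a2)
print(a2.explain("diabetic"))
```

        <p>Only the extracted rules travel between agents; the underlying predictors, and the data they were trained on, stay where they are, which is what makes the decentralisation discussed below plausible.</p>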
        <p>Such a wrapping is an enabling step in several directions. For instance, we
expect that by employing dialogical argumentation (T5), cognitive agents may
become able to compare and complement the knowledge they have extracted
from numeric predictors.</p>
        <p>The capability of knowledge revision is particularly interesting, especially if
one of the agents involved in the argumentation process is a human being.
Indeed, if adequately constrained, dialogical argumentation may act as a means for
providing interactive explanations (SG7) to the users, concerning the symbolic
knowledge wrapped by agents.</p>
        <p>In particular, such explanations can be even more effective if the interaction between users and software is mediated by some textual, vocal, or avatar-based user interface aimed at easing human-computer interaction (G3).</p>
        <p>On the benefits of argumentation. However, the adoption of extracted symbolic knowledge and dialogical argumentation is not merely aimed at supporting explanations.</p>
        <p>Instead, it may also positively affect what we call the horizontal integration (SG3) of heterogeneous IS trained on different – yet related – data. This, in turn, enables the integration and exploitation of different perspectives on the information carried by data, which implies that different points of view can be merged into more precise predictions, and that alternative predictive scenarios can be produced. Horizontal integration could thus make (more) valuable the many degrees of freedom and the inherent randomness characterising the processes of data retrieval, selection, engineering, partitioning, and analysis.</p>
        <p>At the same time, the agents' capability of mutually updating and correcting their belief bases (G5) may pave the way towards the development of IS where predictions can be attained without relying on the centralisation of data on a specific computational facility, nor on its transfer outside the organisational domain it belongs to. In other words, XMAS enables the decentralisation of knowledge and computation (SG5). Although data is usually subject to strict regulations limiting – among other things – its transfer, this is possible because aggregated – thus anonymous – data, such as the high-level rules extracted from data or predictors, is subject to less limiting regulations.</p>
        <p>Similarly, argumentation may be conceived as a means for supporting a higher
degree of automation (SG4) in the development of IS. In particular, protocols
could be defined letting new agents query other agents for symbolic knowledge
they do not have. By doing so, cognitive agents can learn predictive or
explanatory rules autonomously, without needing direct access to the data.</p>
        <p>On trustworthiness. If the XMAS vision is accomplished, the effect of a
handcrafted malicious (or buggy) agent, deliberately or mistakenly attempting
to inject wrong knowledge into an agent society, could be nefarious – and,
assuming an open and distributed society such as the IoT, this contingency
cannot be excluded. This is another critical issue preventing people from fully
trusting IS nowadays.</p>
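        <p>A minimal sketch of the knowledge-query protocol described above – agents asking peers for symbolic rules they lack, instead of accessing raw data – could look as follows; the Agent class and its methods are illustrative assumptions, not an API from the literature:</p>

```python
# Hypothetical sketch: an agent fills a gap in its belief base by asking
# peers for symbolic rules on a topic, never touching their raw data.

class Agent:
    def __init__(self, name, rules=None):
        self.name = name
        self.rules = dict(rules or {})  # topic -> list of symbolic rules

    def query_knowledge(self, topic):
        """Answer a peer's query with the rules held on a topic."""
        return list(self.rules.get(topic, []))

    def learn_from(self, peers, topic):
        """Query every peer and merge the answers into the belief base."""
        for peer in peers:
            for rule in peer.query_knowledge(topic):
                if rule not in self.rules.setdefault(topic, []):
                    self.rules[topic].append(rule)

novice = Agent("novice")
expert = Agent("expert", {"fever": ["fever(X) :- temperature(X, T), T > 38"]})
novice.learn_from([expert], "fever")
print(novice.rules["fever"])  # the rule learnt without any data access
```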
        <p>
          To mitigate such concerns, DLT (R4) could be exploited to prevent the
tampering of data or software (G6), or to keep track of agents' reputation –
assuming some reputation-enforcing protocol (T6) [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] is enacted by the agent
society.
        </p>
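        <p>The tamper-prevention role of DLT (G6) can be illustrated with a hash chain, the basic ingredient of any ledger: each reputation update is linked to the digest of the previous one, so rewriting history invalidates every later link. This is a didactic sketch under simplifying assumptions, not a real DLT nor the protocol of [8]:</p>

```python
import hashlib
import json

def _digest(entry, prev):
    # Deterministic hash of an entry together with the previous link.
    return hashlib.sha256(
        json.dumps({"entry": entry, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()

def append(chain, entry):
    """Append a reputation update, linking it to the previous record."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"entry": entry, "prev": prev, "hash": _digest(entry, prev)})

def verify(chain):
    """Recompute every link; any tampered record breaks the chain."""
    prev = "0" * 64
    for record in chain:
        if record["prev"] != prev or record["hash"] != _digest(record["entry"], prev):
            return False
        prev = record["hash"]
    return True

chain = []
append(chain, {"agent": "a1", "delta": 1})
append(chain, {"agent": "a1", "delta": -2})
assert verify(chain)
chain[0]["entry"]["delta"] = 10  # a malicious rewrite of history...
assert not verify(chain)         # ...is detected
```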
      </sec>
      <sec id="sec-2-3">
        <title>Conclusion</title>
        <p>In this paper we point out a number of issues affecting the modern IoT and, in general,
distributed IS whose intelligence leverages ML. In particular, focusing on the
data analytics layer of most IoT-based applications, we argue that a number
of issues are still far from being completely settled. For instance, we discuss
why most ML-powered IS lack transparency, automation (in the development
process), and decentralisation (of both data and computation).</p>
        <p>Elaborating on such open issues, we discuss a research line – called
eXplainability through Multi-Agent Systems (XMAS) – aimed at addressing them
altogether in a coherent and effective way. In the XMAS vision, we plan to integrate
a number of contributions from the symbolic AI, MAS, and XAI research areas.</p>
        <p>Accordingly, in this paper we provide an overview of the state of the art in
the aforementioned areas, shortly discuss their main achievements and limitations
in the XMAS perspective, and present a formal model of the XMAS vision using
the i* modelling language.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. ACM US Public Policy Council: Statement on algorithmic transparency and accountability (Jan 2017), https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Andrews, R., Diederich, J., Tickle, A.B.: Survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems 8(6), 373–389 (Dec 1995). https://doi.org/10.1016/0950-7051(96)81920-4</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. Anjomshoae, S., Najjar, A., Calvaresi, D., Främling, K.: Explainable agents and robots: Results from a systematic literature review. In: 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2019), pp. 1078–1088. International Foundation for Autonomous Agents and Multiagent Systems (2019), https://dl.acm.org/citation.cfm?id=3331806</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. Bashir, I.: Mastering Blockchain: Distributed ledger technology, decentralization, and smart contracts explained. Packt Publishing Ltd (2018)</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. Boyer, R.S., Moore, J.S.: A Computational Logic Handbook, Perspectives in Computing, vol. 23. Academic Press (1988)</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. Calegari, R., Ciatto, G., Dellaluce, J., Omicini, A.: Interpretable narrative explanation for ML predictors with LP: A case study for XAI. In: Bergenti, F., Monica, S. (eds.) WOA 2019 – 20th Workshop "From Objects to Agents", CEUR Workshop Proceedings, vol. 2404, pp. 105–112. Sun SITE Central Europe, RWTH Aachen University (26–28 Jun 2019), http://ceur-ws.org/Vol-2404/paper16.pdf</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Calvaresi, D., Claudi, A., Dragoni, A.F., Yu, E., Accattoli, D., Sernani, P.: A goal-oriented requirements engineering approach for the ambient assisted living domain. In: 7th International Conference on PErvasive Technologies Related to Assistive Environments (PETRA 2014) (2014). https://doi.org/10.1145/2674396.2674416</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. Calvaresi, D., Mattioli, V., Dubovitskaya, A., Dragoni, A.F., Schumacher, M.: Reputation management in multi-agent systems using permissioned blockchain technology. In: 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI 2018) (2018). https://doi.org/10.1109/WI.2018.000-5</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. Calvaresi, D., Mualla, Y., Najjar, A., Galland, S., Schumacher, M.: Explainable multi-agent systems through blockchain technology. In: Calvaresi, D., Najjar, A., Schumacher, M., Främling, K. (eds.) Explainable, Transparent Autonomous Agents and Multi-Agent Systems, pp. 41–58. Springer International Publishing, Cham (2019). https://doi.org/10.1007/978-3-030-30391-4_3</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. Ciatto, G., Bosello, M., Mariani, S., Omicini, A.: Comparative analysis of blockchain technologies under a coordination perspective. In: De La Prieta, F., Gonzalez-Briones, A., Pawleski, P., Calvaresi, D., Del Val, E., Lopes, F., Julian, V., Osaba, E., Sanchez-Iborra, R. (eds.) Highlights of Practical Applications of Survivable Agents and Multi-Agent Systems. The PAAMS Collection, Communications in Computer and Information Science, vol. 1047, chap. 7, pp. 80–91. Springer (Jun 2019). https://doi.org/10.1007/978-3-030-24299-2_7</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. Crawford, K.: Artificial intelligence's white guy problem. The New York Times (2016), http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>12. d'Avila Garcez, A.S., Broda, K., Gabbay, D.M.: Symbolic knowledge extraction from trained neural networks: A sound approach. Artificial Intelligence 125(1-2), 155–207 (Jan 2001). https://doi.org/10.1016/S0004-3702(00)00077-1</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>13. Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77(2), 321–357 (1995). https://doi.org/10.1016/0004-3702(94)00041-X</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>14. Ferber, J.: Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence. Addison-Wesley (1999)</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>15. Fourcade, M., Healy, K.: Categories all the way down. Historical Social Research / Historische Sozialforschung 42(1), 286–296 (2017), http://www.jstor.org/stable/44176033</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>16. Goodman, B., Flaxman, S.: European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine 38(3), 50–57 (2017). https://doi.org/10.1609/aimag.v38i3.2741</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>17. Guidotti, R., Monreale, A., Turini, F., Pedreschi, D., Giannotti, F.: A survey of methods for explaining black box models. ACM Computing Surveys (CSUR) 51(5) (Jan 2019). https://doi.org/10.1145/3236009</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>18. Gunning, D.: Explainable artificial intelligence (XAI). Funding Program DARPA-BAA-16-53, Defense Advanced Research Projects Agency (DARPA) (2016), http://www.darpa.mil/program/explainable-artificial-intelligence</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>19. Kepuska, V., Bohouta, G.: Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home). In: 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC 2018), pp. 99–103. IEEE (2018). https://doi.org/10.1109/CCWC.2018.8301638</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>20. Kontarinis, D.: Debate in a multi-agent system: multiparty argumentation protocols. Ph.D. thesis, Université René Descartes – Paris V (Nov 2014), https://tel.archives-ouvertes.fr/tel-01345797</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>21. Lipton, Z.C.: The mythos of model interpretability. ACM Queue 16(3) (May–Jun 2018). https://doi.org/10.1145/3236386.3241340</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>22. Oliya, M., Pung, H.K.: Towards incremental reasoning for context aware systems. In: Advances in Computing and Communications, Communications in Computer and Information Science, vol. 190, pp. 232–241. Springer (2011). https://doi.org/10.1007/978-3-642-22709-7_24</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>23. Premack, D., Woodruff, G.: Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences 1(4), 515–526 (1978). https://doi.org/10.1017/S0140525X00076512</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>24. Sun, R.: Artificial intelligence: Connectionist and symbolic approaches. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.9688 (Dec 1999)</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>25. Van Lamsweerde, A.: Goal-oriented requirements enginering: a roundtrip from research to practice [enginering read engineering]. In: 12th IEEE International Requirements Engineering Conference (ICRE 2004), pp. 4–7. IEEE (2004). https://doi.org/10.1109/ICRE.2004.1335648</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>26. Voigt, P., von dem Bussche, A.: The EU General Data Protection Regulation (GDPR). A Practical Guide. Springer (2017). https://doi.org/10.1007/978-3-319-57959-7</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>27. Yu, E.S.K.: Modelling Strategic Relationships for Process Reengineering. Ph.D. thesis, University of Toronto, Toronto, Ontario, Canada (2014)</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>28. Zilke, J.R., Loza Mencía, E., Janssen, F.: DeepRED – rule extraction from deep neural networks. In: Calders, T., Ceci, M., Malerba, D. (eds.) Discovery Science (DS 2016), Lecture Notes in Computer Science, vol. 9956, pp. 457–473. Springer (2016). https://doi.org/10.1007/978-3-319-46307-0_29</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>