<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Knowledge Graph for Explainable Cyber Physical Systems: A Case study in Smart Energy Grids</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Peb</forename><forename type="middle">Ruswono</forename><surname>Aryan</surname></persName>
							<email>peb.aryan@tuwien.ac.at</email>
							<affiliation key="aff0">
								<orgName type="institution">Vienna University of Technology</orgName>
								<address>
									<settlement>Vienna</settlement>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Knowledge Graph for Explainable Cyber Physical Systems: A Case study in Smart Energy Grids</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">2020A4504808481F289504EBDCB09841</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T14:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>knowledge graph</term>
					<term>explainability</term>
					<term>cyber-physical systems</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The rapid development of computing technology and automation widens the scope of the tasks delegated to cyber-physical systems (CPS) such as smart grids or smart buildings. Explainability, i.e., the ability to provide explanations about system states or behaviors, becomes one of the requirements for future cyber-physical systems as more complex computer-made decisions affect our daily lives. Work on explainability in CPS is scarce despite recent attention to the explainability of algorithms in artificial intelligence. This doctorate research aims to comprehensively understand the scope of explainability in CPS, identify the critical components of an explainable CPS, and develop methods and metrics to evaluate them. Specifically, our main research question is how and to what extent Knowledge Graphs can be applied in enabling the explainability of CPS. Using the design science approach, we attempt to answer these questions in a set of iterations, starting with a simulation-based approach and constructing a baseline system, followed by more focused studies and more realistic settings using data from real-world CPS. The selected application domain in this work is industrial energy systems such as smart grids and smart buildings. The expected outcome of this work is a theoretical foundation and methods for developing an explainable CPS applicable in various domains.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Problem Statement</head><p>Recently, there has been concern that future CPS, which span both the physical and cyber worlds, are challenged to explain their behavior to users, engineers, and other stakeholders <ref type="bibr" target="#b6">[7]</ref>. Rapid technological development in the digital aspect of CPS, such as communication, control, and computation, drives the increasing scale and complexity of CPS. For example, low-power wireless communication allows the proliferation of objects connected through a more extensive network. Advances in machine learning allow data processing algorithms to become adaptive and capable of solving complex real-world tasks. As these complexities gain more influence on systems that impact our day-to-day life, the necessity of having explanations of system behavior is emerging.</p><p>Explainability is the ability of a (software) system to provide explanations about its states or behaviors in terms of a set of facts <ref type="bibr" target="#b13">[14]</ref>. Explanations foster understanding of the thing being explained (explanandum) by linking it with existing knowledge on the receiving stakeholder's side <ref type="bibr" target="#b10">[11]</ref>. Recent studies about explainability are primarily oriented towards artificial intelligence (AI)-based methods that function as black boxes, meaning that their decisions are not transparent to end users <ref type="bibr" target="#b2">[3]</ref>. Explainability is also an emerging issue in more complex systems, such as cyber-physical systems. However, limited research has been performed so far on understanding the theoretical foundations of explainability in CPS as well as on exploring suitable solution paradigms for this problem.</p><p>Exploring various risk-related scenarios in the real system, i.e., in vivo, is undesirable. 
Having an in vitro platform and a reusable framework allows a more rapid development process and avoids the unnecessary cost of trial and error when developing an explainable CPS.</p><p>As an illustrative example in the energy domain, smart electricity grids evolve from static to dynamically changing networks of large numbers of devices, e.g., photo-voltaic units (PV) and electric vehicle charging stations (EVCS). The slow charging of an EVCS is an event that requires an explanation for several stakeholders, including the EVCS owner, customer service representatives, field engineers, and grid planners. An explanation could be that overcast weather leads to lower than usual energy production through PVs in the region, which leads to a lack of supply in the grid segment and to a control intervention by the grid operator to reduce charging power. From the consumer's perspective, the change in energy consumption should be seen as independent of how it is produced. The energy production is then expected to run uninterrupted and provide sufficient supply even when there is an increase in consumption. A swift response is desired if an unwanted event, such as a failure to fulfill the expected service or a blackout, is unavoidable, since the loss caused by the fault grows with time. On the one hand, technical operation employees expect detailed explanations in order to be able to decide the next course of action to remedy a potential fault/anomaly. On the other hand, for the possibly larger population of affected consumers, an ideal explanation would be more succinct and related to their context, such as the service contract.</p><p>In order to generate perspicuous explanations for the intended stakeholders, the explanatory system needs to integrate information from various sources, such as the structure of the system, the relationships between elements of the system, and the history of the system's state. 
Additionally, understanding the recipient of an explanation is essential so that the explanation can be tailored to be easy to understand.</p></div>
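The causal chain in the EVCS example (overcast weather, reduced PV production, supply shortage, control intervention, slow charging) can be sketched as a backward traversal over a cause graph. This is a minimal illustration only; the event names and graph structure below are invented placeholders, not taken from the actual system.

```python
# Hypothetical cause graph for the slow-charging scenario; each event
# maps to a list of candidate direct causes (names are illustrative).
CAUSES = {
    "slow_evcs_charging": ["charging_power_reduction"],
    "charging_power_reduction": ["grid_supply_shortage"],
    "grid_supply_shortage": ["low_pv_production"],
    "low_pv_production": ["overcast_weather"],
}

def explain(event, causes=CAUSES):
    """Walk the causal chain backwards from an observed event."""
    chain = [event]
    while chain[-1] in causes:
        # take the first known cause; a real system would rank candidates
        chain.append(causes[chain[-1]][0])
    return " because ".join(e.replace("_", " ") for e in chain)

print(explain("slow_evcs_charging"))
```

A succinct consumer-facing explanation could then keep only the first and last links of the chain, while a technician-facing one would keep every step.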
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related work</head><p>Artificial intelligence, mainly the area of expert and knowledge-based systems, has extensively studied the task of providing explanations based on formalized logical reasoning <ref type="bibr" target="#b13">[14]</ref>. Recently, the necessity to provide explainable reasoning has been reignited by rapid progress in practical applications of machine learning, in particular deep architectures of artificial neural networks. Explainability has become a hot topic in the AI community following concerns about the ethical implications of applying machine learning solutions trained on biased data <ref type="bibr" target="#b2">[3]</ref>. Explainable AI techniques developed to explain complex machine learning models to users suggest user orientation as one of the critical aspects of explainability <ref type="bibr" target="#b9">[10]</ref>.</p><p>The interpretation of explainability from the perspective of industrial systems is even more pragmatic. Related topics such as anomaly detection and subsequent root-cause analysis are essential in industrial (cyber-physical) systems and are currently addressed with methods such as FMEA (Failure Mode and Effect Analysis) <ref type="bibr" target="#b3">[4]</ref> and FTA (Fault Tree Analysis) <ref type="bibr" target="#b5">[6]</ref>. These methods require the specification of possible anomalies and their causes by various experts who know (parts of) the system and are typically hampered by the ambiguity and inconsistency of the collectively collected knowledge. The inconsistent terminology also hampers deriving meaningful explanations as a follow-up step to identifying a root cause for a given defect. 
Because of its specification in natural language, FMEA knowledge is difficult to reuse, incomplete, and likely inconsistent (as there is no formal way to check consistency) <ref type="bibr" target="#b4">[5]</ref>.</p><p>CPS, particularly the smart grid, is a relatively new and evolving field that combines different disciplines such as physics, statistics, and socio-economics. Studies of explainability in CPS are scarce, especially for specific topics such as knowledge-graph-based approaches. One of the closest approaches is a fault diagnosis system for smart buildings that combines a physical process model and a data-driven approach <ref type="bibr" target="#b12">[13]</ref>. This work encodes causality knowledge from experts into a knowledge graph and applies SPARQL update rules to infer potential causes of a given event. Given the multi-disciplinary nature of CPS, different communities use different representations of causality knowledge for solving different tasks. For example, the distributed systems and cloud computing community tries to automate causality mining from time-series data using correlation <ref type="bibr" target="#b14">[15]</ref>. In another community, i.e., energy and power systems, an ensemble of statistical causal models and deep neural networks <ref type="bibr" target="#b15">[16]</ref> is used to build models for short-term forecasting.</p><p>In summary, explainability encompasses, on one end, a human who needs an explanation and, on the other end, causality knowledge that is not explicitly known from the system's description. Existing literature has addressed these issues only partially, and different communities focus on different aspects. Only by collecting the various puzzle pieces can we see the big picture and establish a solid foundation for explainable CPS.</p></div>
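The rule-based inference over a knowledge graph mentioned above, which [13] expresses as SPARQL update rules, can be sketched in plain Python as one forward-chaining pass over a triple set. The predicate and entity names below are hypothetical illustrations, not those of the cited work.

```python
# Toy knowledge base as a set of (subject, predicate, object) triples;
# "feeds" models energy flow, "hasEvent" attaches observed events.
triples = {
    ("pv1", "feeds", "segment_a"),
    ("segment_a", "feeds", "evcs1"),
    ("pv1", "hasEvent", "low_production"),
}

def infer_potential_causes(kb):
    """Rule sketch: ?x feeds ?y AND ?x hasEvent ?e
    => ?e potentialCauseOfAnomalyAt ?y (one forward-chaining pass)."""
    inferred = set()
    for (x, p1, y) in kb:
        if p1 != "feeds":
            continue
        for (s, p2, e) in kb:
            if p2 == "hasEvent" and s == x:
                inferred.add((e, "potentialCauseOfAnomalyAt", y))
    return inferred

print(infer_potential_causes(triples))
```

In a SPARQL-based setting, the same rule body would be an `INSERT { ... } WHERE { ... }` update run against the triple store; iterating the pass to a fixpoint would propagate causes along longer "feeds" chains.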
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Research Questions</head><p>The literature suggests that there is a limited understanding of what constitutes an explainable cyber-physical system. One line of research focuses only on the user aspect of explanations, while another struggles with ad-hoc or partial solutions. Therefore, the main question for this research is: RQ0 What are the main theoretical, methodological, and engineering foundations that enable effective and efficient implementation of explainable CPS?</p><p>This question is the starting point for more specific questions: What are the core components needed to achieve explainability? To what extent can knowledge graphs help build an explainable CPS? What are the requirements for making an explainable system applicable to various domains?</p><p>The literature also indicates that causality knowledge and user-oriented explanation generation algorithms are critical aspects of an explainable CPS. Thus, this research also aims to address the following questions:</p><p>RQ1 What is an effective semantic representation to integrate different representations of causality knowledge? Different sources of causality knowledge may attach different meanings to the weight of a causal relationship. Some may involve the coefficients of a differential equation, while others might use probabilistic quantities to express subjective belief or derived quality metrics.</p><p>RQ2 How can causality knowledge be acquired efficiently from data and domain experts? How can we support domain experts in expressing causal relationships based on domain knowledge and data? One approach to express causality captured from domain experts used in <ref type="bibr" target="#b1">[2]</ref> is SPARQL. How can we aid domain experts in expressing their knowledge without having to learn SPARQL first? Can we acquire causality knowledge by analyzing temporal data using time-series analysis or machine learning? 
RQ3 What are effective and efficient algorithms for generating and ranking (alternative) explanations? What are the criteria to decide that an explanation is plausible? Given multiple competing hypotheses, what metrics can be applied to compare and rank explanation alternatives? RQ4 How can explanations be effectively presented to system end-users? What are the cognitive aspects of a user that are important in determining whether an explanation is understandable? What metrics can measure the comprehensibility of an explanation output for a selected user model?</p><p>The questions above correspond to the core functionalities of explanation generation: causality knowledge acquisition and exploitation. Figure <ref type="figure">1</ref> shows the architecture of the explainable CPS built on the smart grid simulation platform <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. Other aspects of the interface to end-users, such as visualization or the generation of explanations in natural language, will not be addressed in this research. These aspects will become more apparent once the core components and realistic use-case scenarios have been developed.</p></div>
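The correlation-based causality mining from time-series data raised in RQ2 can be sketched with a simple lagged Pearson correlation: if series a, shifted forward by a lag, correlates strongly with series b, a may drive b. This is a simplified illustration under invented data, not the method of the works cited above.

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

def lagged_correlation(a, b, lag):
    """Correlate a[t] with b[t+lag]; a high score hints that a may drive b."""
    return pearson(a[:-lag], b[lag:])

# Invented series: charging power follows PV production one step later.
pv = [5, 4, 3, 2, 1, 2, 3, 4]
charge = [9, 5, 4, 3, 2, 1, 2, 3]
print(lagged_correlation(pv, charge, 1))  # close to 1.0 for these data
```

A real pipeline would additionally test statistical significance and multiple lags, since raw correlation alone cannot establish causal direction.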
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Research Plan and Preliminary results</head><p>This study adopts a classical Design Science methodology <ref type="bibr" target="#b7">[8]</ref>: (i) the rigor cycle is ensured by grounding the methods for answering all research questions in a thorough understanding of the relevant literature and by disseminating intermediate results in the scientific community; (ii) deriving requirements from concrete application contexts using simulation and living lab data from ongoing research projects, and creating PoCs to address these requirements, constitute the relevance cycle; (iii) method development, testing, and subsequent revision constitute the design cycle.</p><p>This study has been conducted for a year. The following 2-3 years will focus on answering each research question. Specifically, the second year will be allocated to the representation (RQ1) and acquisition (RQ2) of causality knowledge. The analytics (RQ3) and presentation (RQ4) aspects of explanations will be addressed in the third year.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Preliminary results</head><p>The current state of the PhD has resulted in an understanding of explainability and explainable CPS. In the first iteration, the work is oriented towards understanding explainability in energy systems. The need for plausible, feasible scenarios and for data acquisition drives the focus on using a simulation platform (i.e., BIFROST <ref type="bibr" target="#b11">[12]</ref>, see Fig. <ref type="figure">2</ref>). Additionally, the general idea of an explanation generation algorithm was developed and evaluated using synthetic data from a scenario related to electric car charging <ref type="bibr" target="#b0">[1]</ref>.</p><p>The following iteration builds upon the previous idea with more concrete artifacts. One of the results is the architecture shown in Figure <ref type="figure">1</ref>. This architecture was then implemented as a prototype application, demonstrating that the explanations from the scenario can be derived based on simulated data and captured knowledge. The solution design and implementation were then published in the energy community, proposing a solution based on semantic web technologies <ref type="bibr" target="#b1">[2]</ref>. To this end, an ontology<ref type="foot" target="#foot_1">1</ref> for modeling the data and knowledge described in the architecture has been developed.</p><p>Figure <ref type="figure">2</ref> displays the prototype of an explainable CPS built as part of the BIFROST smart grid simulation engine.</p><p>Behind the user-facing interface is the engine that integrates data coming from the simulation into knowledge graphs in the triple store, deducing causal relations, detecting events, and deriving explanations for the detected events. The explanation is then displayed as shown on the right side of the figure. 
From this first iteration, we better understand what aspects are needed to build an explainable CPS.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Expected results and their evaluation</head><p>The goal of an explainable CPS is to provide explanations of events for a variety of scenarios. Close collaboration with domain experts is necessary to achieve plausible scenarios that can serve as a basis for further evaluation. The developed scenarios are then implemented in a simulation to generate data. Furthermore, actual measurements will be used to ensure the validity of the simulation data. User studies and empirical analysis are performed to evaluate the research questions. The following describes the outcome and outputs for each research question.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>RQ1</head><p>The outcome for RQ1 is the incorporation of different causality representations to generate an explanation. A vocabulary for the different meanings (e.g., relationship weight) of causality will be designed to augment the basic model of causality. Additionally, an algorithm to fuse these different semantics of causality will be developed as part of the explanation generation algorithm.</p><p>RQ2 Answering RQ2 will involve user studies and the implementation of algorithms to derive causality knowledge from simulated data. Methods to acquire causality knowledge will be developed and evaluated using user studies and empirical analysis.</p><p>RQ3 A set of metrics and ranking algorithms will be developed, and an explanation generation task will be executed based on a prepared scenario. A group of domain experts will be asked to manually create explanations as the gold standard for measuring the algorithm's performance. The evaluation of the algorithm will use metrics such as MRR or Precision@10 as a basis, modified to accommodate comparison of the graph structures of explanations.</p><p>RQ4 A qualitative study will be conducted to acquire key characteristics of explanation targets, i.e., user profiles. A set of user profiles will be defined, and the explanation generation task will be executed using the scenarios. Another user study will assess whether the customized generated explanations are relevant to the intended explanation recipients. To this end, the System Causability Scale (SCS) <ref type="bibr" target="#b8">[9]</ref> will be used as one of the metrics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Discussion and Future work</head><p>The previous section described an end-to-end prototype of how an explainable CPS should work after the first year of the doctoral study. This initial work helps to identify the issues formulated in the research questions for the next iteration. 
The collection of artifacts from investigating each research question forms a solution framework for building an explainable CPS. Some topics possibly related to explainable CPS are intentionally not addressed, considering the limited scope of this research, e.g., scalable storage techniques to handle large-scale CPS data. Other topics depend on the mentioned research questions, such as the study of specific presentation forms of explanations (e.g., visualization) or exploratory search systems to explore explanation hypotheses and the history of events. Further research on these topics will enrich the framework for building an explainable CPS and enable explainability in various systems.</p></div>
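The RQ3 evaluation against expert-authored gold explanations can be sketched with the two base metrics named above, MRR and Precision@k, before any graph-structure extension. The candidate rankings and gold sets below are invented placeholders.

```python
def mrr(ranked_lists, gold_sets):
    """Mean reciprocal rank of the first gold item in each ranking."""
    total = 0.0
    for ranking, gold in zip(ranked_lists, gold_sets):
        for i, item in enumerate(ranking, start=1):
            if item in gold:
                total += 1.0 / i
                break
    return total / len(ranked_lists)

def precision_at_k(ranking, gold, k=10):
    """Fraction of the top-k ranked explanations that are gold."""
    return sum(1 for item in ranking[:k] if item in gold) / k

# Two queries: ranked candidate explanations vs. expert gold standard.
rankings = [["e3", "e1", "e2"], ["e5", "e4"]]
gold = [{"e1"}, {"e5"}]
print(mrr(rankings, gold))                        # (1/2 + 1/1) / 2 = 0.75
print(precision_at_k(rankings[0], gold[0], k=2))  # 1/2 = 0.5
```

Adapting these to graph-structured explanations would replace the exact-match test `item in gold` with a graph similarity measure, which is exactly the modification the RQ3 plan proposes.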
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Acknowledgement</head><p>This work is funded through a grant project titled "Power System Cognification" (PoSyCo) by the Austrian Research Promotion Agency (FFG) with Siemens AG. I am thankful to my PhD supervisory team, Marta Sabou, Fajar Juang Ekaputra, and Tomasz Miksa, for their professional advice and personal support.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="6,152.06,115.84,311.24,168.84" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">Proceedings of the Doctoral Consortium at ISWC 2021 -ISWC-DC 2021</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_1">https://pebbie.org/expcps/</note>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Supported by Siemens AG and Austrian Research Promotion Agency (FFG) in the PoSyCo project (FFG No. 3036508).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Simulation support for explainable cyber-physical energy systems</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">R</forename><surname>Aryan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">J</forename><surname>Ekaputra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sabou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mosshammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Einhalt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miksa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rauber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">8th Workshop on Modeling and Simulation for Cyber-Physical Energy Systems (MSCPES2020)</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Explainable cyber-physical energy systems based on knowledge graph</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">R</forename><surname>Aryan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">J</forename><surname>Ekaputra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sabou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Hauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mosshammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Einhalt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miksa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rauber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">9th Workshop on Modeling and Simulation for Cyber-Physical Energy Systems (MSCPES2021)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI</title>
		<author>
			<persName><forename type="first">A</forename><surname>Barredo Arrieta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Díaz-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Del Ser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bennetot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tabik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barbado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gil-Lopez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Molina</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.inffus.2019.12.012</idno>
		<ptr target="https://doi.org/10.1016/j.inffus.2019.12.012" />
	</analytic>
	<monogr>
		<title level="j">Information Fusion</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="82" to="115" />
			<date type="published" when="2020-06">Jun 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Ben-Daya</surname></persName>
		</author>
		<title level="m">Failure Mode and Effect Analysis</title>
				<meeting><address><addrLine>London</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="75" to="90" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Performing FMEA Using Ontologies</title>
		<author>
			<persName><forename type="first">L</forename><surname>Dittmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">T</forename><surname>Rademacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zelewski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">18th International Workshop on Qualitative Reasoning</title>
				<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="209" to="216" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Fault Tree Analysis Primer</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">A</forename><surname>Ericsson</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Explainable Software for Cyber-Physical Systems (ES4CPS)</title>
		<author>
			<persName><forename type="first">J</forename><surname>Greenyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lochau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Vogel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Report from the GI Dagstuhl Seminar 19023</title>
				<meeting><address><addrLine>Schloss Dagstuhl</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">January 06-11, 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Design science in information systems research</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Hevner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">T</forename><surname>March</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ram</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">MIS quarterly</title>
		<imprint>
			<biblScope unit="page" from="75" to="105" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Measuring the quality of explanations: the system causability scale (scs)</title>
		<author>
			<persName><forename type="first">A</forename><surname>Holzinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Carrington</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">KI-Künstliche Intelligenz</title>
		<imprint>
			<biblScope unit="page" from="1" to="6" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Explain to whom? putting the user in the center of explainable ai</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kirsch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017 co-located with 16th International Conference of the Italian Association for Artificial Intelligence (AI* IA</title>
				<meeting>the First International Workshop on Comprehensibility and Explanation in AI and ML 2017 co-located with 16th International Conference of the Italian Association for Artificial Intelligence (AI* IA</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">The structure and function of explanations</title>
		<author>
			<persName><forename type="first">T</forename><surname>Lombrozo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Trends in cognitive sciences</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="464" to="470" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">BIFROST: A Smart City Planning and Simulation Tool</title>
		<author>
			<persName><forename type="first">R</forename><surname>Mosshammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Diwold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Einfalt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schwarz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zehrfeldt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Intelligent Human Systems Integration</title>
				<editor>
			<persName><forename type="first">W</forename><surname>Karwowski</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Ahram</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="217" to="222" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Adapting Semantic Sensor Networks for Smart Building Diagnosis</title>
		<author>
			<persName><forename type="first">J</forename><surname>Ploennigs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lécué</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-11915-1_20</idno>
		<ptr target="https://doi.org/10.1007/978-3-319-11915-1_20" />
	</analytic>
	<monogr>
		<title level="m">13th International Semantic Web Conference (ISWC)</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="308" to="323" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Asking &apos;Why&apos; in AI: Explainability of intelligent systems - perspectives and challenges</title>
		<author>
			<persName><forename type="first">A</forename><surname>Preece</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Intelligent Systems in Accounting, Finance and Management</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="63" to="72" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A causality mining and knowledge graph based method of root cause diagnosis for performance anomaly in cloud applications</title>
		<author>
			<persName><forename type="first">J</forename><surname>Qiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">L</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Qian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page">2166</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Multi-network vulnerability causal model for infrastructure co-resilience</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">M K</forename><surname>Sriram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Ulak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">E</forename><surname>Ozguven</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Arghandeh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="35344" to="35358" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
