<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">ConceptSuperimposition: Using Conceptual Modeling Method for Explainable AI</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Wolfgang</forename><surname>Maass</surname></persName>
							<email>wolfgang.maass@iss.uni-saarland.de</email>
							<affiliation key="aff0">
								<orgName type="institution">Saarland University</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">German Research Center for Artificial Intelligence (DFKI)</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Arturo</forename><surname>Castellanos</surname></persName>
							<email>arturo.castellanosbueso@mason.wm.edu</email>
							<affiliation key="aff2">
								<orgName type="institution">William &amp; Mary</orgName>
								<address>
									<settlement>Williamsburg</settlement>
									<region>VA</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Monica</forename><forename type="middle">Chiarini</forename><surname>Tremblay</surname></persName>
							<email>monica.tremblay@mason.wm.edu</email>
							<affiliation key="aff2">
								<orgName type="institution">William &amp; Mary</orgName>
								<address>
									<settlement>Williamsburg</settlement>
									<region>VA</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roman</forename><surname>Lukyanenko</surname></persName>
							<email>roman.lukyanenko@hec.ca</email>
							<affiliation key="aff3">
								<orgName type="institution">HEC Montréal</orgName>
								<address>
									<settlement>Montreal</settlement>
									<region>Québec</region>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Veda</forename><forename type="middle">C</forename><surname>Storey</surname></persName>
							<email>vstorey@bellsouth.net</email>
							<affiliation key="aff4">
								<orgName type="institution">Georgia State University</orgName>
								<address>
									<settlement>Atlanta</settlement>
									<region>GA</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">ConceptSuperimposition: Using Conceptual Modeling Method for Explainable AI</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">7D4C5D231DF089E9E94687C9E6098DA3</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T09:14+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Artificial Intelligence</term>
					<term>Machine Learning</term>
					<term>Conceptual Modeling</term>
					<term>Model Performance</term>
					<term>Mental Models</term>
					<term>Framework for Mental Models and Conceptual Models for Machine Learning</term>
					<term>ConceptSuperimposition</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Many artificial intelligence (AI) applications involve the use of machine learning, which continues to evolve and to address increasingly complex tasks. At the same time, conceptual modeling is often applied to such real-world tasks so they can be abstracted at the right level of detail to capture and represent the requirements for the development of a useful information system to support an application. In this research, we develop a framework for progressing from human mental models of an application to machine learning models via the use of conceptual models. Based on the framework, we develop a novel ConceptSuperimposition method for increasing the explainability of machine learning models. We illustrate the method by applying machine learning to publicly available data from the Home Mortgage Disclosure Act database, which contains the 2020 mortgage application data collected in the United States. The machine learning task is to predict whether a mortgage is approved. The results show how the explainability of machine learning applications can be improved by including domain knowledge in the form of a conceptual model that represents a mental model, instead of relying solely on algorithms. Preliminary results show that including such knowledge can help address the explainability problem.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Machine learning consists of methods that use data and algorithms to build models that make inferences about an application from provided examples <ref type="bibr" target="#b0">[1]</ref>. Both the opportunities and limitations of machine learning are rooted in its reliance on building models from data and, therefore, on the quality of the data used to train and test the models <ref type="bibr" target="#b1">[2]</ref>. As our society's dependence on machine learning grows, it is important to ensure that machine learning models perform well and are interpretable and transparent. This tradeoff is often exacerbated by opaque transformations of the input data (i.e., feature engineering), which make it challenging to assess the effect of the input data on the outcome <ref type="bibr" target="#b2">[3]</ref>. Numerous challenges persist, including biases, discrimination, lower performance, lack of transparency, and limited explainability. While popular approaches have emerged for explaining the predictions of classifiers (layer-wise relevance propagation, Local Interpretable Model-Agnostic Explanations, and many others) <ref type="bibr" target="#b3">[4]</ref>, they have been criticized because their explanations depend on the choice of hyperparameters and can differ for similar instances in the data. The objective of this research is to investigate how to improve machine learning explainability by incorporating domain knowledge of the application. The contribution is a framework for progressing from human mental models to machine learning models through the use of conceptual models. Based on the framework, we develop a ConceptSuperimposition method for increasing the explainability of machine learning models. We illustrate the application of this method in the home mortgage domain.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Machine Learning and Conceptual Modeling</head><p>Supervised learning guides the learner in acquiring knowledge in a domain through examples, so that new cases can be handled in the manner most appropriate given the knowledge learned from similar cases. Modern supervised machine learning has been taking advantage of the availability of data and developing methods and techniques (e.g., deep learning neural networks, reinforcement learning) that rely on large volumes of high-quality data for performance improvement. The increasing use of complex machine learning models has brought about challenges in explaining the decision logic of these models. Transparency in AI is both a growing societal concern and a growing research area <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>. A generally overlooked approach to explainability, however, is to incorporate domain knowledge that a user or designer might possess. This knowledge would manifest itself in mental models, which contribute to the development of conceptual models that support the interpretation of machine learning models and outcomes. Conceptual modeling formally describes "some aspects of the physical and social world around us for the purposes of understanding and communication" <ref type="bibr">[6, p. 2]</ref>. Humans use conceptualizations of domains to represent specific or abstract situations. Recent research has proposed combining conceptual modeling with artificial intelligence or, specifically, machine learning <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13]</ref>. 
The main argument is that doing so can provide reliable rules about the domain without depending on extracting them from the data. Despite these efforts, conceptual models are rarely used in the process of machine learning. At the same time, machine learning invariably relies on human mental models: representations of reality in the minds of data scientists or users of machine learning models, who develop or interpret machine learning solutions in light of their mental models. Unlike conceptual models, mental models are not explicit and hence may contain biases.</p><p>Research in psychology and other disciplines (e.g., philosophy, cognitive science) dealing with decision-making has argued that, when making decisions, humans construct one or more mental models of a domain <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>. A mental model is a representation of a possible state of affairs in reality <ref type="bibr" target="#b14">[15]</ref>. Mental models are the central mechanism for coping with diversity and change, allowing someone to act in an informed and effective manner. In a typical machine learning scenario, we may encounter many different customers purchasing goods or services from a store; we can reduce this complexity by constructing a handful of mental models that segment customers (e.g., loyal repeat customer, high-volume customer, one-time buyer). Mental models are important for problem-solving, including in ML contexts, but are also prone to bias and error. If left unchecked, the biases arising from the suboptimal formation and use of mental models could affect the judgment of machine learning developers and result in a variety of machine learning problems. Hence, we seek to augment mental models with conceptual models so that the representations are more formal, externalized, and better equipped to mitigate the cognitive biases inherent in mental models. If, in reality, mental models are inconsistent, or even contradictory, with the machine learning model or the data, it is possible that: (1) subjects have selected inadequate conceptual statements for a situation, (2) perception and measurement of reality are distorted, or (3) conceptual statements are flawed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper, we therefore propose improving the explainability of complex machine learning models by externalizing mental models via conceptual models. Mental models <ref type="bibr" target="#b13">[14]</ref> can represent components, states and structural relations, basic operational rules, general scientific principles, events and processes, and ontological knowledge. They represent individual knowledge, extracted from either physical or abstract domains, that is used for various cognitive tasks, including understanding, communication, navigation, and decision making. Mental models are often understood as surrogate models used for simulation and prediction, as well as a means for constructing more advanced mental models <ref type="bibr" target="#b15">[16]</ref>. Conceptual models are shared representations expressed in various forms, such as texts and graphics. According to model theory, conceptual models are projected and abbreviated representations made by human experts and used for a purpose <ref type="bibr" target="#b16">[17]</ref>. In contrast, machine learning models are statistical abstractions derived by algorithmically fitting mathematical functions to data according to a purpose given by an objective function. The data used for model fitting, and also for the construction of conceptual models, are themselves representations of domains. In essence, mental models are individually constructed, conceptual models are socially constructed, and machine learning models are algorithmically constructed. Designing information systems means that all three model types must be synchronized and balanced by negotiation to find a satisfiable equilibrium between them (see Figure <ref type="figure" target="#fig_0">1</ref>). 
A gap exists between mental models and conceptual models on the one hand and machine learning models on the other, because the internal representations of machine learning models are generally inaccessible. Thus, research on the interpretability and explainability of machine learning models is needed as a means of aligning all three model types.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">ConceptSuperimposition Method</head><p>Following the Framework for Mental Models and Conceptual Models for Machine Learning (cf. Figure <ref type="figure" target="#fig_0">1</ref>), we advance a new method, ConceptSuperimposition. It extends our early work on the Superimposition method. Superimposition adds structural semantic information to the outputs of machine learning to support explainability <ref type="bibr" target="#b9">[10]</ref>. While this information is absent in current ML practice, it is routinely employed by humans to understand their day-to-day experiences. As per our framework, a machine learning model is a model of some domain (e.g., credit card fraud, image classification, online auctions). The machine learning model is a set of rules for estimating a value of interest or discriminating among the cases of interest based on previously provided domain examples.</p><p>Superimposition maps the output of machine learning models (i.e., the features, rules, and transformation functions) onto a conceptual model of the domain. The method suggests annotating the conceptual model with information about the rules of the machine learning model. This step depends on the type of machine learning model. For example, if a regression model is used, these rules can be represented as feature weights or feature coefficients. These coefficients can be appended to the attributes in the conceptual model, or the attributes can be highlighted differently to indicate the relative importance of each attribute. Features are conceived and defined based on domain knowledge. For instance, the feature 𝑝𝑟𝑜𝑝𝑒𝑟𝑡𝑦_𝑣𝑎𝑙𝑢𝑒 is necessary for evaluating a loan application. A conceptual model clusters features into concepts (e.g., classes, entity types). The relations (e.g., association, type of, part of) provide connections between the concepts. 
Generally, concepts that are directly connected via a relation are semantically closer to each other than indirectly connected concepts.</p><p>The final step of the Superimposition method involves analyzing the resulting conceptual model to gain a clearer understanding of the underlying rules a machine learning model uses to make its decisions, and to identify opportunities to improve the machine learning model further.</p><p>A key limitation of the early version of the Superimposition method is its lack of representation of concepts and their relationships. The method merely attached feature weights to the conceptual model. Yet understanding the impact of the concepts (e.g., entity types) on the target is also of critical importance. Furthermore, the early formulation of Superimposition lacked formalization. We address both shortcomings in the formalized and holistic ConceptSuperimposition method, which assesses the alignment between conceptual models and machine learning models. The method consists of three steps: </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Marginal contribution</head><p>Machine learning (ML) models are globally fitted to datasets. Often, ML models are black boxes without direct access to the contributions of features to outcomes. Therefore, simpler (surrogate) models are locally fitted ex post to ML models. Surrogate models provide information on the local contribution of features to outcomes (e.g., LIME or SHAP <ref type="bibr" target="#b17">[18]</ref>). SHAP (Shapley Additive Explanation) values are Shapley values of a conditional expectation function of the machine learning model, i.e., the fitted model is used for determining the local contribution of single features to an outcome.</p><p>Shapley values formalize coalition games and determine the additive marginal contributions of single players to the overall payoff of the coalition of players. They are defined by an operator 𝜑 that assigns to each game 𝑣 a vector of payoffs 𝜑(𝑣) = (𝜑 1 , ..., 𝜑 𝑛 ), where 𝜑 𝑖 (𝑣) is player 𝑖's marginal and additive contribution to the outcome of a game over all permutations with all other players <ref type="bibr" target="#b18">[19]</ref>. Shapley values are locally accurate (i.e., they match the original model 𝑓 (𝑥)), are not affected by missing values, and are consistent with respect to the inequality relation between two models 𝑓 (𝑥) and 𝑓 ′ (𝑥) <ref type="bibr" target="#b17">[18]</ref>. The Shapley value of a feature 𝑖 is a weighted mean of its marginal value, averaged over all possible subsets of features:</p><formula xml:id="formula_0">\varphi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|! \, (|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]</formula><p>where 𝐹 is the set of all features, 𝑓 𝑆∪{𝑖} is a model trained with feature 𝑖, and 𝑓 𝑆 (𝑥 𝑆 ) one trained without it. SHAP values determine the additive contributions of input features 𝑋 on the prediction of an output feature 𝑦, i.e., 𝑦 = 𝑓 (𝑋), agnostic to the machine learning model used, with 𝑓 the original model and Φ the vector of all Shapley values 𝜑 𝑖 over all independent features. Input feature 𝑥 ∈ 𝑋 is transformed into a simplified vector 𝑧 ∈ 𝑍 with 𝑧 ∈ {0, 1}, i.e., SHAP value 𝜑 𝑖 𝑧 𝑖 is zero if feature 𝑧 𝑖 is missing, and otherwise 𝜑 𝑖 carries the whole marginal contribution of the feature to an outcome.</p></div>
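As a concrete illustration of the Shapley formula above, the following pure-Python sketch computes exact Shapley values by enumerating all feature subsets. The toy linear value function and weights are hypothetical; for real ML models, libraries such as SHAP approximate these values, since exact enumeration is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, n_features):
    """Exact Shapley values by enumerating all feature subsets S of F\{i}.

    f(S, x) evaluates the model restricted to feature subset S; this
    brute-force computation is only feasible for small n_features."""
    F = list(range(n_features))
    phis = []
    for i in F:
        others = [j for j in F if j != i]
        phi = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # Weight |S|! (|F| - |S| - 1)! / |F|! from the formula.
                weight = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                          / factorial(n_features))
                phi += weight * (f(set(S) | {i}, x) - f(set(S), x))
        phis.append(phi)
    return phis

# Hypothetical additive value function: sum of w_j * x_j over present features.
w = [2.0, -1.0, 0.5]
def f(S, x):
    return sum(w[j] * x[j] for j in S)

x = [1.0, 1.0, 2.0]
# For an additive model, phi_i reduces to w_i * x_i.
print(shapley_values(f, x, 3))
```

For additive models the marginal contribution of feature 𝑖 is the same in every coalition, so the subset weights (which sum to 1) leave it unchanged.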
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Feature contribution</head><p>We now use marginal contributions to define conceptually generalized contributions of input features. We assume a conceptual model 𝐶𝑀 with a bidirectional mapping between its concepts and all features. For each concept 𝑐 in 𝐶𝑀 , an n-ary concept contribution vector 𝑔 𝑜 𝑐 is constructed as the Hadamard product of the Shapley vector Φ and the input vector 𝑥 𝑐 for an output feature 𝑜 in output concept 𝑂. Here 𝑥 𝑐 retains only the feature values associated with concept 𝑐 and is 0 everywhere else. Vector 𝑔 𝑜 𝐶 is the contribution of all input concepts on an outcome feature 𝑜.</p><formula xml:id="formula_1">g^o_c = \Phi \circ x_c \quad \text{and} \quad g^o_C = \prod_{c \in C} g^o_c \circ \mathbf{1}_n</formula><p>First, the average of all SHAP values per input feature is calculated. The normalized value of 𝑔 𝑜 𝑐 , written ḡ 𝑜 𝑐 , is the relative contribution of a feature to an outcome feature. By normalizing over all input data 𝑋, ḡ 𝑜 𝑐 provides the relative contributions of each feature of concept 𝑐 with respect to output feature 𝑜, i.e., input features are superimposed on 𝑐 relative to 𝑜.</p></div>
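The feature-contribution step can be sketched as follows. The helper names, the toy Shapley vector, and the normalization over a single instance (rather than over all input data 𝑋, as in the text) are simplifying assumptions.

```python
def concept_masked_contribution(shap_values, x, concept_features):
    """g^o_c: elementwise (Hadamard) product of the Shapley vector and the
    input vector masked to the features of concept c (0 everywhere else)."""
    return [phi * xi if j in concept_features else 0.0
            for j, (phi, xi) in enumerate(zip(shap_values, x))]

def normalize(g):
    """Hypothetical normalization by the absolute total, giving relative
    feature contributions; the paper normalizes over all input data X."""
    total = sum(abs(v) for v in g) or 1.0
    return [v / total for v in g]

# Toy setup: features 0 and 1 belong to concept "applicant", feature 2 to "loan".
shap = [0.4, -0.1, 0.3]
x = [1.0, 1.0, 1.0]
g_applicant = concept_masked_contribution(shap, x, {0, 1})
print(normalize(g_applicant))
```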
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Concept contribution</head><p>Concept contributions depend on the type of outcome feature, i.e., whether classification or regression is applied. For binary classification, we define 𝑓 𝑂 (𝑋) as the sum of the feature contributions of all input concepts on an outcome feature 𝑜 in output concept 𝑂, excluding the contributions of features in 𝑂 because of their implicit strong collinearity with the output feature 𝑜:</p><formula xml:id="formula_2">f_O(X) = \sum_{c \in C \setminus \{O\}} \bar{g}^{o\top}_c \cdot \mathbf{1}_n</formula><p>For regression, feature contributions are evaluated relative to the features of concept 𝑂. The mean of the output features except outcome feature 𝑜 is calculated (𝜔). Only those input features with a larger feature contribution than 𝜔 are considered.</p><formula xml:id="formula_3">f_O(X) = \sum_{c \in C \setminus \{O\}} \bar{g}^{o\top}_c \cdot (1/\omega_n - 1)</formula><p>Concept contributions are the weighted summation of all aggregated feature contributions. The weight 𝑤 is the number of all features minus the number of selected features, plus 1. This accounts for decreasing feature contribution values as the number of features grows. Only features with strong feature contributions are selected. If, for instance, only one input feature is selected, feature contributions are not affected, while a large 𝑤 will reduce the effect of feature contributions on concept contributions.</p><formula xml:id="formula_4">\kappa_O(X) = \frac{1}{w} \sum_{o \in O} f_O(x)</formula><p>Note that 𝜅 𝑂 (𝑋) depends on the type of prediction: values of 𝜅 𝑂 (𝑋) can be compared among classifications, and among regressions, but not between classification and regression.</p><p>𝜅 𝑂 is determined for permutations over all homogeneous concepts 𝑐 ∈ 𝐶, i.e., 𝜅 𝑂 is determined for every concept 𝑐 ∈ 𝐶. This provides a measure of the local contributions of input concepts on a homogeneous concept given input 𝑥. Concept contributions on heterogeneous concepts require a functional model that integrates the features 𝑜 ∈ 𝑂.</p><p>A concept 𝑐 𝑖 with little concept contribution 𝜅 𝑂 (𝑋) on an output concept 𝑐 𝑝 has a weak conceptual relation with 𝑐 𝑝 , i.e., the outcome is only weakly affected by the presence of 𝑐 𝑖 . Conceptual relations are directed from 𝑐 𝑖 to 𝑐 𝑝 due to the game-theoretic construction of Shapley values. The semantics of conceptual relations are constrained by the concepts.</p></div>
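A minimal sketch of the weighting in 𝜅 𝑂 (𝑋): the function name and the toy contribution values are hypothetical, and only the classification-style weighted sum is shown.

```python
def concept_contribution(feature_contribs, n_selected, n_features):
    """kappa_O for one outcome concept: sum of (already normalized)
    feature contributions, damped by w = #features - #selected + 1.

    With a single selected feature, w stays small and the contribution
    passes through almost unchanged; aggregating many features over a
    large feature space yields a large w and a damped score."""
    w = n_features - n_selected + 1
    return sum(feature_contribs) / w

# Hypothetical normalized contributions of two selected "applicant"
# features on the outcome feature of another concept.
contribs = [0.8, -0.2]
print(concept_contribution(contribs, n_selected=2, n_features=3))
```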
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">ConceptSuperimposition</head><p>Feature contributions provide a directed similarity measure between input features and output features, analogous to local similarity measures in social networks (e.g., <ref type="bibr" target="#b19">[20]</ref>). Feature contributions are consistent with the properties of additive feature attribution methods, i.e., local accuracy, missingness, and consistency <ref type="bibr" target="#b17">[18]</ref>.</p><p>Concept contribution abstracts from features to concepts and determines directed contributions between directly connected concepts (cf. Game 1 <ref type="bibr" target="#b20">[21]</ref>). The properties of additive feature attribution are not maintained because of the lack of output values at the concept level. Therefore, concept contribution is a score, derived from feature contributions, that measures the directed local contribution of one concept to another.</p><p>We call the attribution of concepts by concept contributions ConceptSuperimposition. It closes a cycle between conceptual models, data, and ML models that consists of three steps. First, conceptual models provide constraints on the data that is considered for ML model development. Second, the data is used for constructing ML models. Third, concept contributions elevate the patterns found by ML models to a conceptual level that is fed back to conceptual models. Thus, concept contributions can be used for confirmation of conceptual models, i.e., for evaluating whether patterns identified from data are consistent with conceptual models and, thus, with the shared understanding of the actors involved.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Example</head><p>To illustrate the application of ConceptSuperimposition, we use publicly available data (10 GB) from the Home Mortgage Disclosure Act (HMDA) website (https://www.consumerfinance.gov/data-research/hmda/). This data contains the 2020 mortgage application data collected in the U.S. under the Home Mortgage Disclosure Act. The dataset consists of a sample of 3,481,348 applications for single-family, principal-residence purchases, i.e., 5% of the HMDA dataset. The data comprises 99 variables, including payment history, credit history, credit mix, demographics, income, characteristics of the loan (e.g., purpose of the loan, interest rate, total loan costs), and census data (e.g., census tract, tract population). The target variable indicates whether a mortgage is originated (target = 1) or denied (target = 0). Of the sample applications in the dataset, 44.3% were for refinancing, 32.42% for home purchase, and 14.96% for cash-out refinancing. 83.56% of the applications were approved. 69.99% of the applications belonged to White applicants and 6.15% to Black or African American applicants (the approval rate for White applicants was 85.33% and for Black applicants 71.51%). (See Figure <ref type="figure" target="#fig_3">3</ref> for a fragment of the conceptual model.)</p><p>For the comparison of concept contributions, the data needs to be standardized. Categorical features are transformed by one-hot encoding, except for the output feature 𝑦. After data engineering, the dataset contains 52 features, of which 20 are associated with the concept applicant, 14 with the concept loan, and 18 with the concept 𝑙𝑜𝑎𝑛_𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛.</p></div>
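The one-hot encoding step described above can be sketched in plain Python. The record layout and column names here are illustrative, not the actual HMDA schema.

```python
def one_hot(rows, column, categories=None):
    """One-hot-encode a categorical column (the output feature y would be
    excluded upstream). rows is a list of dicts; returns new rows in which
    the column is replaced by indicator features named '<column>=<value>'."""
    cats = categories or sorted({r[column] for r in rows})
    encoded = []
    for r in rows:
        enc = {k: v for k, v in r.items() if k != column}
        for c in cats:
            enc[f"{column}={c}"] = 1 if r[column] == c else 0
        encoded.append(enc)
    return encoded

# Illustrative applications (hypothetical column names, not HMDA fields).
apps = [{"loan_type": "conventional", "income": 75},
        {"loan_type": "fha", "income": 50}]
print(one_hot(apps, "loan_type"))
```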
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Feature Contribution</head><p>All five concepts are used for making a binary decision on loan applications, i.e., the concept Decision with a feature 𝑎𝑐𝑡𝑖𝑜𝑛_𝑡𝑎𝑘𝑒𝑛. We use XGBoost classification for predicting 𝑎𝑐𝑡𝑖𝑜𝑛_𝑡𝑎𝑘𝑒𝑛.</p><p>Performance metrics for the model on the test dataset are: accuracy (0.995), precision (0.999), recall (0.995), and F-measure (0.997).</p><p>For all three input concepts, i.e., Applicant, Loan, and LoanApplication, we determined feature contributions on the concept Decision, modeled by a single feature (𝐴𝑐𝑡𝑖𝑜𝑛𝑇𝑎𝑘𝑒𝑛), and on three permutations, i.e. </p></div>
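The metrics reported above can be computed from confusion-matrix counts as follows; the counts in the example are illustrative, not the paper's actual test results.

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F-measure from the counts of
    true/false positives and negatives of a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts (not the model's actual confusion matrix).
print(classification_metrics(tp=95, fp=1, tn=3, fn=1))
```

Note that with a heavily imbalanced target (83.56% approvals), accuracy alone is optimistic, which is why precision, recall, and F-measure are reported alongside it.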
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Concept contributions</head><p>For each input concept (here: applicant, loan application, and loan), concept contributions are determined. As a result, 𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑛𝑡 provides a strong concept contribution 𝜅 𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑛𝑡 (𝑋) of 0.610 to LoanApplication, while 𝑙𝑜𝑎𝑛𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛 provides a relatively high concept contribution 𝜅 𝑙𝑜𝑎𝑛𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛 (𝑋) of 3.177 to 𝑙𝑜𝑎𝑛 (cf. Table <ref type="table" target="#tab_2">2</ref>). For simplicity, we only used one feature per outcome concept.</p><p>For the HMDA dataset, concept contributions indicate a strong directed connection from applicant to 𝑙𝑜𝑎𝑛_𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛 (cf. Figure <ref type="figure">4</ref>). This supports the establishment of a relation between these two concepts. The same holds for 𝑙𝑜𝑎𝑛𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑡𝑖𝑜𝑛 to 𝑙𝑜𝑎𝑛 and for 𝑙𝑜𝑎𝑛 to 𝑎𝑝𝑝𝑙𝑖𝑐𝑎𝑛𝑡. The latter concept contribution is not captured by the original conceptual model (cf. Figure <ref type="figure" target="#fig_3">3</ref>). This is an example of how concept contributions 𝜅 𝑝 can be leveraged for the automatic scrutiny of conceptual models associated with datasets and database implementations. In this example, we found support for a revised conceptual model with a relation from loan to applicant. Note that concept contributions do not provide link semantics, such as cardinalities or types of links. Proposals for revisions based on ConceptSuperimposition help domain experts, ML developers, and business analysts align conceptual models with data and ML models.</p></div>
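Using the concept-contribution scores from Table 2, the automatic scrutiny described above can be sketched as follows. The threshold and the function itself are assumptions for illustration, not part of the method's formalization; the one entry Table 2 leaves unspecified is represented as None.

```python
def suggest_relations(kappa, threshold=0.3):
    """Suggest directed conceptual relations (src -> dst) wherever the
    concept contribution kappa[src][dst] exceeds a chosen threshold.
    Diagonal entries and missing scores are skipped."""
    return [(src, dst, score)
            for src, row in kappa.items()
            for dst, score in row.items()
            if src != dst and score is not None and score > threshold]

# Concept-contribution scores as reported in Table 2.
kappa = {
    "Applicant":       {"Applicant": 1, "LoanApplication": 0.610, "Loan": 0.387},
    "LoanApplication": {"Applicant": 0.0262, "LoanApplication": 1, "Loan": 3.177},
    "Loan":            {"Applicant": 0.108, "LoanApplication": None, "Loan": 1},
}
for src, dst, score in suggest_relations(kappa):
    print(f"{src} -> {dst}: {score}")
```

With this (hypothetical) threshold, the strong Applicant-to-LoanApplication and LoanApplication-to-Loan connections discussed in the text are recovered; a lower threshold would also surface Loan to Applicant.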
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>This research proposes that ML models and conceptual models can be used together effectively for analyzing the prior conceptual knowledge used for data selection. A conceptual model represents agreed-upon domain knowledge. In this work, we first provide a theoretical basis for using conceptual models in the domain of explainable AI. We then use the Framework for Mental Models and Conceptual Models for Machine Learning to develop a formalized and holistic ConceptSuperimposition method. The method is formalized, and its utility is demonstrated through an application to the home mortgage domain.</p><p>The new ConceptSuperimposition method can be used to improve ML explainability and should be especially effective in situations where there is insufficient data to extract relevant domain knowledge in a data-driven manner. Future work is needed to apply the framework and method to examples in other domains. In future studies, we hope to evaluate the increased transparency due to the new method by conducting interviews, focus groups, and laboratory experiments with stakeholders seeking to understand the decision logic behind machine learning models. We also plan to investigate the benefit of this method in combination with other existing approaches to explainability. We do not position this method as an alternative; rather, we believe it could complement existing methods by extrapolating their outputs onto conceptual models.</p><p>In future work, we will also address the integration of concept contributions for classification and regression.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Framework for Mental Models and Conceptual Models for Machine Learning</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>1. Marginal contribution: determination of Shapley values for predicting features. 2. Feature contribution: local contributions of outcome features associated with outcome concepts. 3. Concept contribution: local contributions on outcome concepts.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: ConceptSuperimposition</figDesc><graphic coords="7,189.64,84.19,216.00,173.41" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Conceptual Model for the HMDA loan dataset</figDesc><graphic coords="8,171.64,84.19,251.98,66.51" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>(1) LoanApplication and Loan on Applicant, (2) Applicant and Loan on LoanApplication, and (3) Applicant and LoanApplication on Loan.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>ML-based Information Systems</head><label></label><figDesc>Content of the framework diagram: mental models as implicit/tacit knowledge (Doyle &amp; Ford 98: conceptual rather than a mental image; a representation rather than cognitive processes); conceptual models as explicit knowledge (Wand &amp; Weber 02: grammar, method, script, context; ontologies) over data (representations) of entities and events; and AI/ML models and representations (incl. feature engineering, architecture design, model selection and optimization, performance evaluation); grounded in the world and the data sphere via (raw) data on being.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1</head><label>1</label><figDesc>Feature contributions for two outcome concepts based on classification (C) and two by regression (R)</figDesc><table><row><cell>Concepts</cell><cell></cell><cell>Decision (C)</cell><cell>Applicant (C)</cell><cell>LoanApplication (R)</cell><cell>Loan (R)</cell></row><row><cell></cell><cell>Features (total 21)</cell><cell>ActionTaken</cell><cell>PurchaseType</cell><cell>PropertyValue</cell><cell>LoanAmount</cell></row><row><cell>Applicant</cell><cell>income</cell><cell>-0.014</cell><cell>-0.014</cell><cell>0.161</cell><cell>0.387</cell></row><row><cell></cell><cell>purchaser_type, age</cell><cell>0.266</cell><cell>0.265</cell><cell>0.449</cell><cell></cell></row><row><cell></cell><cell>loan_term</cell><cell>0.081</cell><cell>-0.081</cell><cell></cell><cell></cell></row><row><cell>LoanApplication</cell><cell>origination_charges, loan_type, property_value</cell><cell>0.073 0.034 0.037</cell><cell>0.073 0.034</cell><cell>0.868 0.567</cell><cell></cell></row><row><cell></cell><cell>loan_product_type</cell><cell></cell><cell></cell><cell>1.742</cell><cell></cell></row><row><cell>Loan</cell><cell>loan_amount</cell><cell>-0.025</cell><cell>-0.026</cell><cell></cell><cell></cell></row><row><cell></cell><cell>rate_spread</cell><cell>-0.026</cell><cell>-0.026</cell><cell></cell><cell></cell></row><row><cell></cell><cell>total_loan_costs</cell><cell>0.160</cell><cell>0.160</cell><cell>-0.838</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2</head><label>2</label><figDesc>Concept contributions 𝜅 𝑐 (𝑥)</figDesc><table><row><cell>𝜅 𝑐 (𝑥)</cell><cell>Applicant (C)</cell><cell>LoanApplication (R)</cell><cell>Loan (R)</cell></row><row><cell>Applicant</cell><cell>1</cell><cell>0.610</cell><cell>0.387</cell></row><row><cell>LoanApplication</cell><cell>0.0262</cell><cell>1</cell><cell>3.177</cell></row><row><cell>Loan</cell><cell>0.108</cell><cell>mc</cell><cell>1</cell></row></table></figure>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>(V. C. Storey) https://iss.uni-saarland.de (W. Maass) 0000-0003-4057-0924 (W. Maass); 0000-0002-7477-7379 (A. Castellanos); 0000-0003-1289-6679 (M. C. Tremblay); 0000-0002-8735-1553 (V. C. Storey</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>McCorduck</surname></persName>
		</author>
		<title level="m">Machines who think: A personal inquiry into the history and prospects of artificial intelligence</title>
				<imprint>
			<publisher>CRC Press</publisher>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Get another label? improving data quality and data mining using multiple, noisy labelers</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">S</forename><surname>Sheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Provost</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">G</forename><surname>Ipeirotis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining</title>
				<meeting>the 14th ACM SIGKDD international conference on Knowledge discovery and data mining</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="614" to="622" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Swarm intelligence for self-organized clustering</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Thrun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ultsch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">290</biblScope>
			<biblScope unit="page">103237</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Peeking inside the black-box: a survey on explainable artificial intelligence (xai)</title>
		<author>
			<persName><forename type="first">A</forename><surname>Adadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Berrada</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE access</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="52138" to="52160" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Arrieta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Díaz-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Del Ser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bennetot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tabik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barbado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>García</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gil-López</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Molina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Benjamins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Fusion</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="82" to="115" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Conceptual modeling and telos</title>
		<author>
			<persName><forename type="first">J</forename><surname>Mylopoulos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conceptual Modeling, Databases, and CASE: An Integrated View of Information Systems Development</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Loucopoulos</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Zicari</surname></persName>
		</editor>
		<imprint>
			<publisher>John Wiley &amp; Sons</publisher>
			<date type="published" when="1992">1992</date>
			<biblScope unit="page" from="49" to="68" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Pairing conceptual modeling with machine learning</title>
		<author>
			<persName><forename type="first">W</forename><surname>Maass</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">C</forename><surname>Storey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data &amp; Knowledge Engineering</title>
		<imprint>
			<biblScope unit="page">101909</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Preface of the first workshop models in ai</title>
		<author>
			<persName><forename type="first">U</forename><surname>Reimer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bork</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fettke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tropmann-Frick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Modellierung (Companion)</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="128" to="129" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Towards a multi-objective modularization approach for entity-relationship models</title>
		<author>
			<persName><forename type="first">D</forename><surname>Bork</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Garmendia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wimmer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ER Forum/Posters/Demos</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="45" to="58" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Superimposition: augmenting machine learning outputs with conceptual models for explainable ai</title>
		<author>
			<persName><forename type="first">R</forename><surname>Lukyanenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Castellanos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">C</forename><surname>Storey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Castillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Tremblay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Parsons</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Conceptual Modeling</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="26" to="34" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Using conceptual modeling to support machine learning</title>
		<author>
			<persName><forename type="first">R</forename><surname>Lukyanenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Castellanos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Parsons</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Tremblay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">C</forename><surname>Storey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Advanced Information Systems Engineering</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="170" to="181" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Inductive discovery by machine learning for identification of structural models</title>
		<author>
			<persName><forename type="first">W</forename><surname>Maass</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Shcherbatyi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conceptual Modeling -37th International Conference, ER 2018</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">J</forename><surname>Trujillo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><forename type="middle">C</forename><surname>Davis</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">X</forename><surname>Du</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><forename type="middle">W</forename><surname>Ling</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Lee</surname></persName>
		</editor>
		<meeting><address><addrLine>Xi&apos;an, China</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2018">October 22-25, 2018. 2018</date>
			<biblScope unit="volume">11157</biblScope>
			<biblScope unit="page" from="545" to="552" />
		</imprint>
	</monogr>
	<note>Proceedings</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Data-driven, statistical learning method for inductive confirmation of structural models</title>
		<author>
			<persName><forename type="first">W</forename><surname>Maass</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Shcherbatyi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">50th Hawaii International Conference on System Sciences, HICSS 2017</title>
				<editor>
			<persName><forename type="first">T</forename><surname>Bui</surname></persName>
		</editor>
		<meeting><address><addrLine>Hilton Waikoloa Village, Hawaii, USA; AISeL</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">January 4-7, 2017. 2017</date>
			<biblScope unit="page" from="1" to="10" />
		</imprint>
		<respStmt>
			<orgName>AIS Electronic Library</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Gentner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Stevens</surname></persName>
		</author>
		<title level="m">Mental models</title>
				<imprint>
			<publisher>Psychology Press</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Mental models: Towards a Cognitive Science of Language, Inference, and Consciousness</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">N</forename><surname>Johnson-Laird</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1983">1983</date>
			<publisher>Harvard Univ Press</publisher>
			<pubPlace>Cambridge, MA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Comprehension as the construction of mental models</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">N</forename><surname>Johnson-Laird</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Philosophical Transactions of the Royal Society of London. B, Biological Sciences</title>
		<imprint>
			<biblScope unit="volume">295</biblScope>
			<biblScope unit="page" from="353" to="374" />
			<date type="published" when="1981">1981</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Stachowiak</surname></persName>
		</author>
		<title level="m">Allgemeine modelltheorie</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1973">1973</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A unified approach to interpreting model predictions</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Lundberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-I</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 31st international conference on neural information processing systems</title>
				<meeting>the 31st international conference on neural information processing systems</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="4768" to="4777" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">A value for n-person games</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">S</forename><surname>Shapley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Contributions to the Theory of Games (AM-28)</title>
				<editor>
			<persName><forename type="first">H</forename><forename type="middle">W</forename><surname>Kuhn</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>Tucker</surname></persName>
		</editor>
		<imprint>
			<publisher>Princeton University Press</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">II</biblScope>
			<biblScope unit="page" from="307" to="318" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Link prediction in complex networks: A survey</title>
		<author>
			<persName><forename type="first">L</forename><surname>Lü</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zhou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Physica A: statistical mechanics and its applications</title>
		<imprint>
			<biblScope unit="volume">390</biblScope>
			<biblScope unit="page" from="1150" to="1170" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Efficient computation of the shapley value for game-theoretic network centrality</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">P</forename><surname>Michalak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">V</forename><surname>Aadithya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">L</forename><surname>Szczepanski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ravindran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Jennings</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Artificial Intelligence Research</title>
		<imprint>
			<biblScope unit="volume">46</biblScope>
			<biblScope unit="page" from="607" to="650" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
