<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshops, October</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Symbolic Vs Sub-symbolic AI Methods: Friends or Enemies?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Eleni Ilkou</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maria Koutraki</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>L3S Research Center</institution>
          ,
          <addr-line>Appelstrasse 9a, 30167 Hannover</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Leibniz University of Hannover</institution>
          ,
          <addr-line>Welfengarten 1, 30167 Hannover</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <volume>1</volume>
      <fpage>9</fpage>
      <lpage>20</lpage>
      <abstract>
<p>There is a long and unresolved debate between the symbolic and sub-symbolic methods. However, in recent years, there has been a push towards in-between methods. In this work, we provide a comprehensive overview of the symbolic, sub-symbolic and in-between approaches focused on the domain of knowledge graphs, namely, schema representation, schema matching, knowledge graph completion, link prediction, entity resolution, entity classification and triple classification. We critically present key characteristics, advantages and disadvantages of the main algorithms in each domain, and review the use of these methods in knowledge graph related applications.</p>
      </abstract>
      <kwd-group>
        <kwd>Symbolic methods</kwd>
        <kwd>sub-symbolic methods</kwd>
        <kwd>in-between methods</kwd>
        <kwd>knowledge graph tasks</kwd>
        <kwd>knowledge graph completion</kwd>
        <kwd>schema</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Symbolic and sub-symbolic represent the two main branches of Artificial Intelligence (AI). The AI field saw huge progress and established itself in the 1950s, after some of the most notable and inaugural works of McCulloch and Pitts, who in 1943 established the foundations of neural networks (NN), and Turing, who introduced in the 1950s the test of intelligence for machines, known as the Turing test.</p>
      <p>Since its invention, the field has seen ups and downs in its development, which are colloquially known as the AI seasons and are characterised as “summers” and “winters”. The exact periods of these ups and downs are unclear; however, we adopt an intermediate convention based on Wikipedia and Henry Kautz’s talk1 “The Third AI Summer” at AAAI 2020. We display a timeline of these developments in Figure 1.</p>
      <p>The first AI summer, also called the golden years, begins a few years after the birth of AI, and it was based on the optimism in problem solving and reasoning. The dominant paradigm was symbolic AI until the 1980s, when sub-symbolic AI started taking the lead and gaining attention until recent years. There is a long and unresolved debate between the two different approaches. However, this grapple between the different AI domains is approaching its end, as we are currently experiencing the third AI summer, where the presiding wave is the combination of symbolic and sub-symbolic AI approaches, which we refer to as in-between methods.</p>
      <p>
        Table 1 shows an overview of some of the basic differing characteristics of the symbolic and sub-symbolic AI methods. It presents an easy visual comparison between the two AI fields, as discussed in [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ] and according to our thorough analysis of the fields. Apart from the core symbolic or sub-symbolic methods, nowadays there are symbolic applications with sub-symbolic characteristics and vice versa [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. We choose to adopt an annotation where a method belongs to symbolic or sub-symbolic if it uses only symbolic or sub-symbolic parts, respectively; otherwise, we categorise it in the in-between methods.
      </p>
      <p>The main differences between these two AI fields are the following: (1) symbolic approaches produce logical conclusions, whereas sub-symbolic approaches provide associative results. (2) Human intervention is common in the symbolic methods, while the sub-symbolic methods learn and adapt to the given data. (3) The symbolic methods perform best when dealing with relatively small and precise data, while the sub-symbolic ones are able to handle large and noisy datasets.</p>
      <p>In this paper, we discuss in detail some of the well-known approaches in each AI domain, and their application use-cases in some of the most prominent downstream tasks in the domain of knowledge graphs. We focus on their applicability in schema representation, schema matching, knowledge graph completion, and more specifically in entity resolution, link prediction, entity and triple classification. In this work, we make the following contributions:</p>
      <p>• An overview of the characteristics, advantages and disadvantages of the symbolic and sub-symbolic AI methods (Sections 2 and 3).</p>
      <p>[Figure 1: Timeline of AI developments from 1940 to 2020, marking the birth of AI, the Turing test, the foundations of NN, the golden years (1st AI summer), the boom (2nd AI summer), and the start of the in-between models.]</p>
      <p>• An analysis of the in-between methods and their different categories as they are presented in the literature, and their general characteristics (Section 4).</p>
      <p>• An overview of the most common applications of the symbolic, sub-symbolic and in-between methods in knowledge graphs (Section 5).</p>
      <p>The rest of this paper is structured as follows: Section 2 presents an overview of the main characteristics of the symbolic AI methods. Similarly, in Section 3 we discuss the main characteristics of the sub-symbolic methods. In Section 4, we present an overview of the approaches that combine both symbolic and sub-symbolic methods, namely, the in-between methods. Then, in Section 5 we present some of the most important downstream tasks in the field of knowledge graphs, and we analyse the different approaches (symbolic, sub-symbolic and in-between) that have been followed in the literature to tackle these tasks.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Symbolic Methods</title>
      <p>
        Symbolic methods, also known as Good Old Fashioned Artificial Intelligence (GOFAI), refer to human-readable and explainable processes. The symbolic techniques are defined by explicit symbolic methods, such as formal methods and programming languages, and are usually used for deductive knowledge [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. They consist of first-order logic rules, while other methods include rules, ontologies, decision trees, planning and reasoning. According to Benderskaya et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], symbolic AI is usually associated with knowledge bases and expert systems, and it is a continuation of the von Neumann and Turing machines.
      </p>
      <p>
        A key characteristic of symbolic methods is their ability to explain and reason about the reached conclusion. Furthermore, even their intermediate steps are often explainable. Symbolic systems provide a human-understandable computation flow, which makes them easier to debug, explain and control. In particular, rule-based systems have the advantage of rule modularity, as the rules are discrete and autonomous knowledge units that can easily be inserted into or removed from a knowledge base [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Moreover, they provide knowledge interoperability, meaning that in closely related applications, knowledge transfer is possible. Also, they are better suited for abstract problems, as they are not highly dependent on the input data.
      </p>
      <p>
        On the other hand, symbolic methods are typically not well suited for cases where datasets have data-quality issues and might be prone to noise. Under such circumstances, they often yield sub-optimal results [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and they may fail to reach a conclusion at all (“brittleness”) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Further, the rules and the knowledge are usually hard-coded by hand, creating the Knowledge Acquisition Bottleneck [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which refers to the high cost of human involvement in converting real-world problems into inputs for symbolic AI systems. Finally, the maintenance of rule bases is difficult, as it requires complex verification and validation.
      </p>
      <p>In terms of applications, symbolic methods work best on well-defined and static problems, and on manipulating and modelling abstractions. However, traditionally, they do not perform well in real-time dynamic assessments and on massive empirical data streams.</p>
    </sec>
    <sec id="sec-2-1">
      <title>3. Sub-symbolic Methods</title>
      <p>Contrary to symbolic methods, where the learning happens through human supervision and intervention, sub-symbolic methods establish correlations between input and output variables. Such relations have high complexity, and are often formalized by functions that map the input to the output data or the target variables. Sub-symbolic methods represent the Connectionism movement, which tries to mimic the human brain and its complex network of interconnected neurons with Artificial Neural Networks (ANN). Sub-symbolic AI includes statistical learning methods, such as Bayesian learning, deep learning, backpropagation, and genetic algorithms.</p>
      <p>The sub-symbolic methods are more robust against noisy and missing data, and generally have high computing performance. They are easier to scale up; therefore, they are well suited for big datasets and large knowledge graphs. Moreover, they are better for perceptual problems, and they require less knowledge upfront.</p>
      <p>
        However, connectionist methods have some disadvantages. The most important one is the lack of interpretability of these methods. This presents a big obstacle to their applicability in domains where explanations and interpretations are key points. Further, based on the General Data Protection Regulation of the European Union [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], sub-symbolic techniques are proving to be usually restricted in critical or high-risk decision applications, such as medical, legal or military decision applications and autonomous cars. Furthermore, they are highly dependent on the training data they process. At first glance, this might not seem like a problem; however, it results in an inability to extrapolate results to unseen instances or data which do not follow a similar distribution as the training data. Additionally, due to the typically large amount of parameters that need to be estimated in sub-symbolic models, they require huge computation power and huge amounts of data. Another arising issue is the availability of high-quality data for training the algorithms, which are often difficult to find. Data need to be correctly labelled and to be decently representative in order not to lead to biased outcomes [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>The most common applications of sub-symbolic methods include prediction, clustering, pattern classification and recognition of objects, and Natural Language Processing (NLP) tasks. Further, we find in sub-symbolic applications text classification and categorization, as well as recognition of speech and text.</p>
    </sec>
    <sec id="sec-3">
      <title>4. In-between Methods</title>
      <p>Despite the fundamental differences between symbolic and sub-symbolic AI, in the last years there is a link between them with the in-between methods. Since the late 1980s, there has been a discussion about the need for a cognitive sub-symbolic level [11]. The in-between methods consist of the efforts to bridge the gap between the symbolic and sub-symbolic paradigms. The idea is to create a system which can combine the advantages of both methods: the ability to learn from the environment and the ability to reason about the results.</p>
      <p>Most of the recent applications use a combination of symbolic and connectionist parts to create their algorithms. The terminology used for the range between the symbolic and sub-symbolic varies; as can be seen in this section, many methods are found with different names. Therefore, we refer to them as in-between methods.</p>
      <sec id="sec-4-1">
        <title>4.1. General characteristics</title>
        <p>The advantages of the in-between computations are evident and measurable in specific applications, with higher accuracy, efficiency and knowledge readability [12]. They have an explanation capacity with no need for a-priori assumptions, and they are comprehensive cognitive models which integrate statistical learning with logical reasoning. They also perform well with noisy data [13]. Another advantage is that, during learning, these systems can combine logical rules with data, while fine-tuning the knowledge based on the input data. Overall, they seem suitable for applications which have large amounts of heterogeneous data and need knowledge descriptions [14].</p>
        <p>Among the in-between algorithms we find the Knowledge-based Neural Networks (KBNN or KBANN) [16], Hybrid Expert Systems (HES) [17], Connectionist Inductive Learning and Logic Programming (CILP) and Connectionist Temporal Logics of Knowledge (CTLK) [14], Graph Neural Networks (GNN) [18], and Tensor Product Representation [19], in which the core is a neural network that is loosely coupled with a symbolic problem-solver. Also, we find the Logic Tensor Networks [20] and Neural Tensor Networks [21] for representing complex logical structures; the latter’s extensions are the knowledge graph translating embedding models [22].</p>
        <p>
          The applications of these methods can be found in many domains which combine learning and reasoning parts according to a specific problem. However, the existing hybrid models are non-generalizable; they cannot be applied in multiple domains, as each time the model is developed to answer a specific question. Also, there is no guide for deciding the combination of symbolic and sub-symbolic parts for computation and representation [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
        </p>
      </sec>
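      <p>As a concrete illustration of the combination described above, a learning component proposing facts and a reasoning component vetoing them, consider the following minimal Python sketch; the entity names, scores, and the type rule are invented for illustration and do not come from any cited system:</p>
      <preformat>
```python
# Minimal in-between sketch: a (mock) sub-symbolic scorer proposes
# candidate facts, and a symbolic type rule filters them.
# All names, scores, and the rule are hypothetical illustrations.

def neural_score(triple):
    """Stand-in for a learned model's confidence in a triple."""
    scores = {
        ("Anna_Karenina", "writtenBy", "Leo_Tolstoy"): 0.92,
        ("Anna_Karenina", "bornIn", "Russia"): 0.55,
    }
    return scores.get(triple, 0.0)

# A symbolic constraint: the subject of bornIn must be a Person.
entity_types = {"Leo_Tolstoy": "Person", "Anna_Karenina": "Book"}

def satisfies_rules(triple):
    h, r, t = triple
    if r == "bornIn":
        return entity_types.get(h) == "Person"
    return True

def accept(triple, threshold=0.5):
    # Learning proposes, reasoning disposes: both parts must agree.
    return neural_score(triple) > threshold and satisfies_rules(triple)

print(accept(("Anna_Karenina", "writtenBy", "Leo_Tolstoy")))  # True
print(accept(("Anna_Karenina", "bornIn", "Russia")))          # False (a Book cannot be born)
```
      </preformat>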
      <p>Recent downstream applications tend to combine symbolic and sub-symbolic methods in their computation model, more often than strictly using only one of the two, as can be seen in Section 5.</p>
      <sec id="sec-4-2">
        <title>4.2. Existing Categorisations in Literature</title>
        <p>
          Some of the in-between methods are found in the literature as connectionist expert systems (or neural network based expert systems) [23], multi-agent systems [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], hybrid representations [24], neural-fuzzy [25, 26] and neural-symbolic (or neurosymbolic [15]) computing, learning and reasoning (http://www.neural-symbolic.org/), and its sub-type neurules [27].
        </p>
        <p>To the best of our knowledge, there is no report containing all the in-between methods. Moreover, there is no standard categorisation or common taxonomy for the methods which belong in the range between the symbolic and sub-symbolic techniques. The terminology used varies; therefore, we refer to them as in-between methods, and not as neural-symbolic, hybrid or unified. In the last years, there is an increased interest in in-between methods [28], and there are some review works in the domain, each one presenting a taxonomy. Most of them refer to the in-between methods as neural-symbolic approaches.</p>
        <p>Garcez et al. [14] present a neural-symbolic computing table that separates the methods into applications of knowledge representation, learning, reasoning and explainability. Bader and Hitzler [13] study the dimensions of neural-symbolic integration and propose the dimensions of usage, language and interrelation. In the neural-symbolic techniques, they identify two models, the hybrid and the integrated (also called unified or translational) [29]. The difference between the two is that hybrid models combine two or more symbolic and sub-symbolic techniques which run in parallel, while the integrated neural-symbolic systems consist of a main sub-symbolic component which uses symbolic knowledge in the processing.</p>
        <p>Hilario [15] separates neurosymbolic integration into unified and hybrid approaches, each consisting of two subcategories. Both categories, unified and hybrid, have a similar description to Bader and Hitzler [13]. In the unified approaches, Hilario identifies two main categories, the neuronal and the connectionist symbol processing, and in the hybrid approaches the translational and functional hybrids, respectively. The translational models translate representations between NNs and symbolic structures. Furthermore, she creates a visual continuous representation from connectionism to symbolism, which includes the categories she proposes in the range between sub-symbolic and symbolic techniques, as illustrated in Figure 2.</p>
        <p>In continuation of Hilario’s model, McGarry et al. [30] focus on hybrid rule-based systems. They propose the categorisation of symbolic rule and neural network integrations into unified and transformational, and add the modular subcategory. The latter covers the hybrid models that consist of several ANNs and rule-based modules, which are coupled and integrated to various degrees. They argue that most hybrid models are modular.</p>
      </sec>
      <sec id="sec-3-1">
        <title>5. Knowledge Graph Tasks</title>
        <p>There are plenty of symbolic, sub-symbolic and in-between applications in different domains. Our main focus in this study is knowledge graph related applications.</p>
        <p>A knowledge graph (KG) consists of a set of triples T ⊆ E × R × (E ∪ L), where E is a set of resources that we refer to as entities, L a set of literals, and R a set of relations. Given a triple (h, r, t) (aka a statement), h is known as the subject, r as the relation, and t as the object. A KG can represent any kind of information about the world, such as (Anna_Karenina, writtenBy, Leo_Tolstoy) and (Leo_Tolstoy, bornIn, Russia), which means that Anna Karenina is written by Leo Tolstoy, who was born in Russia. The above notation will help us explain and analyse the following tasks.</p>
      </sec>
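      <p>The triple notation above can be made concrete with a minimal Python sketch; the triples are the running example from the text, while the variable names E and R simply mirror the sets in the definition:</p>
      <preformat>
```python
# A knowledge graph as a set of (h, r, t) triples, following the
# definition T ⊆ E × R × (E ∪ L) from the text.
KG = {
    ("Anna_Karenina", "writtenBy", "Leo_Tolstoy"),
    ("Leo_Tolstoy", "bornIn", "Russia"),
}

# Entities are the resources appearing as subjects or objects;
# relations are the middle elements of the triples.
E = {h for h, r, t in KG} | {t for h, r, t in KG}
R = {r for h, r, t in KG}

print(sorted(R))  # ['bornIn', 'writtenBy']
```
      </preformat>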
      <sec id="sec-3-2">
        <title>5.1. Schema Representation</title>
        <p>
          Schemata have been present from the beginning of databases and data management systems, and stand in for the structure of the data and knowledge. In the last years, there is attention towards linking and structuring data on the web. Connecting information on the web can also be achieved by schemata; an example is schema.org, which focuses on schema creation, representation and maintenance [31]. When modelling knowledge graphs, schemata can be used to prescribe high-level rules that the graph should follow [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. A knowledge schema, or otherwise schema representation, contains a conceptual model of the KG. A schema defines the types of entities and relations which can exist in a KG, and an abstract way of combining these entities and relations in (h,r,t) triples. In our example, a schema representation could exist in the form of triples stating that (Book, writtenBy, Author) and (Author, bornIn, Country) [32].
        </p>
        <p>Schema representation is traditionally a symbolic task. First-order logic, ontologies, and formal knowledge representation languages, such as RDF(S), OWL [33] and XML [34], as well as rules, have been used for schema formulations. Some of the most representative examples of schema representation in terms of knowledge graph construction are YAGO [35] and DBpedia [36]. These two of the most frequently used KGs follow a symbolic approach, as they mostly rely on rule mining techniques used to extract knowledge and represent it in RDF(S) terms.</p>
      </sec>
      <sec id="sec-3-3">
        <title>5.2. Schema Matching</title>
        <p>Different KGs use different schemata to represent the same information, which creates the need for schema matching. Schema matching or mapping, also found as schema alignment, happens between two or more KGs when we want to perform data integration or mapping, and it refers to the process of identifying semantically related objects. It is similar to entity resolution, in Section 5.3.1, with the difference that the latter cares about mapping object references, such as “L. Tolstoy” and “Leo Tolstoy”, while schema matching works on the schema definitions, such as Author and Person.</p>
        <p>Over the last decades, many models and prototypes have been introduced for schema matching. Based on a survey on schema matching [37], each model uses an input schema, most commonly an OWL data model, then RDF, and finally a document type definition. They process the symbolic input by using different models, which can be linguistic or language based, constraint-based, and structure-based. The linguistic matchers combine the symbolic input with sub-symbolic NLP algorithms [38]. The constraint-based matchers exploit the constraints in data features, such as the data types and ranges. The structure-based matchers focus on the database/graph structure. Both constraint- and structure-based models use mostly symbolic techniques, while there is also an interesting rise in combinations of matchers (hybrid models). The application domain for schema mapping evolves from specific to generic applications. The majority of these methods focus on class alignment; however, there are also works focusing on relation alignment [39, 40, 41].</p>
        <p>5.3. Knowledge Graph Completion</p>
        <p>Once a knowledge graph is created, it contains a lot of noisy and incomplete data [42]. In order to fill in the missing information of a constructed knowledge graph, we use the task of Knowledge Graph Completion (KGC). KGC, similar to knowledge graph identification [43], is an intelligent way of performing data cleaning. This is usually addressed by filling in the missing edges (link prediction), deduplicating entity nodes (entity resolution) and dealing with missing values.</p>
        <p>Mostly in-between methods are used for KGC, with the Knowledge Graph Embeddings (KGEs) being one of the most powerful and commonly used techniques. KGEs aim to create a low-dimensional vector representation of the KG and model relation patterns, hence reducing the complexity of KG related tasks while achieving high accuracy. We further analyse the KGC task into the specific sub-tasks of entity resolution and link prediction.</p>
        <p>5.3.1. Entity Resolution</p>
        <p>Entity resolution (ER) is also known as record linkage, reference matching or deduplication. It is the process of finding duplicated references or records in a dataset. It is related to data integration, as it is one of its foundational problems [44]. Based on our example, we will have to perform entity resolution between the entities “L. Tolstoy” and “Leo Tolstoy”, which can exist in the triples (Anna_Karenina, writtenBy, L._Tolstoy) and (Leo_Tolstoy, bornIn, Russia), and refer to the same person.</p>
        <p>Record linkage was introduced by Halbert L. Dunn in 1946. In the 1960s, there are statistical sub-symbolic models describing the process of entity resolution, which formulate the mathematical basis for many of the current models [45]. In the 1990s, machine learning techniques are applied to this task, and the techniques used are mostly based on the in-between methods [46]. Commonly, ER techniques rely on attribute similarity between the entities [47]. The algorithms deployed for ER are inspired by information retrieval and relational duplicate elimination [48].</p>
        <p>5.3.2. Link Prediction</p>
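        <p>As a minimal sketch of attribute-similarity ER, the following Python snippet matches the “L. Tolstoy” / “Leo Tolstoy” example with character-trigram Jaccard similarity; the trigram choice and the 0.5 threshold are our own illustrative assumptions, not taken from the cited systems, and real ER pipelines combine many such signals:</p>
        <preformat>
```python
# Attribute-similarity entity resolution sketch: two entity labels are
# matched when their character-trigram Jaccard similarity passes a
# (hypothetical) threshold.

def trigrams(s):
    # Normalize: lowercase, drop dots and spaces, then take 3-grams.
    s = s.lower().replace(".", "").replace(" ", "")
    return {s[i:i + 3] for i in range(max(len(s) - 2, 1))}

def jaccard(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta.intersection(tb)) / len(ta.union(tb))

def same_entity(a, b, threshold=0.5):
    return jaccard(a, b) > threshold

print(same_entity("L. Tolstoy", "Leo Tolstoy"))   # True
print(same_entity("L. Tolstoy", "Anna Karenina")) # False
```
        </preformat>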
        <p>Link prediction techniques, also known as edge prediction, have been applied in many different fields. Edge prediction refers to the task of adding new links to an existing graph. In practice, this can be used as a recommendation system for future connections, or as a completion-correction tool that foresees the missing links between entities.</p>
        <p>Link prediction is a well-studied field consisting of many different approaches. A survey of link prediction in complex networks [49] separates the approaches based on the method they use. It assigns the edge prediction problem to a few techniques that belong to the sub-symbolic field, such as AI-based ANNs, probabilistic and Monte Carlo algorithms. However, most of the solutions it proposes belong to the in-between range. The current state of the art for link prediction tasks is focused on in-between methods. KGEs play a big role in this task, and they can be found in different forms: translational models [22], neural based KGEs with logical rules [50], and hierarchy-aware KGEs [51]. In link prediction tasks, we find triple classification, entity classification, and head (?, writtenBy, Leo_Tolstoy), relation (Anna_Karenina, ?, Leo_Tolstoy), and tail (Anna_Karenina, writtenBy, ?) prediction, respectively. We additionally focus on the analysis of entity classification and triple classification in the next paragraphs.</p>
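        <p>The translational KGE idea can be illustrated with a TransE-style score, where a triple is scored by the distance between h + r and t, and tail prediction ranks the candidate tails; the 2-D vectors below are toy values invented so that the running example works, not learned embeddings:</p>
        <preformat>
```python
# TransE-style link prediction sketch: score(h, r, t) is the negative
# distance between (h + r) and t; tail prediction ranks candidate tails.
# The 2-D vectors are toy values chosen for illustration only.

emb = {
    "Anna_Karenina": (0.0, 0.0),
    "Leo_Tolstoy":   (1.0, 0.0),
    "Russia":        (1.0, 1.0),
    "writtenBy":     (1.0, 0.1),  # roughly Anna_Karenina -> Leo_Tolstoy
    "bornIn":        (0.1, 1.0),  # roughly Leo_Tolstoy  -> Russia
}

def score(h, r, t):
    (hx, hy), (rx, ry), (tx, ty) = emb[h], emb[r], emb[t]
    return -(((hx + rx - tx) ** 2 + (hy + ry - ty) ** 2) ** 0.5)

def predict_tail(h, r, candidates):
    # (Anna_Karenina, writtenBy, ?) is answered by the best-scoring tail.
    return max(candidates, key=lambda t: score(h, r, t))

entities = ["Leo_Tolstoy", "Russia", "Anna_Karenina"]
print(predict_tail("Anna_Karenina", "writtenBy", entities))  # Leo_Tolstoy
```
        </preformat>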
        <p>Entity Classification. It can also be found as node classification or type prediction in the literature. Entity classification tries to predict the type or the class of an entity given some characteristics. In our case, the input triple would be (Leo_Tolstoy, isA, ?), which could give the results (1, Person, 99%) and (2, Author, 98%).</p>
        <p>While most link prediction tasks are sub-symbolic based with a combination of some symbolic parts, entity classification is more related to the schema and ontology of the KG; hence, the techniques are symbolic based [52, 53].</p>
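        <p>A minimal sketch of such schema-driven, symbolic type prediction: entity types are read off the domain and range of the relations the entity participates in, in the spirit of the (Book, writtenBy, Author) schema example; the mini-schema below is hypothetical, not taken from the cited systems:</p>
        <preformat>
```python
# Symbolic type prediction sketch: infer entity types from the schema's
# domain/range of relations, as in (Book, writtenBy, Author).
# The mini-schema is a hypothetical illustration.

schema_domain = {"writtenBy": "Book", "bornIn": "Person"}
schema_range = {"writtenBy": "Author", "bornIn": "Country"}

KG = {
    ("Anna_Karenina", "writtenBy", "Leo_Tolstoy"),
    ("Leo_Tolstoy", "bornIn", "Russia"),
}

def infer_types(entity):
    types = set()
    for h, r, t in KG:
        if h == entity and r in schema_domain:
            types.add(schema_domain[r])  # entity is a valid subject of r
        if t == entity and r in schema_range:
            types.add(schema_range[r])   # entity is a valid object of r
    return types

print(sorted(infer_types("Leo_Tolstoy")))  # ['Author', 'Person']
```
        </preformat>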
        <p>Triple Classification. Triple classification is a binary problem which answers whether a triple (h,r,t) is true or not; for example, the input (Anna_Karenina, writtenBy, Leo_Tolstoy)? leads to the result (yes, 92%). Systems like the Trans* KGEs [22] use a density function to make predictions about the triple classification based on a probability function. Mayank Kejriwal [54] claims that the correct metric for this task with the usage of KGEs is accuracy, if the test data are balanced.</p>
        <p>Triple classification algorithms usually belong to the in-between methods, with some examples using neural tensor networks [21], time-aware [55], latent factor and semantic matching models.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>6. Conclusions</title>
      <p>We presented the symbolic, sub-symbolic and in-between methods in AI, and analysed the key characteristics, main approaches, advantages and disadvantages of each technique respectively. Further, we argued that the current, and possibly future, direction of progress is the application of the in-between methods. We justified this belief by discussing principal downstream tasks related to knowledge graphs.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work was partially funded by the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 860801.</p>
      <sec id="sec-5-1">
        <title>References</title>
        <p>… artificial intelligence systems—an introductory survey, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 10 (2020) e1356.</p>
        <p>[11] M. Frixione, G. Spinelli, S. Gaglio, Symbols and sub-symbols for representing knowledge: a catalogue raisonne, in: Proceedings of the 11th International Joint Conference on Artificial Intelligence - Volume 1, 1989, pp. 3–7.</p>
        <p>[12] A. d. Garcez, T. R. Besold, L. De Raedt, P. Földiak, P. Hitzler, T. Icard, K.-U. Kühnberger, L. C. Lamb, R. Miikkulainen, D. L. Silver, Neural-symbolic learning and reasoning: contributions and challenges, in: Proceedings of the AAAI Conference on Artificial Intelligence, 2015.</p>
        <p>[13] S. Bader, P. Hitzler, Dimensions of neural-symbolic integration - a structured survey, arXiv preprint cs/0511042 (2005).</p>
        <p>[14] A. d. Garcez, M. Gori, L. C. Lamb, L. Serafini, M. Spranger, S. N. Tran, Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning, arXiv preprint arXiv:1905.06088 (2019).</p>
        <p>[15] M. Hilario, An overview of strategies for neurosymbolic integration, Connectionist-Symbolic Integration: From Unified to Hybrid Approaches (1997) 13–36.</p>
        <p>[16] G. Agre, I. Koprinska, Case-based refinement of knowledge-based neural networks, in: Proceedings of the International Conference “Intelligent Systems: A Semiotic Perspective”, volume 2, 1996, pp. 20–23.</p>
        <p>[17] S. Sahin, M. R. Tolun, R. Hassanpour, Hybrid expert systems: A survey of current approaches and applications, Expert Systems with Applications 39 (2012) 4609–4617.</p>
        <p>[18] L. Lamb, A. Garcez, M. Gori, M. Prates, P. Avelar, M. Vardi, Graph neural networks meet neural-symbolic computing: A survey and perspective, arXiv preprint arXiv:2003.00330 (2020).</p>
        <p>[19] P. Smolensky, Tensor product variable binding and the representation of symbolic structures in connectionist systems, Artificial Intelligence 46 (1990) 159–216.</p>
        <p>[20] I. Donadello, L. Serafini, A. D. Garcez, Logic tensor networks for semantic image interpretation, arXiv preprint arXiv:1705.08968 (2017).</p>
        <p>[21] R. Socher, D. Chen, C. D. Manning, A. Ng, Reasoning with neural tensor networks for knowledge base completion, in: Advances in Neural Information Processing Systems, 2013, pp. 926–934.</p>
        <p>[22] H. Cai, V. W. Zheng, K. C.-C. Chang, A comprehensive survey of graph embedding: Problems, techniques, and applications, IEEE Transactions on Knowledge and Data Engineering 30 (2018) 1616–1637.</p>
        <p>[23] S. I. Gallant, Neural network learning and expert systems, MIT Press, 1993.</p>
        <p>[24] M. Moreno, D. Civitarese, R. Brandao, R. Cerqueira, Effective integration of symbolic and connectionist approaches through a hybrid representation, arXiv preprint arXiv:1912.08740 (2019).</p>
        <p>[25] R. Fullér, Neural fuzzy systems (1995).</p>
        <p>[26] L. Magdalena, A first approach to a taxonomy of fuzzy-neural systems, Connectionist Symbolic Integration (1997).</p>
        <p>[27] J. Prentzas, I. Hatzilygeroudis, Neurules - a type of neuro-symbolic rules: An overview, in: Combinations of Intelligent Methods and Applications, Springer, 2011, pp. 145–165.</p>
        <p>[28] A. d’Avila Garcez, Proceedings of the IJCAI International Workshop on Neural-Symbolic Learning and Reasoning, NeSy 2005 (2005).</p>
        <p>[29] R. Sun, F. Alexandre, Connectionist-Symbolic Integration: From Unified to Hybrid Approaches, Psychology Press, 2013.</p>
        <p>[30] K. McGarry, S. Wermter, J. MacIntyre, Hybrid neural systems: from simple coupling to fully integrated neural networks, Neural Computing Surveys 2 (1999) 62–93.</p>
        <p>[31] J. Ronallo, HTML5 microdata and schema.org, Code4Lib Journal (2012).</p>
        <p>[32] C. Belth, X. Zheng, J. Vreeken, D. Koutra, What is normal, what is strange, and what is missing in a knowledge graph: Unified characterization via inductive summarization, in: Proceedings of The Web Conference 2020, 2020, pp. 1115–1126.</p>
        <p>[33] K. Sengupta, P. Hitzler, Web ontology language (OWL), Encyclopedia of Social Network Analysis and Mining (2014).</p>
        <p>[34] F. Zhang, L. Yan, Z. M. Ma, J. Cheng, Knowledge representation and reasoning of XML with ontology, in: Proceedings of the 2011 ACM Symposium on Applied Computing, 2011, pp. 1705–1710.</p>
        <p>[35] F. M. Suchanek, G. Kasneci, G. Weikum, Yago: A large ontology from Wikipedia and WordNet, Journal of Web Semantics 6 (2008) 203–217.</p>
        <p>[36] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, M. Morsey, P. van Kleef, S. Auer, C. Bizer, DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia, Semantic Web 6 (2015) 167–195.</p>
        <p>[37] E. Sutanta, R. Wardoyo, K. Mustofa, E. Winarko, Survey: Models and prototypes of schema matching, International Journal of Electrical &amp; Computer Engineering (2088-8708) 6 (2016).</p>
        <p>[38] O. Unal, H. Afsarmanesh, et al., Using linguistic techniques for schema matching, in: International Conference on Software Technologies ICSOFT (2), 2006, pp. 115–120.</p>
        <p>[39] M. Koutraki, N. Preda, D. Vodislav, Online relation alignment for linked datasets, in: European Semantic Web Conference, Springer, 2017, pp. 152–168.</p>
        <p>[40] M. Koutraki, N. Preda, D. Vodislav, SOFYA: semantic on-the-fly relation alignment, in: International Conference on Extending Database Technol-</p>
        <p>[53] J. Sleeman, T. Finin, Type prediction for efficient coreference resolution in heterogeneous semantic graphs, in: 2013 IEEE Seventh International Conference on Semantic Computing, IEEE, 2013, pp. 78–85.</p>
        <p>ogy (EDBT), 2016. [54] M. Kejriwal, Advanced topic: Knowledge graph
[41] R. Biswas, M. Koutraki, H. Sack, Exploiting equiva- completion, in: Domain-Specific Knowledge Graph
lence to infer type subsumption in linked graphs, Construction, Springer, 2019, pp. 59–74.
in: European Semantic Web Conference, Springer, [55] T. Jiang, T. Liu, T. Ge, L. Sha, S. Li, B. Chang, Z. Sui,
2018, pp. 72–76. Encoding temporal information for time-aware link
[42] R. West, E. Gabrilovich, K. Murphy, S. Sun, R. Gupta, prediction, in: Proceedings of the 2016 Conference
D. Lin, Knowledge base completion via search- on Empirical Methods in Natural Language
Processbased question answering, in: Proceedings of the ing, 2016, pp. 2350–2354.
23rd international conference on World wide web,
2014, pp. 515–526.
[43] J. Pujara, H. Miao, L. Getoor, W. Cohen, Knowledge
graph identification, in: International Semantic</p>
        <p>Web Conference, Springer, 2013, pp. 542–557.
[44] S. Thirumuruganathan, M. Ouzzani, N. Tang,
Explaining entity resolution predictions: Where are
we and what needs to be done?, in: Proceedings of
the Workshop on Human-In-the-Loop Data
Analytics, 2019, pp. 1–6.
[45] I. P. Fellegi, A. B. Sunter, A theory for record linkage,</p>
        <p>Journal of the American Statistical Association 64
(1969) 1183–1210.
[46] T. Ebisu, R. Ichise, Graph pattern entity ranking
model for knowledge graph completion, arXiv
preprint arXiv:1904.02856 (2019).
[47] V. Christophides, V. Efthymiou, T. Palpanas, G.
Papadakis, K. Stefanidis, End-to-end entity
resolution for big data: A survey, arXiv preprint
arXiv:1905.06397 (2019).
[48] N. Koudas, S. Sarawagi, D. Srivastava, Record
linkage: similarity measures and algorithms, in:
Proceedings of the 2006 ACM SIGMOD international
conference on Management of data, 2006, pp. 802–
803.
[49] B. Pandey, P. K. Bhanodia, A. Khamparia, D. K.</p>
        <p>Pandey, A comprehensive survey of edge
prediction in social networks: Techniques, parameters
and challenges, Expert Systems with Applications
124 (2019) 164–181.
[50] M. Nayyeri, C. Xu, J. Lehmann, H. S. Yazdi,
Logicenn: A neural based knowledge graphs
embedding model with logical rules, arXiv preprint
arXiv:1908.07141 (2019).
[51] Z. Zhang, J. Cai, Y. Zhang, J. Wang, Learning
hierarchy-aware knowledge graph embeddings for
link prediction., in: Proceedings of the
ThirtyFourth AAAI Conference on Artificial Intelligence,
2020, pp. 3065–3072.
[52] H. Paulheim, C. Bizer, Type inference on noisy rdf
data, in: International semantic web conference,
Springer, 2013, pp. 510–525.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L. R.</given-names>
            <surname>Medsker</surname>
          </string-name>
          ,
          <source>Hybrid neural network and expert systems</source>
          , Springer Science &amp;amp; Business Media,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T. R.</given-names>
            <surname>Besold</surname>
          </string-name>
          , K.-U. Kühnberger,
          <article-title>Towards integrated neural-symbolic systems for human-level ai: Two research programs helping to bridge the gaps</article-title>
          ,
          <source>Biologically Inspired Cognitive Architectures</source>
          <volume>14</volume>
          (
          <year>2015</year>
          )
          <fpage>97</fpage>
          -
          <lpage>110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Marcus</surname>
          </string-name>
          ,
          <article-title>Deep learning: A critical appraisal</article-title>
          , arXiv preprint arXiv:1801.00631
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hogan</surname>
          </string-name>
          , E. Blomqvist,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cochez</surname>
          </string-name>
          , C. d'Amato, G. de Melo,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gutierrez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E. L.</given-names>
            <surname>Gayo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kirrane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Neumaier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Polleres</surname>
          </string-name>
          , et al.,
          <article-title>Knowledge graphs</article-title>
          , arXiv preprint arXiv:2003.02320
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E. N.</given-names>
            <surname>Benderskaya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Zhukova</surname>
          </string-name>
          ,
          <article-title>Multidisciplinary trends in modern artificial intelligence: Turing's way</article-title>
          ,
          <source>in: Artificial Intelligence, Evolutionary Computing and Metaheuristics</source>
          , Springer,
          <year>2013</year>
          , pp.
          <fpage>319</fpage>
          -
          <lpage>343</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>I.</given-names>
            <surname>Hatzilygeroudis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Prentzas</surname>
          </string-name>
          ,
          <article-title>Neuro-symbolic approaches for knowledge representation in expert systems</article-title>
          ,
          <source>International Journal of Hybrid Intelligent Systems</source>
          <volume>1</volume>
          (
          <year>2004</year>
          )
          <fpage>111</fpage>
          -
          <lpage>126</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D. B.</given-names>
            <surname>Lenat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Prakash</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Shepherd</surname>
          </string-name>
          ,
          <article-title>Cyc: Using common sense knowledge to overcome brittleness and knowledge acquisition bottlenecks</article-title>
          ,
          <source>AI Magazine</source>
          <volume>6</volume>
          (
          <year>1985</year>
          )
          <fpage>65</fpage>
          -
          <lpage>65</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Cullen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bryman</surname>
          </string-name>
          ,
          <article-title>The knowledge acquisition bottleneck: time for reassessment?</article-title>
          ,
          <source>Expert Systems</source>
          <volume>5</volume>
          (
          <year>1988</year>
          )
          <fpage>216</fpage>
          -
          <lpage>225</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>E.</given-names>
            <surname>Salami</surname>
          </string-name>
          ,
          <article-title>An analysis of the general data protection regulation (eu) 2016/679</article-title>
          , Available at SSRN 2966210 (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E.</given-names>
            <surname>Ntoutsi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fafalios</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Gadiraju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Iosifidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Nejdl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-E.</given-names>
            <surname>Vidal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruggieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Turini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Papadopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Krasanakis</surname>
          </string-name>
          , et al.,
          <article-title>Bias in data-driven</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>