<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>SEBD</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Neuro-Symbolic techniques for Predictive Maintenance</article-title>
        <subtitle>(Discussion Paper)</subtitle>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Angelica Liguori</string-name>
          <email>angelica.liguori@dimes.unical.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Simone Mungari</string-name>
          <email>simone.mungari@unical.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ettore Ritacco</string-name>
          <email>ettore.ritacco@uniud.it</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Ricca</string-name>
          <email>francesco.ricca@unical.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giuseppe Manco</string-name>
          <email>giuseppe.manco@icar.cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Salvatore Iiritano</string-name>
          <email>salvatore.iiritano@revelis.eu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ICAR-CNR</institution>
          ,
          <addr-line>Via P. Bucci 8-9/C, Rende, 87036</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Revelis S.r.l., Viale della Resistenza</institution>
          ,
          <addr-line>Rende, 87036</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Calabria</institution>
          ,
          <addr-line>Via P. Bucci, Rende, 87036</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Udine</institution>
          ,
          <addr-line>Via Palladio, Udine, 33100</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>31</volume>
      <fpage>02</fpage>
      <lpage>05</lpage>
      <abstract>
        <p>Predictive maintenance plays a key role in the core business of industry due to its potential in reducing unexpected machine downtime and the related costs. To avoid such issues, it is crucial to devise artificial intelligence models that can effectively predict failures. Current predictive maintenance approaches have several limitations that can be overcome by exploiting hybrid approaches such as Neuro-Symbolic techniques. Neuro-symbolic models combine neural methods with symbolic ones, leading to improvements in efficiency, robustness, and explainability. In this work, we propose to exploit hybrid approaches by investigating their advantages over classic predictive maintenance approaches.</p>
      </abstract>
      <kwd-group>
        <kwd>Predictive Maintenance</kwd>
        <kwd>Neuro-Symbolic</kwd>
        <kwd>Root Cause Analysis</kwd>
        <kwd>Logic programming</kwd>
        <kwd>Data-driven</kwd>
        <kwd>Model-based</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Nowadays, Artificial Intelligence (AI) has attracted a lot of attention in several application
domains, including industry. In particular, predictive maintenance plays a key role, since it
allows industries to preventively avoid internal system failures, as well as reduce the costs of
service interruptions. Model-based and data-driven [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] strategies are currently exploited to
support the design, optimization, diagnostic, and maintenance phases. Model-based techniques
exploit mathematical models as well as background knowledge derived from human experts.
Mathematical models make it possible to describe the relationships that govern a given environment.
Vice versa, data-driven approaches are inductive methods: models are created by generalizing
from the data (i.e., environment observations), and the aim is to define mathematical models
based on them. Since the models are derived from the data, it is crucial to have a huge amount
of data that is representative of the domain. Both approaches have limitations: the former has
scalability and performance issues, while the latter lacks interpretability and partially removes
human interaction. Hence, to exploit the potential of the two approaches, while limiting
their weaknesses, we propose to employ hybrid approaches for improving existing predictive
maintenance solutions. In this sense, after a critical review of current approaches, we devised a
list of key advantages as a starting point for our research. In particular, to improve existing
models, the novel ones should have (i) interpretability, (ii) robustness, and (iii) effectiveness
properties. We believe these goals can be achieved by developing neuro-symbolic approaches
to predictive maintenance.
      </p>
      <p>The paper is structured as follows. Section 2 explains the neuro-symbolic approach and its
potentialities by describing some state-of-the-art models. Section 3 describes the context of
predictive maintenance and its current approaches. In Section 4 we describe our proposal to
exploit neuro-symbolic approaches for improving existing predictive maintenance approaches.
Finally, we conclude the paper in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Neuro-Symbolic approaches</title>
      <p>
        Neuro-symbolic approaches are hybrid models exploiting both deductive (symbolic) and
inductive (deep learning) approaches. Combining both approaches yields models that are more
robust and accurate, as well as explainable. Kautz [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] devised a taxonomy for neuro-symbolic
integrations by dividing them according to their characteristics and how the combination is
performed. The taxonomy establishes six categories: (i) Symbolic Neuro Symbolic; (ii)
Symbolic[Neuro]; (iii) Neuro|Symbolic; (iv) Neuro:Symbolic-&gt;Neuro; (v) Neuro_{Symbolic};
(vi) Neuro[Symbolic]. Neuro-symbolic techniques are steadily gaining interest in the
research community and are being used in several areas. We now present an overview of
state-of-the-art hybrid models to show their potentialities and their different usages according
to the aforementioned taxonomy. Although in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] six categories have been identified, to the
best of our knowledge there are no works about Neuro[Symbolic], i.e., methods in which
neural networks exploit symbolic solvers inside their architecture, as shown in Figure 1.
      </p>
      <p>
        Symbolic Neuro Symbolic Models in this category are mainly used in the context of Natural
Language Processing (NLP), in which, given a token (symbol), i.e., a word in a sentence, the aim
is to generate its embedding for predicting the next tokens, classifying them (e.g., sentiment
analysis), or generating new ones. The architecture is shown in Figure 2. Mikolov et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] propose
word2vec, a neural architecture consisting of two sub-networks: CBOW and Skip-Gram. Unlike
word2vec, GLOVE [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] uses matrix factorization for generating token embeddings. In addition,
GLOVE optimizes a loss that combines word similarities based on their co-occurrences.
      </p>
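      <p>To make this category concrete, the following is a minimal sketch of learning token embeddings with word2vec via the gensim library; the toy corpus and hyperparameter values are our own illustrative assumptions, not those of the cited works.</p>
      <preformat>
# A minimal Symbolic Neuro Symbolic sketch: symbols (tokens) in, embeddings out.
# Assumes gensim is installed; corpus and hyperparameters are illustrative only.
from gensim.models import Word2Vec

corpus = [
    ["bearing", "temperature", "rises", "before", "failure"],
    ["vibration", "signals", "precede", "bearing", "failure"],
]

# sg=1 selects the Skip-Gram sub-network; sg=0 would select CBOW.
model = Word2Vec(corpus, vector_size=32, window=2, min_count=1, sg=1, epochs=50)

vector = model.wv["bearing"]            # the learned embedding (a symbol mapped to a vector)
print(model.wv.most_similar("bearing")) # nearest tokens in the embedding space
      </preformat>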
      <p>
        Symbolic[Neuro] Models within this category exploit neural architectures as sub-networks
invoked by an external symbolic solver. A classical example is AlphaGo [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], a framework that
combines a Monte Carlo tree search strategy with reinforcement learning models that learn how
to evaluate game positions. The symbolic part, represented by the Monte Carlo strategy, uses
the neural components to compute scores associated with the nodes of the tree-based structure
and then, based on the scores, continues the solution search. Garg et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] propose SymNet,
whose aim is to plan the actions of an agent. SymNet is based on a Relational Markov Decision
Process (RMDP) to represent the knowledge and uses a Dynamic Bayesian Network (DBN) for
modeling the RMDP. The DBN is converted into a graph-based structure. Then, embeddings
are generated to represent nodes and states and fed into two decoder layers that produce
probability distributions; the action with the maximum probability is chosen. It is crucial
to define methods for representing the knowledge base; for example, in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], it is represented
through sparse matrices.
      </p>
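      <p>The interplay can be sketched as follows: a symbolic search routine drives the exploration and queries a neural evaluator for position scores. The toy game, the move generator, and the stub standing in for a trained value network are all illustrative assumptions, not the cited systems.</p>
      <preformat>
# Symbolic[Neuro] sketch: a symbolic tree search delegates position scoring
# to a neural value function (here a stub standing in for a trained network).

def neural_value(state):
    # Placeholder for a trained value network, e.g., AlphaGo-style evaluation.
    return -abs(sum(state)) / (1 + len(state))

def expand(state):
    # Placeholder move generator: each move appends +1 or -1 to the state.
    return [state + [move] for move in (+1, -1)]

def search(state, depth):
    """Depth-limited negamax: the symbolic solver drives the search,
    the neural component scores the leaves."""
    if depth == 0:
        return neural_value(state)
    return max(-search(child, depth - 1) for child in expand(state))

# Pick the move whose subtree scores best according to the neural evaluator.
best = max(expand([]), key=lambda child: -search(child, depth=3))
print("chosen move:", best[-1])
      </preformat>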
      <p>
        Neuro|Symbolic Differently from the previous category (Symbolic[Neuro]), where neural
networks are "sub-networks", here the models interact as equals in the global architecture. Figures
3a and 3b show the two categories. Models in Neuro|Symbolic are mainly employed for two
tasks: planning and question-answering. Planning. Yang et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] propose a unified framework,
PEORL, that integrates symbolic planning with reinforcement learning for identifying the
actions an agent will perform in a given environment. An improved version is presented in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] in
which an intrinsic reward is introduced to optimize the reinforcement learning-based model. In
PLANS [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], a neural architecture is used for generating an action list starting from visual data.
Then, the obtained outputs are fed into a rule-based solver that produces the sequence of final
actions that an agent will perform. In addition, a filtering system is used to keep only outputs for
which the probability is above a given threshold. Symbolic Options for Reinforcement Learning
(SORL) is proposed in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], in which the authors assume that a function maps the environment
state into a set of symbolic states manipulated by a meta-controller and a symbolic planner for
producing the actions that an agent will perform. Question-answering. In [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] the authors propose a hybrid
model named Neural Symbolic Reader (NeRd), whose architecture is an encoder-decoder. The
encoder creates embeddings from the input question, while the decoder generates a symbolic
program. The final answer is obtained by feeding the program to a symbolic solver. In [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ],
the authors present a neuro-symbolic model for visual question answering. An important
advantage of both [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ] is the interpretability of the results, since the symbolic program used
for generating the final answers is self-explanatory.
      </p>
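      <p>A minimal sketch of this interaction, in the spirit of the PLANS-style pipeline described above; the action names, probabilities, threshold, and ordering rule are illustrative assumptions.</p>
      <preformat>
# Neuro|Symbolic sketch: a neural module proposes candidate actions with
# probabilities, a symbolic rule filters and orders them into a plan.

THRESHOLD = 0.6

# Stand-in for neural output: (action, probability) pairs.
proposals = [("open_gripper", 0.91), ("move_left", 0.42), ("lift_arm", 0.77)]

# Step 1: keep only proposals the neural module is confident about.
confident = [a for a, p in proposals if p > THRESHOLD]

# Step 2: a symbolic constraint orders/validates the surviving actions,
# e.g., a rule stating the gripper must open before the arm lifts.
PRECEDENCE = {"open_gripper": 0, "lift_arm": 1}
plan = sorted(confident, key=lambda a: PRECEDENCE.get(a, 99))

print(plan)  # ['open_gripper', 'lift_arm']
      </preformat>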
      <sec id="sec-2-1">
        <title>Symbolic Module</title>
        <p>Symbolic
Module
Neural
Module
(b)</p>
        <p>
          Neuro:Symbolic→Neuro This category includes all architectures in which the knowledge
base is processed by a neural model. Figure 4a shows the architecture of such an approach. In
[
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], the authors propose a language model, named Neural-Symbolic Language Model, whose
aim is to improve the inductive bias. Neural Markov Logic Networks (NMLN) are developed in
[
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. Markov Logic Networks are probabilistic models in which logic is used to represent data
and statistics are used for prediction tasks. NMLN exploit neural networks for estimating probability
distributions that govern the logic rules through min-max entropy. In [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], a framework, named
KRISP, combining symbolic (explicit) and implicit knowledge is proposed. Given an image as
input, the model extracts symbolic information, called visual concepts, which is then combined
with a knowledge graph (KG). A Graph Neural Network and a Transformer architecture are used
to estimate the probability distributions of answers.
        </p>
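        <p>As a minimal illustration of this category (not the architecture of the cited systems), the following sketch embeds symbolic facts, i.e., knowledge-graph triples, so that a differentiable scorer can process them; the toy knowledge base, embedding size, and TransE-style scoring are our own assumptions.</p>
        <preformat>
import numpy as np

# Neuro:Symbolic->Neuro sketch: symbolic facts (triples) are embedded and
# processed by a differentiable scorer (TransE-style: head + relation
# should land close to tail). Toy KB and dimensions are illustrative.

rng = np.random.default_rng(0)
entities = {"motor": 0, "bearing": 1, "failure": 2}
relations = {"has_part": 0, "causes": 1}

E = rng.normal(size=(len(entities), 8))   # entity embeddings
R = rng.normal(size=(len(relations), 8))  # relation embeddings

def score(h, r, t):
    """Lower is more plausible: distance between (head + relation) and tail."""
    return np.linalg.norm(E[entities[h]] + R[relations[r]] - E[entities[t]])

# Symbolic facts become inputs/signals for the neural side (here just scored).
print(score("motor", "has_part", "bearing"))
print(score("bearing", "causes", "failure"))
        </preformat>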
        <p>
          Neuro_{Symbolic} Systems that integrate logic rules into neural network weights are
included in this category; an architectural example is shown in Figure 4b. In [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] the authors
propose Logic Tensor Networks (LTN), whose aim is to find a way to differentiate logic rules
based on the first-order logic formalism. To differentiate logic operators, the authors consider the
method proposed in [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] in which differentiable operations are defined instead of using logic
operations. Hoernle et al. [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] propose Multiplexnet, which integrates logic constraints into the
neural network computation to guide its training. Assuming that the rules are in Disjunctive
Normal Form (DNF), the goal is to find a data transformation that makes the DNF satisfied.
To do this, the activation functions are modified and a component representing the degree of
violation of the constraints is added to the loss. [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] integrates hard constraints into the neural
network. Neural Logic Machines (NLM) [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] combine inductive and deductive approaches. The
grounding of the predicates is converted into boolean tensors that are manipulated by exploiting
differentiable operators defined by meta-rules. In [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] the authors develop SATNet to address
MAXSAT problems by using a neural network.
        </p>
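        <p>The following sketch illustrates the general idea of this category: a logic rule is relaxed into a differentiable penalty added to the training loss. The rule, the network outputs, and the weighting are illustrative assumptions and do not reproduce any specific cited method.</p>
        <preformat>
import numpy as np

# Neuro_{Symbolic} sketch: a logic rule relaxed into a differentiable
# penalty added to the loss, in the spirit of LTN/Multiplexnet-style methods.

def implies(a, b):
    # Product-logic relaxation of "a implies b"; equals 1 when fully satisfied:
    # P(not a or b) = 1 - a + a*b under independence.
    return 1.0 - a + a * b

# Stand-ins for two sigmoid outputs of a network on a batch:
p_overheat = np.array([0.9, 0.2, 0.7])  # P(component overheats)
p_alarm    = np.array([0.8, 0.1, 0.3])  # P(alarm is raised)

# Constraint: overheat implies alarm. Violation contributes to the loss.
constraint_satisfaction = implies(p_overheat, p_alarm)
logic_loss = np.mean(1.0 - constraint_satisfaction)

task_loss = 0.42  # placeholder for the usual supervised loss term
total_loss = task_loss + 0.5 * logic_loss  # 0.5 is an assumed weighting
print(total_loss)
        </preformat>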
    </sec>
    <sec id="sec-3">
      <title>3. Predictive Maintenance</title>
      <p>Maintenance policies are fundamental within industries; their aims are manifold: anticipating
failures, reducing costs associated with unplanned downtime, and providing methodologies for
restoring systems. They offer data-driven insights that allow organizations to proactively manage
their equipment and address such issues. In particular, predictive maintenance aims to continuously
monitor the data provided by sensors installed in the system in order to identify failures and take
action to restore the operational status. Predictive maintenance can be summarized in three
key steps: (1) monitoring systems' parameters; (2) setting parameter thresholds for identifying
anomalies; (3) defining methods and tools to recover systems' functionalities. Methods and
tools for predictive maintenance are constantly evolving; in particular, state-of-the-art methods
exploit automatic algorithms for (i) preprocessing sensor data, and (ii) evaluating the systems'
correct functioning. In the area of predictive maintenance, the common algorithmic approaches
are: Model-based and Data-driven.</p>
      <p>Model-based. Model-based approaches use mathematical models for identifying failures;
they exploit a knowledge base given by experts within a specific topic. In particular, the overall
mechanism of model-based approaches can be depicted in three sequential steps, as shown
in Figure 5. Given a component c, a timestamp t, a parameter y describing the regular
performance level of c at time t, and a second parameter ŷ representing the real performance level, we
can measure the residual by computing r = y − ŷ. Then, r is passed through a set of logic
rules (defined by experts) able to identify anomalies. The final stage, Fault Diagnosis, utilizes
the output from the preceding step to obtain additional data in order to examine the present
malfunction and proactively mitigate any prospective irregularities. Human knowledge may
be represented as symbols by formal languages such as First Order Logic (FOL) or Propositional
Logic. Symbolic systems exploit a deductive approach: starting from a general knowledge base
(KB) within a specific topic, it is possible to infer new knowledge.</p>
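      <p>A minimal sketch of the three-step pipeline described above; the component name, threshold, and expert rule are illustrative assumptions.</p>
      <preformat>
# Model-based sketch of the Figure 5 pipeline: residual generation, residual
# evaluation against an expert rule, and a (toy) fault diagnosis step.

def residual(y_expected, y_measured):
    # Residual generation: r = y - y_hat
    return y_expected - y_measured

def evaluate(r, threshold=5.0):
    # Residual evaluation: an expert-defined rule flags anomalous residuals.
    return abs(r) > threshold

def diagnose(component, r):
    # Fault diagnosis: gather context for the flagged anomaly.
    return {"component": component, "residual": r, "action": "inspect"}

y_expected, y_measured = 80.0, 68.5   # e.g., regular vs. observed level at time t
r = residual(y_expected, y_measured)
if evaluate(r):
    print(diagnose("pump_3", r))
      </preformat>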
      <p>Data-driven. Inductive approaches (such as data-driven ones) perform bottom-up strategies:
starting from observations, they build models able to generalize the observations to a larger
population, as well as to infer new data. Among data-driven approaches, artificial intelligence
has been exploited for its capability to create models able to analyze large datasets in order to
identify anomalous patterns. In predictive maintenance, the concept of "anomalous patterns"
usually coincides with the concept of "system inefficiency", hence the rise of possible
failures. Various kinds of observations are used, such as data coming from sensors (temperature,
humidity, speed), and logic data. To develop efficient data-driven systems, a huge amount of
data is needed, as well as massive computing power.</p>
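      <p>For comparison, a minimal data-driven sketch: an anomaly detector is fitted on historical sensor observations and applied to new readings. The synthetic data and the choice of Isolation Forest are illustrative assumptions; any detector trained on monitoring data would fit this scheme.</p>
      <preformat>
from sklearn.ensemble import IsolationForest
import numpy as np

# Data-driven sketch: learn "normal" sensor behavior from observations and
# flag anomalous patterns. Data and model choice are illustrative only.

rng = np.random.default_rng(0)
normal = rng.normal(loc=[60.0, 0.3], scale=[2.0, 0.05], size=(500, 2))  # temp, vibration
model = IsolationForest(random_state=0).fit(normal)

new_readings = np.array([[61.2, 0.31],   # plausible operating point
                         [92.0, 1.40]])  # likely failure precursor
print(model.predict(new_readings))       # 1 = normal, -1 = anomalous
      </preformat>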
    </sec>
    <sec id="sec-4">
      <title>4. Weaknesses of existing methods and our proposal</title>
      <p>Current predictive maintenance approaches (model-based and data-driven) have several
limitations; hence, we provide an overview of their main weaknesses. To overcome those issues, we
propose to create predictive maintenance solutions exploiting hybrid techniques based on three
main key points. Moreover, we present a possible use-case scenario.</p>
      <sec id="sec-4-1">
        <title>4.1. Limitations of current approaches</title>
        <p>Data-driven approaches generally yield higher-quality outcomes. Model-based approaches offer a
straightforward interpretation due to the fact that the parameters corresponding to physical
phenomena within systems match behavioral models. Nonetheless, generating
accurate models proves difficult in practice, particularly when facing complex systems in
which various types of physical phenomena take place. Even when such a model exists, it
generally represents a particular physical phenomenon observed under specific
experimental conditions. As a consequence, conducting experiments for different operating
conditions can be expensive, limiting the potential use of this approach. However, the widespread
availability of sensors and increased computing power have facilitated the use of artificial
intelligence techniques, leading to the proliferation of data-driven methods. These methods
utilize artificial intelligence tools to transform monitoring data into behavioral models.
Data-driven approaches provide a trade-off in terms of complexity, cost, accuracy, and applicability.
When compared to model-based approaches, data-driven approaches are suitable for systems in
which obtaining monitoring data that represents degradation behavior proves accessible. Yet,
one of the limitations of data-driven methods is the potentially long learning time required.
In terms of accuracy, data-based methods offer less precise results than model-based methods,
but they are less complex and therefore more applicable. Here we provide a summary of the
aforementioned issues.</p>
        <p>• Model-based: (i) experts can make assumptions that may not always reflect the
complexity of the real-world problems they are trying to solve; (ii) specific and deep knowledge
can lead to high costs for industries; (iii) poor results.
• Data-driven: (i) real data can be noisy, inconsistent, and sparse, which can lead models to
overfit or to develop biases; (ii) black-box data-driven models are not explainable; (iii)
historical data may not be fully representative of real-world scenarios.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Neuro-symbolic advantages</title>
        <p>
          Interpretability Interpretability is the main characteristic of deductive approaches, as in
such models it is easy to understand the reasoning behind predictions, while the black-box
nature of neural networks and data-driven approaches makes them less interpretable, i.e., other
techniques are needed to interpret the generated outputs. Several application scenarios, such
as bioinformatics, robotics, and so on, need effective and interpretable models. For instance,
when a disease is predicted, it is crucial to have an accurate prediction (effectiveness) as well
as to understand which factors led to that prediction (interpretability). Therefore, it is desirable
to use hybrid approaches, i.e., neuro-symbolic methods, that combine the power of neural
networks and the interpretability of logic formalisms (deductive approaches) [
          <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
          ]. Within
the explainability area, the most common representations of the knowledge are (i) Knowledge
Graphs (KGs), i.e., graph-based structures, and (ii) trees, i.e., tree-based structures. For instance, in
[
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] a neuro-symbolic model is used for stock prediction. The authors utilize a knowledge graph
for representing relationships among financial events. The proposed strategy can be positioned
within the Neuro:Symbolic-&gt;Neuro categorization (see Section 2); it allows for improving
the performance in stock prediction tasks as well as providing reasons for price variations
through the generation of new nodes. Tree-based structures are employed in techniques such
as counterfactual explanations and surrogate models.
        </p>
        <p>Performance Within model-based approaches, symbolic solvers and reasoning techniques
are widely exploited. They can derive new data given a starting knowledge base, while neural
networks often depend on training data, i.e., they provide high performance on data that are
similar to the training examples, but when dealing with different and more complex data, the
performance decreases dramatically. This problem is widely known within the model-based
approaches. Integrating knowledge derived from symbolic models into neural network
architectures can increase performance. In this sense, such integration could be exploited for (i) guiding
the neural network training, (ii) filtering its output through logic constraints, and (iii)
improving performance by integrating symbolic-based data structures (such as knowledge
graphs) with the observations.</p>
        <p>
          Robustness Training data for data-driven models may be incomplete and not fully
representative. This can make systems overly sensitive to input data, i.e., varying a single pixel within
an image can completely change the result. Narrow intelligence and pointillistic intelligence
[
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] are both artificial intelligence categories in which models lack robustness from different
viewpoints. In narrow intelligence, models are devised for specific tasks, e.g., computer vision
applications or natural language processing. Although these models exhibit high performance
within their respective categories, their effectiveness may drastically decrease when applied to
similar contexts. Additionally, adapting these models for transfer learning methodologies can be
a challenging endeavor. In pointillistic intelligence, models show great results but may fail in
unpredictable cases. Lack of generalization and flexibility can lead to such issues. Neuro-symbolic
approaches integrate human expertise and real-world observations in order to create accurate and
robust systems, overcoming problems related to noise within data and bias produced by human
assumptions on complex real-world systems.
        </p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Use-cases</title>
        <p>
          Only a few tentative approaches for hybridizing these techniques have been proposed [
          <xref ref-type="bibr" rid="ref25 ref26">25, 26</xref>
          ]. In particular, within the
root cause analysis area, i.e., a sub-area of predictive maintenance whose aim is to analyze and
study the root causes of faults or problems, Abele et al. [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] proposed a neuro-symbolic architecture
for the alarm flooding problem, which can be positioned in the Neuro|Symbolic category (see
Section 2). Data are represented by using a Bayesian Network (BN): specifically, a directed
acyclic graph (DAG) G = (V, E) is employed, in which V represents the set of vertices/nodes
and E is the edge set. Each node v ∈ V corresponds to a random variable in the BN,
while the edges represent the conditional dependencies among nodes. Prolog, a logic rule engine, is used
to infer the relationships among entities and to create G. The stages of the neuro-symbolic
process are: (i) starting from the knowledge base, the graph is generated; (ii) exploiting active learning,
the initial dataset is augmented; (iii) a Bayesian classifier is trained on the augmented dataset;
(iv) a machine learning-based model is used to assess the graph generated at the beginning.
Moreover, we further propose a generic use-case exploiting the Symbolic[Neuro] strategy
(see Section 2). The aim is to monitor the operational status of a train. Suppose a failure occurs,
e.g., a carriage door is blocked. Given an ontology describing all components of the train, an
expert defines the relationships among components by using a logical formalism, modeling them
as a tree-based structure. Thanks to the neuro-symbolic approach, it is possible to define a
"formal" representation of the relationships among components. In addition, a machine/deep
learning-based model, such as an auto-encoder, could be used for anomaly detection in each
component. In this context, a classifier C is used for identifying anomalous behaviors based on
the historical data of a given component, and a tree-based structure T represents the relationships
among the components. The proposed approach allows (i) recognizing anomalies within single
components by exploiting C; (ii) identifying which causes determined the failure through T;
and (iii) explaining the whole process of root cause analysis.
        </p>
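        <p>A minimal sketch of this use-case, combining per-component anomaly detection via C with the tree-based structure T to locate and explain the root cause; the component tree, the classifier stub, and the readings are illustrative assumptions.</p>
        <preformat>
# Sketch of the proposed train use-case: per-component anomaly classifiers (C)
# combined with a tree of component relationships (T) for root cause analysis.

TREE = {"train": ["carriage_1"], "carriage_1": ["door_actuator", "door_sensor"]}

def is_anomalous(component, reading):
    # Stand-in for C, e.g., an auto-encoder's reconstruction-error test
    # trained on the component's historical data.
    return reading > 0.8

def root_causes(node, readings, path=()):
    """Walk T top-down; report the deepest anomalous components together
    with the path that explains how the failure propagates."""
    causes = []
    children = TREE.get(node, [])
    for child in children:
        causes += root_causes(child, readings, path + (node,))
    if not children and is_anomalous(node, readings.get(node, 0.0)):
        causes.append({"cause": node, "explanation": path + (node,)})
    return causes

readings = {"door_actuator": 0.95, "door_sensor": 0.1}  # blocked-door scenario
print(root_causes("train", readings))
        </preformat>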
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>Neuro-symbolic approaches show promise in the field of predictive maintenance. We
aim to further explore and develop these techniques, which have the potential to overcome
some of the limitations of the traditional model-based and data-driven approaches. The final
goal is to effectively deliver novel neuro-symbolic models for predictive maintenance, so that
one can leverage new architectures that prioritize interpretability and robustness while
maintaining high performance.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work was partially supported by the Italian Ministry of Industrial Development (MISE)
under project MAP4ID “Multipurpose Analytics Platform 4 Industrial Data”, N.
F/190138/0103/X44, by the Italian Ministry of Research (MUR) under PNRR project PE0000013-FAIR, Spoke
9 - Green-aware AI - WP9.1, and by the Departmental Strategic Plan (PSD) of the University of
Udine - Interdepartmental Project on Artificial Intelligence (2020-25).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>T.</given-names>
            <surname>Zonta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. A.</given-names>
            <surname>da Costa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>da Rosa Righi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>de Lima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. S.</given-names>
            <surname>da Trindade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. P.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Predictive maintenance in the industry 4.0: A systematic literature review</article-title>
          ,
          <source>Computers &amp; Industrial Engineering</source>
          <volume>150</volume>
          (
          <year>2020</year>
          )
          <fpage>106889</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kautz</surname>
          </string-name>
          ,
          <article-title>The third AI summer: AAAI Robert S. Engelmore Memorial Lecture</article-title>
          ,
          <source>AI Magazine</source>
          <volume>43</volume>
          (
          <year>2022</year>
          )
          <fpage>105</fpage>
          -
          <lpage>125</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Corrado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <article-title>Efficient estimation of word representations in vector space</article-title>
          ,
          <source>in: ICLR (Workshop Poster)</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pennington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Socher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          ,
          <article-title>Glove: Global vectors for word representation</article-title>
          , in: EMNLP, ACL,
          <year>2014</year>
          , pp.
          <fpage>1532</fpage>
          -
          <lpage>1543</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Silver</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Maddison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Guez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sifre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>van den Driessche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schrittwieser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Antonoglou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Panneershelvam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lanctot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dieleman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Grewe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kalchbrenner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Sutskever</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. P.</given-names>
            <surname>Lillicrap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Leach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kavukcuoglu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Graepel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hassabis</surname>
          </string-name>
          ,
          <article-title>Mastering the game of go with deep neural networks and tree search</article-title>
          ,
          <source>Nat</source>
          .
          <volume>529</volume>
          (
          <year>2016</year>
          )
          <fpage>484</fpage>
          -
          <lpage>489</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Garg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bajpai</surname>
          </string-name>
          ,
          <string-name>
            <surname>Mausam</surname>
          </string-name>
          ,
          <article-title>Symbolic network: Generalized neural policies for relational mdps</article-title>
          , in: ICML, volume
          <volume>119</volume>
          <source>of Proceedings of Machine Learning Research, PMLR</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>3397</fpage>
          -
          <lpage>3407</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>W. W.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Hofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Siegler</surname>
          </string-name>
          ,
          <article-title>Scalable neural methods for reasoning with a symbolic knowledge base</article-title>
          ,
          <source>in: ICLR</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lyu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gustafson</surname>
          </string-name>
          ,
          <article-title>PEORL: integrating symbolic planning and hierarchical reinforcement learning for robust decision-making</article-title>
          , in: IJCAI, ijcai.org,
          <year>2018</year>
          , pp.
          <fpage>4860</fpage>
          -
          <lpage>4866</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Lyu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gustafson</surname>
          </string-name>
          ,
          <article-title>SDRL: interpretable and data-efficient deep reinforcement learning leveraging symbolic planning</article-title>
          , in: AAAI, AAAI Press,
          <year>2019</year>
          , pp.
          <fpage>2970</fpage>
          -
          <lpage>2977</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R.</given-names>
            <surname>Dang-Nhu</surname>
          </string-name>
          ,
          <article-title>PLANS: neuro-symbolic program learning from videos</article-title>
          , in: NeurIPS,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Zhuo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>Creativity of AI: automatic symbolic option discovery for facilitating deep reinforcement learning</article-title>
          ,
          <source>in: AAAI</source>
          , AAAI Press,
          <year>2022</year>
          , pp.
          <fpage>7042</fpage>
          -
          <lpage>7050</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. W.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <article-title>Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension</article-title>
          , in: ICLR,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Torralba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kohli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tenenbaum</surname>
          </string-name>
          ,
          <article-title>Neural-symbolic VQA: disentangling reasoning from vision and language understanding</article-title>
          ,
          <source>in: NeurIPS</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1039</fpage>
          -
          <lpage>1050</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Demeter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Downey</surname>
          </string-name>
          ,
          <article-title>Just add functions: A neural-symbolic language model</article-title>
          , in: AAAI, AAAI Press,
          <year>2020</year>
          , pp.
          <fpage>7634</fpage>
          -
          <lpage>7642</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>G.</given-names>
            <surname>Marra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kuzelka</surname>
          </string-name>
          ,
          <article-title>Neural markov logic networks</article-title>
          ,
          <source>in: UAI</source>
          , volume
          <volume>161</volume>
          <source>of Proceedings of Machine Learning Research</source>
          , AUAI Press,
          <year>2021</year>
          , pp.
          <fpage>908</fpage>
          -
          <lpage>917</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>K.</given-names>
            <surname>Marino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Parikh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rohrbach</surname>
          </string-name>
          ,
          <article-title>KRISP: integrating implicit and symbolic knowledge for open-domain knowledge-based VQA</article-title>
          , in: CVPR, Computer Vision Foundation / IEEE,
          <year>2021</year>
          , pp.
          <fpage>14111</fpage>
          -
          <lpage>14121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Badreddine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>d'Avila Garcez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Serafini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Spranger</surname>
          </string-name>
          ,
          <article-title>Logic tensor networks</article-title>
          ,
          <source>Artif. Intell</source>
          .
          <volume>303</volume>
          (
          <year>2022</year>
          )
          <fpage>103649</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>E. van Krieken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Acar</surname>
          </string-name>
          ,
          <string-name>
            <surname>F. van Harmelen</surname>
          </string-name>
          ,
          <article-title>Analyzing differentiable fuzzy logic operators</article-title>
          ,
          <source>Artif. Intell</source>
          .
          <volume>302</volume>
          (
          <year>2022</year>
          )
          <fpage>103602</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>N.</given-names>
            <surname>Hoernle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Karampatsis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Belle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Gal</surname>
          </string-name>
          ,
          <article-title>Multiplexnet: Towards fully satisfied logical constraints in neural networks</article-title>
          ,
          <source>in: AAAI</source>
          , AAAI Press,
          <year>2022</year>
          , pp.
          <fpage>5700</fpage>
          -
          <lpage>5709</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>E.</given-names>
            <surname>Giunchiglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lukasiewicz</surname>
          </string-name>
          ,
          <article-title>Multi-label classification neural networks with hard logical constraints</article-title>
          ,
          <source>J. Artif. Intell. Res</source>
          .
          <volume>72</volume>
          (
          <year>2021</year>
          )
          <fpage>759</fpage>
          -
          <lpage>818</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>H.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>Neural logic machines</article-title>
          , in: ICLR (Poster), OpenReview.net,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>P.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Donti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Wilder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Z.</given-names>
            <surname>Kolter</surname>
          </string-name>
          ,
          <article-title>SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver</article-title>
          , in: ICML, volume
          <volume>97</volume>
          <source>of Proceedings of Machine Learning Research, PMLR</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>6545</fpage>
          -
          <lpage>6554</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>S.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Z.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Knowledge-driven stock trend prediction and explanation via temporal convolutional network</article-title>
          , in: WWW (Companion Volume), ACM,
          <year>2019</year>
          , pp.
          <fpage>678</fpage>
          -
          <lpage>685</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>G.</given-names>
            <surname>Marcus</surname>
          </string-name>
          ,
          <article-title>The next decade in AI: four steps towards robust artificial intelligence</article-title>
          , CoRR abs/2002.06177 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>L.</given-names>
            <surname>Abele</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Anic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gutmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Folmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kleinsteuber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Vogel-Heuser</surname>
          </string-name>
          ,
          <article-title>Combining knowledge modeling and machine learning for alarm root cause analysis</article-title>
          ,
          <source>in: MIM, International Federation of Automatic Control</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>1843</fpage>
          -
          <lpage>1848</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>M.</given-names>
            <surname>Saez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. P.</given-names>
            <surname>Maturana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Barton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Tilbury</surname>
          </string-name>
          ,
          <article-title>Anomaly detection and productivity analysis for cyber-physical systems in manufacturing</article-title>
          ,
          <source>in: 13th IEEE Conference on Automation Science and Engineering, CASE 2017, Xi'an, China, August 20-23</source>
          ,
          <year>2017</year>
          , IEEE,
          <year>2017</year>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>