<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Tree Visualization of Patient Information for Explainability of AI Outputs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sandeep Ramachandra</string-name>
          <email>sandeep.ramachandra@ugent.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Vander Mijnsbrugge</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pieter-Jan Lammertyn</string-name>
          <email>pieter-jan.lammertyn@azdelta.be</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stijn Dupulthys</string-name>
          <email>stijn.dupulthys@azdelta.be</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Femke Ongenae</string-name>
          <email>femke.ongenae@ugent.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sofie Van Hoecke</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IDLab, Ghent University-imec</institution>
          ,
          <addr-line>Ghent</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Radar, AZ Delta</institution>
          ,
          <addr-line>Roeselare</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>Knowledge graphs (KGs) and ontologies can be leveraged to efficiently convey information and provide a great aid in explaining the outcomes of neural networks in the healthcare domain. In this short research paper, we introduce a novel approach that encodes patient information and expert knowledge of diseases into a single temporal graph, which enables seamless integration into neural networks. Furthermore, we present a visualization tool that explains the output generated by these networks, leading to a better understanding of the provided decisions by healthcare professionals and other stakeholders.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable AI</kwd>
        <kwd>Visualizations</kwd>
        <kwd>Dynamic graphs</kwd>
        <kwd>Temporal graphs</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Patient health records are multi-modal in nature, containing a combination of structured
data (e.g. performed procedures and made diagnoses), free text (e.g. doctor notes), and higher
dimensional data, such as images (e.g. X-ray and CAT scans) or time series (e.g. heart beat
measurements). Moreover, patient information is dynamic in nature and constantly evolves over
time, e.g. changing laboratory test results over subsequent hospital visits, and/or the condition
of the patient changing over time. This makes patient information an inherently complex data
source to work with.</p>
      <p>
        To enable proper analysis, e.g. with Machine Learning (ML), it is important to be able to
represent this multi-modal data in a data structure which makes the correlations between
the data explicit. Due to their expressive power, Knowledge Graphs (KGs) and ontologies
have become increasingly popular to encode such patient information [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Moreover, their
graph representation allows visual and easily understandable interpretation of the information.
An ontology enables us to additionally encode the established expert-knowledge about the
healthcare domain and connect it to the data in the KG, allowing for the incorporation of
correlations between symptoms and diseases in the KG. Leveraging the incorporated prior
knowledge empowers more effective data analytics and enhances performance. On the other
hand, the utilization of timing information in KGs, typically represented using datetime nodes
attached within the same graph, proves less optimal when analyzing patient information, as
the timing information represents a change in the graph itself. To facilitate this
interpretation, the patient information can be represented as graphs that dynamically change
over time rather than using time nodes in the complete graph.
      </p>
      <p>
        At the same time, there has been a surge in the popularity of machine learning (ML) methods
and graph embedding techniques that empower the execution of advanced data analytics
using these KGs. Prominent examples include Graph Convolutional Networks (GCNs) and
RDF2Vec [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In critical domains like healthcare, eXplainable AI (XAI) is gaining increasing
importance [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. As any erroneous output can be harmful to the patient, the healthcare expert,
and potentially other non-AI experts (e.g. care provider, patient), should be able to clearly trace
the significant input features contributing to the output of the ML model. Furthermore, it is essential
that this explanation aligns with, or is substantiated by, the expert knowledge of the domain,
which is captured in the ontology. While some recent graph embedding methods try to retain
the interpretable aspects of KGs, e.g. INK [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], most graph embedding methods are considered
black-box, especially the ones based on deep learning techniques, such as GCNs. The latter are
therefore often combined with post-hoc explanation methods aiming to elucidate the model’s
output for specific inputs, such as SHAP [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and Saliency Maps [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. To do so, these methods
offer insights into the contribution of each input feature towards generating the final output,
and thus into an importance factor of each feature. This feature contribution can also be used
in KGs to visualize the importance factor of each node in the KG using a color scale over the
post-hoc explanation of the output of the AI for any given task. Prior works on visualization and
post-hoc explanations for KGs do not take into account the dynamic temporal patient data [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Therefore, in this short research paper, we propose a novel methodology to encode dynamic
temporal patient information in a KG and to visualize the output of the graph embedding model
in a clear and understandable manner, by visualizing the real-world patient information along
the time axis as well as the importance factors for each entity in the patient graphs,
for ease of comparison by a healthcare expert.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>There are two domains coming together in this paper, namely patient information encoding in
a KG optimized for data analytics, and graph visualization optimized for exploration by domain
experts interpreting the output of these analytics. We therefore explore the state-of-the-art in
these two domains below.</p>
      <p>
        <bold>Graph encoding of patients.</bold> With the design of ontologies, such as SNOMED [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and
OMOP [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], a lot of work has already been performed on representing patient information in KGs.
However, most of these representations ignore the temporal dimension of patient data and fail
to fully harness the intrinsic explainability offered by graph representations. In Choi et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ],
the authors recognised that EHR data is multilevel, with diagnoses being related to certain
treatments, and utilised this property to improve the performance of their model. However, most
concepts are interrelated and lack fixed hierarchical levels. This shortcoming is
addressed by [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] by representing patient visits as graph structures. In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], the
authors combine a patient graph with ontologies to improve the quality of the embeddings of
the concepts. These papers show the potential of graph neural networks with patient data but
fail to explain their output, which is critical in the healthcare domain.
      </p>
      <p>
        <bold>Graph visualization.</bold> There is no one-size-fits-all solution for graph visualization [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
Every use case for ontology and KG visualization utilises different tools and focuses on different
aspects of visualization. For example, VOWLExplain [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] utilised WebVOWL [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], a web based
ontology visualization tool, to visualize patient information from The Cancer Genome Atlas
and visually explain the recommendations of an AI model. Their user study showed that the
graph explanations of AI recommendations regarding the patient were equally accurate and
comprehensible compared to textual explanations. While the paper showcases the explainability
inherent to KGs, it did not explore the temporal aspect of patient information. In [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], the
Neo4j graph database is explored for a healthcare case where they showcase the patient’s
progression along the time axis. The presentation was largely exploratory and did not use the
explainability offered by graphs. To conclude, the state of the art focuses on either temporal
KGs or on explainability; we are the first to tackle the combination of both.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology for representing and visualizing patient information in a KG optimized for explainable AI</title>
      <p>In this section, we first highlight the requirements for the proposed KG representation method
to enable XAI with Graph Neural Networks (GNN), and then dive deeper into the proposed
methodology itself and the accompanying visualization.</p>
      <sec id="sec-3-1">
        <title>3.1. Requirements</title>
        <p>The requirements for the encoding of a patient graph are:
• Dynamic Temporal view: The patient graph should be separable into different time
steps, since patient data is added incrementally over time, e.g. per visit to the hospital.
Real-world patient data is constantly updated according to the changing condition of
the patient and performed procedures, e.g. new laboratory results, changes in medication,
and new diagnoses. The visualization should reflect this. Furthermore, this incremental
nature should result in a much less cluttered representation of the data, especially after
prolonged periods of time.
• Tree view: Ideally, the patient data is structured in a tree-based manner, as this accurately
reflects the observational nature of the data. Namely, each specific diagnosis, treatment,
observation, laboratory observation, etc. should be unique in the representation, as they
are likely to be repeated across visits, e.g. chemotherapy is a treatment that is repeated
many times over a given period, where each instance of the treatment carries relevant
information. The patient itself represents the trunk of the tree. Each visit then represents
a branch in the tree, with all the data connected to that visit, e.g. observations, treatments,
and diagnoses made, as subbranches within that branch, resulting in a hierarchical tree
structure with variable-length branches.
• Ontology: It is important to be able to link the data in the KG to the prior knowledge
encoded in the ontology, as this delivers important information to the XAI. For example,
each diagnosis and treatment in the procedure are linked to each other in the ontology.</p>
        <p>Figure 1: (a) Triples generation, (b) Time separation.</p>
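<p>The Tree view requirement above can be illustrated with a minimal Python sketch; the concept names, the unique-suffix scheme, and the nesting are hypothetical examples, not the paper's actual data model:</p>

```python
# Hypothetical patient tree: the patient is the trunk, each visit a branch,
# and observations/treatments/diagnoses are subbranches of that visit.
patient_tree = {
    "patient/123": {
        "visit/2023-01-10": {
            "diagnosis": ["lung_cancer#1"],
            "treatment": ["chemotherapy#1"],
            "lab": ["hemoglobin#1"],
        },
        "visit/2023-02-14": {
            # chemotherapy is repeated, so each instance gets a unique suffix
            "treatment": ["chemotherapy#2"],
            "lab": ["hemoglobin#2"],
        },
    }
}

def branch_lengths(tree):
    """Count the subbranches per visit: branches have variable length."""
    visits = next(iter(tree.values()))
    return {visit: sum(len(v) for v in data.values())
            for visit, data in visits.items()}
```

<p>Note how the two chemotherapy instances stay distinct nodes, so repeated treatments are never conflated in the representation.</p>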
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Methodology</title>
        <p>
          Our novel method to generate a patient graph containing the needed information meeting the
requirements set out in Section 3.1 consists of the following 4 steps:
1. Triple generation: As a first step, the multi-modal data has to be collected from their
original representation, e.g. a relational database or data lake, into an intermediary
representation so that one patient’s data (Conditions, Treatments, Drugs, Labs, Observations)
is encapsulated in one object along with their associated timestamp. Each concept node
has to be made unique so that the nodes do not get conflated in the graph (See Tree
requirement in Section 3.1). As the ontology, the OMOP (Observational Medical Outcomes
Partnership) [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] ontology is used. Using relationships set out in this OMOP schema,
information on each object is converted into a N-Triples (NT) file that is used to generate
the patient graph. Figure 1a shows a toy example for this triple generation.
2. Time separation: Next, each timestamped concept present in the generated NT file has
to be separated along the time axis. To do so, each time step can be defined as either
the time when a discrete change in patient data occurs, or as a fixed change in time. For
this paper, we chose discrete changes as this way the changes are immediately apparent.
Discrete changes are also better when dealing with patient data that extends over years,
for example for chronic (long-term) diseases. From the NT file of each patient, we then
create a new NT file for the graph at each time step. A quad file can also be created, with
each time step as a disjoint graph, as this would behave in a similar manner. Figure 1b
continues with the toy example showing a visualization of the graph separated on the
time dimension.
3. Adding ontology: The ontology of OMOP is contained in two tables in OMOP schema,
i.e. Concept Ancestor and Concept relationship, with both incoming and outgoing
relations defined. To comply with the Tree requirement (see Section 3.1), only the outgoing
relations are taken. In this step, from each patient’s NT file, the leaf nodes are used to
look for matching subject nodes in these two tables. The filtered triples are connected to each
leaf node in the patient graph, duplicated across the time axis.
4. Visualization: As the visualizations have to be dynamic and compatible with the
frameworks of the neural networks, we chose Plotly [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] due to its interactive features and
ease of use to enable the time axis visualization in the fourth and final step. To do so,
each time step of the patient graph is added as a trace to the Plotly figure. Since the
patient graph is modelled as a tree, we use the Reingold-Tilford layout to spread out
the nodes and layers of the tree for better visibility. Additional information about each
node is added to a hover interaction with the node. For visualizing the explanation, the
contribution of each node to the output of the neural network can be calculated/inferred.
For example, attention blocks [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] can be used to gather this information. As such, an
importance factor is calculated for every node, which can be assigned as a color to each node,
allowing for instant visibility of the importance of each node at each time step. One can
also highlight the time step with the most contribution by surfacing it first while viewing
the visualization. This can be interesting, for example, for healthcare professionals.
        </p>
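<p>Steps 1 and 2 above can be sketched in a few lines of Python; this is a minimal illustration, where the event fields, predicate names, and the unique-suffix scheme are our own illustrative assumptions rather than the exact implementation:</p>

```python
from collections import defaultdict
from itertools import count

def to_triples(patient, events):
    """Step 1 (triple generation): emit (subject, predicate, object) triples
    with unique concept nodes, grouped by timestamp.
    `events` is a list of (timestamp, predicate, concept) tuples."""
    counters = defaultdict(count)   # per-concept counter for unique node names
    timed = defaultdict(list)
    for ts, predicate, concept in events:
        node = f"{concept}#{next(counters[concept])}"  # keep instances distinct
        timed[ts].append((patient, predicate, node))
    return timed

def split_by_time(timed):
    """Step 2 (time separation): one triple set per discrete change in the
    patient data, ordered along the time axis. Each set could then be
    serialized to its own NT file (or to one quad file with disjoint graphs)."""
    return [timed[ts] for ts in sorted(timed)]
```

<p>For example, two visits with a repeated chemotherapy treatment yield two disjoint triple sets, with the second chemotherapy instance suffixed differently from the first.</p>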
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Use case: Lung cancer patient representation and visualization</title>
      <p>We applied the presented method on a synthetic lung cancer patient, designed together with
clinical experts from AZ Delta, to empirically demonstrate the applicability of our method.</p>
      <p>Figure 2 shows a screenshot of an interactive example patient visualization developed using
the methodology presented in Section 3.2. Due to the example being synthetic patient data,
the importance factors are assigned randomly. A live example can be found at
https://predictidlab.github.io/Tree-Visualization/. The example shows an overview of the patient information
as well as the sub-graphs for related concepts, extracted from the OMOP ontology and attached
to each instance of the concepts. The example visualizes the patient KGs with the temporal
information retained and showcases the explainability possibilities of the patient KGs. The
visualization clearly displays the most important nodes in a darker shade of red so that a
healthcare professional can, without any deep understanding of the model used, immediately
notice the most significant node to the model at that time step. The same conclusion can be
made across the time dimension as the time step that is the most significant, can be highlighted
by displaying it by default.</p>
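<p>The node coloring and time-step highlighting described above can be sketched without the full Plotly machinery; the color scale and the helper names (<monospace>importance_to_red</monospace>, <monospace>most_significant_step</monospace>) are simplified illustrations, not the exact functions used in the tool:</p>

```python
def importance_to_red(importance):
    """Map an importance factor in [0, 1] to an RGB shade of red:
    0.0 -> pale, 1.0 -> saturated dark red, so important nodes stand out."""
    i = min(max(importance, 0.0), 1.0)
    # interpolate the green/blue channels from 230 (pale) down to 0 (pure red)
    gb = round(230 * (1.0 - i))
    return (200 + round(55 * i), gb, gb)

def most_significant_step(importances_per_step):
    """Pick the time step whose nodes contribute the most in total, so it can
    be surfaced first (displayed by default) in the visualization.
    `importances_per_step` is a list of {node: importance} dicts, one per step."""
    return max(range(len(importances_per_step)),
               key=lambda t: sum(importances_per_step[t].values()))
```

<p>Each per-step color assignment would then be attached to the corresponding Plotly trace, one trace per time step as in Section 3.2.</p>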
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>We have proposed a novel methodology for the representation and visualization of
multi-modal patient information while also incorporating the time dimension.
Bringing this time dimension to patient representation and visualization is important to enable
us to utilise the dynamic nature of the patient data. The resulting representation can be used
in a Graph Neural Network (GNN) for various downstream tasks. Our method explains the
output of this GNN using importance factors which are incorporated in the visualization as
node coloring. This provides a visually easily interpretable explanation of the GNN output.
We showcased the correct functioning of our methodology on a lung cancer use case, using
synthetic patient data. In the near future, we hope to add interactive views of the graph with
highlighting paths for important nodes and real time visualization of graphs.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This study was partially funded by the Flanders AI Research Program and the VLAIO O&amp;O
ADAM project with AZ Delta (HBC.2020.3234).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Schrodt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dudchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Knaup-Gregori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ganzinger</surname>
          </string-name>
          ,
          <article-title>Graph-Representation of Patient Data: a Systematic Literature Review</article-title>
          ,
          <source>J. Med. Syst.</source>
          <volume>44</volume>
          (
          <year>2020</year>
          )
          <fpage>86</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ristoski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Paulheim</surname>
          </string-name>
          ,
          <article-title>Rdf2vec: Rdf graph embeddings for data mining</article-title>
          ,
          <source>in: ISWC</source>
          <year>2016</year>
          ,
          <year>2016</year>
          , p.
          <fpage>498</fpage>
          -
          <lpage>514</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Biemann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Pattichis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. B.</given-names>
            <surname>Kell</surname>
          </string-name>
          ,
          <article-title>What do we need to build explainable ai systems for the medical domain</article-title>
          ?,
          <year>2017</year>
          .
          <source>arXiv:1712.09923</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Steenwinckel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Vandewiele</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Weyns</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Agozzino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Turck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ongenae</surname>
          </string-name>
          ,
          <article-title>INK: knowledge graph embeddings for node classification</article-title>
          ,
          <source>Data Mining and Knowledge Discovery</source>
          <volume>36</volume>
          (
          <year>2022</year>
          )
          <fpage>620</fpage>
          -
          <lpage>667</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>A unified approach to interpreting model predictions</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>Simonyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vedaldi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zisserman</surname>
          </string-name>
          ,
          <article-title>Deep inside convolutional networks: Visualising image classification models and saliency maps</article-title>
          ,
          <source>arXiv preprint arXiv:1312.6034</source>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>W. S.</given-names>
            <surname>Campbell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pedersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>McClay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bastola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Campbell</surname>
          </string-name>
          ,
          <article-title>An alternative database approach for management of snomed ct and improved patient data queries</article-title>
          ,
          <source>Journal of Biomedical Informatics</source>
          <volume>57</volume>
          (
          <year>2015</year>
          )
          <fpage>350</fpage>
          -
          <lpage>357</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>B.</given-names>
            <surname>Aldughayfiq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ashfaq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. Z.</given-names>
            <surname>Jhanjhi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Humayun</surname>
          </string-name>
          ,
          <article-title>Capturing semantic relationships in electronic health records using knowledge graphs: An implementation using mimic iii dataset and graphdb</article-title>
          ,
          <source>Healthcare</source>
          <volume>11</volume>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>E.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. F.</given-names>
            <surname>Stewart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>MiME: Multilevel medical embedding of electronic health records for predictive healthcare</article-title>
          ,
          <year>2018</year>
          .
          <source>arXiv:1810.09593</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Dusenberry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Flores</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <article-title>Graph convolutional transformer: Learning the graphical structure of electronic health records</article-title>
          ,
          <year>2019</year>
          .
          <source>arXiv:1906.04716</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <article-title>Patient similarity via joint embeddings of medical knowledge graph and medical entity descriptions</article-title>
          ,
          <source>IEEE Access 8</source>
          (
          <year>2020</year>
          )
          <fpage>156663</fpage>
          -
          <lpage>156676</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Dudáš</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lohmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Svátek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pavlov</surname>
          </string-name>
          ,
          <article-title>Ontology visualization methods and tools: a survey of the state of the art</article-title>
          ,
          <source>The Knowledge Engineering Review</source>
          <volume>33</volume>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Serrano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nunes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pesquita</surname>
          </string-name>
          ,
          <article-title>Vowlexplain: Knowledge graph visualization for explainable artificial intelligence</article-title>
          ,
          <source>VOILA - ISWC</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lohmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Negru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Haag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ertl</surname>
          </string-name>
          ,
          <article-title>Visualizing ontologies with vowl</article-title>
          ,
          <source>Semantic Web</source>
          <volume>7</volume>
          (
          <year>2016</year>
          )
          <fpage>399</fpage>
          -
          <lpage>419</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J.</given-names>
            <surname>Roemer</surname>
          </string-name>
          ,
          <article-title>Improving patient outcomes with graph algorithms</article-title>
          ,
          <year>2020</year>
          . URL: https://neo4j.com/blog/improving-patient-outcomes-algorithms-graphconnect.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <source>OMOP CDM v5.4</source>
          ,
          <year>2023</year>
          . URL: https://ohdsi.github.io/CommonDataModel/cdm54.html.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>Plotly Technologies Inc.</string-name>
          ,
          <source>Collaborative data science</source>
          ,
          <year>2015</year>
          . URL: https://plot.ly.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <year>2017</year>
          .
          <source>arXiv:1706.03762</source>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>