<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>AI in Real-World Medical Imaging Using the SimpleMind Software Environment</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Matthew S. Brown</string-name>
          <email>mbrown@mednet.ucla.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>M. Wasil Wahi-Anwar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Youngwon Choi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Morgan Daly</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Liza Shrestha</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Koon-Pong Wong</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jonathan G. Goldin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dieter R. Enzmann</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences</institution>
          ,
          <addr-line>David Geffen</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>NeSy 2023, 17th International Workshop on Neural-Symbolic Learning and Reasoning, Certosa di Pontignano</institution>
          ,
          <addr-line>Siena</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>School of Medicine at UCLA, University of California</institution>
          ,
          <addr-line>Los Angeles</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Deep neural networks (DNNs) have good overall performance in medical imaging, but they are susceptible to obvious mistakes that violate common sense concepts. Unexplained errors have reduced trust and prevented widespread adoption in real-world clinical practice. We introduce SimpleMind, an open-source Cognitive AI software environment for medical image understanding. It uses a hybrid Neurosymbolic AI approach that integrates both DNNs and machine reasoning from a knowledge base. We demonstrate its use in building trustworthy AI for checking endotracheal tube (ETT) placement on chest X-rays (CXRs). The AI was integrated into clinical practice and the correctness of the ETT misplacement alerts was compared with radiology reports as the reference. A total of 214 CXRs were ordered by ICU physicians to check ETT placement with AI assistance. ETT alert messages had a positive predictive value (PPV) of 42% and a negative predictive value (NPV) of 98%. Physicians indicated that they agreed with the AI outputs, had increased confidence in their decisions, and were more effective with AI assistance.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>When the ETT is misplaced, timely intervention is needed; ICU physicians often take a preliminary look at the CXR at
the bedside and immediately adjust a misplaced tube. However, assessment of tube placement can be
challenging, especially for non-radiologists.</p>
      <p>
        In this paper we introduce SimpleMind, an open-source Cognitive AI software environment for
medical image understanding [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. We demonstrate its use to build trustworthy AI to assist in checking
ETT placement on CXRs in clinical practice and evaluate its real-world performance and physician
acceptance.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Methods</title>
      <p>A SimpleMind application was developed to automatically identify the ETT, trachea, and carina in
CXRs. In SimpleMind, an application is built by specifying a knowledge base that describes expected
characteristics and relationships between image objects. To check the ETT tip placement, a “safe zone”
is defined in the knowledge base as the region inside the trachea and 3 - 7 cm above the carina.
SimpleMind computes this region using spatial inferencing for explainable decisions regarding ETT
placement. During image understanding, SimpleMind uses the knowledge base to guide DNN
segmentation agents and machine reasoning agents that evaluate the results. It enables reasoning on
multiple detected objects to ensure consistency, providing cross-checking between DNN outputs. This
machine reasoning improves the reliability and trustworthiness of DNNs through an interpretable model
and explainable decisions. The CXR application was integrated and evaluated in the clinical imaging
workflow at our institution.
</p>
    </sec>
    <sec id="sec-3">
      <title>2.1. Knowledge Representation</title>
      <p>
        The knowledge base for a SimpleMind application is created as a semantic network (SN) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] where
each node represents an object, object component, or object state. The SimpleMind environment
provides a human-readable intuitive language to specify a semantic network. Each node contains
attributes that describe expected object characteristics relating to size, shape, pixel intensity, and
relative position. Spatial relationships that can be described between objects include part of, right of,
left of, above, below, inside, etc. Attributes are derived from a vocabulary that defines the name of the
attribute and its associated parameters. Relational attributes form the links between nodes in the
semantic network. For example, the vocabulary defines “RightOf”, which includes two parameters: (1)
the related node (forming a relational link between nodes), and (2) the expected distance to the right.
Fuzzy sets are used to represent prior expectations for object characteristics using a confidence function
over the range of possible parameter values [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], so the expected distance to the right is represented in
the knowledge base as a fuzzy membership function. The fuzzy functions can be set initially by a human
expert and refined by learning from data.
      </p>
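      <p>To make the fuzzy representation concrete, the sketch below encodes an expected distance as a trapezoidal membership function; the shape, function names, and parameter values are illustrative assumptions rather than SimpleMind’s actual internal form. A human expert could set the four breakpoints initially, and learning from data would then shift them.</p>

```python
def trapezoid(a, b, c, d):
    """Fuzzy membership: 0 outside [a, d], 1 on the plateau [b, c],
    linear on the ramps between them."""
    def mu(x):
        if x > d or a > x:        # outside the support: impossible
            return 0.0
        if c >= x >= b:           # plateau: fully expected
            return 1.0
        if b > x:                 # rising ramp
            return (x - a) / (b - a)
        return (d - x) / (d - c)  # falling ramp
    return mu

# Hypothetical "RightOf" expectation: distances of 4-6 cm are fully
# plausible, tapering to impossible outside 2-8 cm.
right_of_confidence = trapezoid(2.0, 4.0, 6.0, 8.0)
```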
      <p>The semantic network attributes can also represent procedural knowledge used by processing agents,
including DNN architectures (e.g., U-Net, ResNet, or any user-defined architectures), learning
hyperparameters, and image pre- and post-processing parameters. Crucially, all attribute parameters in the
semantic network are exposed (separate from the processing code) and human readable, so they can be
both specified by a human and auto-optimized by SimpleMind. The SimpleMind environment allows a
DNN agent to train weights using the above attribute parameters from a given SN node. The DNN
weights are then stored with the node, embedding the DNN within the semantic network. Thus, a
SimpleMind knowledge base can include both declared knowledge (that it is “taught”) and learned
knowledge from examples (acquired through machine learning), i.e., we can actively teach the
Cognitive AI as well as have it learn passively from data.</p>
      <p>The CXR knowledge base is derived from the medical literature that states that the tip of the
endotracheal tube should be 5 ± 2 cm above the carina, where the trachea bifurcates into the two main
stem bronchi. The semantic network shown in Fig 1 includes DNNs for the trachea (trachea_cnn),
carina (carina_cnn), and ETT (et_tube_1_cnn and et_tube_2_cnn). It defines a “safe zone” for the ETT
tip using spatial concepts:
• part of the trachea: Line 3 of the et_zone_1 node (Fig 1B)
• 3 - 7 cm above the carina: Line 4 of the et_zone_1 node, based on the y-coordinate of the centroid (Fig 1B)
• the ETT tip must be inside the safe zone: the et_tip_correct node describes this relative to the et_zone node and represents the state of the ETT tip (Fig 1C)
• the ETT path must be within the trachea (and thus not going into the esophagus): the et_path_incorrect node describes this relative to the trachea node and represents the state of the ETT path
• for the ETT position to be correct, the two criteria above must be met; this requirement is defined in the et_tube_correct node, which represents the final decision of the system based on its machine reasoning (Fig 1D)</p>
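      <p>In code terms, the safe-zone test reduces to a distance check above the carina; the following short Python sketch uses hypothetical function and argument names, and passes the in-trachea condition in rather than computing it:</p>

```python
def tip_in_safe_zone(tip_y, carina_y, pixel_spacing_mm, tip_in_trachea=True):
    """ETT tip is correctly placed when it lies inside the trachea
    and 3-7 cm above the carina (image y-coordinates grow downward)."""
    height_cm = (carina_y - tip_y) * pixel_spacing_mm / 10.0
    return tip_in_trachea and 7.0 >= height_cm >= 3.0
```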
      <p>The knowledge base also demonstrates checking of DNN outputs for consistency. The carina_cnn
outputs a coordinate for the position of the carina, represented by the carina_1 node. The carina location
can also be derived from the inferior portion of the trachea where it branches into the two main stem
bronchi, represented by the carina_2 node. The carina_3 node indicates that these two should
correspond and refines the final result. Accurate detection of the carina is necessary for ETT position
checking. Crucially, if the alternate carina locations do not correspond, then the system will report that
it is unable to reliably identify the carina rather than outputting an incorrect result. Using knowledge to
identify interpretation errors is an important benefit of machine reasoning that allows the system to
determine when it is likely to be wrong rather than failing silently.</p>
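      <p>This cross-check can be illustrated with a short sketch; the distance tolerance and the averaging used for refinement are illustrative assumptions, not the deployed system’s exact rule:</p>

```python
def reconcile_carina(carina_1, carina_2, spacing_mm=0.5, tolerance_mm=10.0):
    """Cross-check the DNN-predicted carina point (carina_1) against the
    trachea-derived point (carina_2). Return None when they disagree,
    i.e., report inability rather than an unreliable location."""
    dx = (carina_1[0] - carina_2[0]) * spacing_mm
    dy = (carina_1[1] - carina_2[1]) * spacing_mm
    if (dx * dx + dy * dy) ** 0.5 > tolerance_mm:
        return None  # unable to reliably identify the carina
    # refine the final result by averaging the corresponding estimates
    return ((carina_1[0] + carina_2[0]) / 2, (carina_1[1] + carina_2[1]) / 2)
```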
    </sec>
    <sec id="sec-4">
      <title>2.2. Multi-Agent Thinking</title>
      <p>
        SimpleMind provides a Think Module for computer vision, i.e., recognizing objects (nodes) from
the knowledge base in a given image. Multiple software agents work together to segment the image into
candidate regions, then select the best candidate based on object attributes described in the knowledge
base. Software agents collaborate to solve the vision problem by reading from, and writing to, a global
Blackboard data structure [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The Blackboard is the working space of SimpleMind during the
“thinking” process, i.e., during comparison and matching of the image to a knowledge base for image
understanding. An agent can read information from the Blackboard generated by other agents and add
or update information. Agents operate independently and collaborate only via the Blackboard, giving
them a degree of autonomy and making the system more flexible and scalable. Agent types provided in
SimpleMind are shown in Fig 2.
      </p>
      <p>For each node in the semantic network, a data structure called a Solution Element is created on the
Blackboard, corresponding to an object to be recognized in the image. The Solution Element stores all
agent contributions while recognizing the object. Knowledge base attributes and candidate image
regions are transformed into a common feature space for selection of the best candidate. A Knowledge
Agent accesses the knowledge base and creates each Solution Element, initializing it with prior
expectations for object feature values. Thus, the objects and their relationships represented in the
semantic network are transformed into a directed graph of Solution Elements on the Blackboard, with
the direction of the link reflecting the dependency of an object’s attribute upon another object.</p>
      <p>Solution Elements are processed sequentially by agents. Objects are recognized in order based on
the directed links between their Solution Elements. When a particular Solution Element is scheduled
for processing (by a Scheduling Agent), a Reasoning Agent computes an image search area using spatial
inferencing from the relationships to previously recognized objects. This search area is provided as a
mask to guide a Segmentation Agent that generates candidate image regions for the object.
Segmentation is typically performed by a DNN agent that generates multiple connected components as
candidate regions. Feature values are computed for each candidate region and compared against the
expected values by a Reasoning Agent. Feature values are computed for each candidate according to
the attributes provided in the knowledge base and the corresponding fuzzy membership function yields
a confidence value for that attribute. The overall confidence for a candidate region is then computed as
the minimum confidence of any attribute. Thus, by pattern classification, the candidate that best matches
these expectations from the knowledge base is selected.</p>
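      <p>The min-over-attributes confidence and best-candidate selection can be sketched as follows; candidate dictionaries and attribute names here are hypothetical stand-ins for the feature space described above:</p>

```python
def best_candidate(candidates, attribute_mus):
    """Each candidate is a dict of computed feature values; each attribute's
    fuzzy membership function yields a confidence, the overall confidence is
    the minimum over attributes, and the best-matching candidate is selected."""
    scored = []
    for cand in candidates:
        confs = [mu(cand[name]) for name, mu in attribute_mus.items()]
        scored.append((min(confs), cand))
    conf, winner = max(scored, key=lambda s: s[0])
    return winner, conf
```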
      <p>Agents are activated iteratively, one at a time. At each iteration an activation score is computed for
each registered agent. The agent with the highest score is activated and can contribute to the solution
on the Blackboard. Each agent provides a function to compute its activation score based on the contents
of the Blackboard, in particular whether the Solution Element being processed has the relevant attributes
and necessary data required by the agent. The process repeats until all activation scores are zero and no
further agents activate. The system control is simple, yet highly flexible, with agent priorities
determined through their activation functions.</p>
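      <p>The control loop described above can be sketched in a few lines; the duck-typed agent interface (an activation() score and a run() method) is a simplification of SimpleMind’s actual agent API:</p>

```python
def think(agents, blackboard):
    """Activate the highest-scoring agent, one at a time, until every
    registered agent reports an activation score of zero."""
    while True:
        best_agent = max(agents, key=lambda a: a.activation(blackboard))
        if best_agent.activation(blackboard) == 0:
            return blackboard  # no further agents activate
        best_agent.run(blackboard)
```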
      <p>The contents of the Blackboard reflect what SimpleMind is thinking at any point in time and its
current understanding of the image. Once all Solution Elements have been processed, the Blackboard
contains an instantiation of the general knowledge base to a particular image. The object attributes from
the general knowledge base are now instantiated with actual numerical feature values from the image,
enabling further high-level reasoning.
</p>
    </sec>
    <sec id="sec-5">
      <title>2.3. Machine Learning</title>
      <p>
        SimpleMind provides a Learn Module for machine learning within a knowledge base, in particular
for training the weights of embedded DNNs. For the CXR AI, we used 2000 images collected
retrospectively from ICU patients between April 2018 and September 2019. All of the DNN nodes are
trained with 1488 images, and the application was initially tested experimentally on 512 images [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        Although not used in this application, the SimpleMind environment also includes a Knowledge
Network Learning and Optimization (KNoLO) method. It comprehensively co-optimizes all attribute
parameters from all nodes simultaneously, including expected object characteristics, DNN input
channels and image preprocessing options, and DNN learning hyperparameters. The parameter
optimization is performed using a genetic algorithm and details can be found in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
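      <p>For intuition only, a toy genetic search over a parameter vector might look like the sketch below; KNoLO’s actual encoding, operators, and fitness function are described in [5], and everything here (population size, mutation, elitist selection) is an illustrative assumption:</p>

```python
import random

def genetic_optimize(fitness, init_params, generations=20, pop=16, sigma=0.1):
    """Toy genetic search: keep the fittest half of the population each
    generation (elitism) and refill with Gaussian-mutated copies."""
    population = [[p + random.gauss(0, sigma) for p in init_params]
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]
        population = survivors + [[p + random.gauss(0, sigma) for p in s]
                                  for s in survivors]
    return max(population, key=fitness)
```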
    </sec>
    <sec id="sec-6">
      <title>2.4. Implementation and Evaluation in Clinical Practice</title>
      <p>The CXR AI is currently deployed within our institution’s clinical workflow for investigational use
only as a quality improvement (QI) tool. A new cloud-based computing infrastructure was designed to
integrate the CXR AI system with the clinical Picture Archiving and Communications System (PACS).
An image router was configured to push CXRs to an on-premise Azure AI/ML platform where the AI
system is deployed. The CXR AI processes the image, detects tubes and anatomic landmarks on the
image, and generates an enhanced CXR image with overlays and an alert/informational message that is
pushed back to the PACS. Both the original and AI CXR images are available in PACS viewers for the
radiologist or ICU physician. The total turnaround time from CXRs reaching PACS to the AI output
being available in PACS is within 3 to 4 minutes, ensuring that AI outputs are available to ICU
physicians at the point of care during their CXR review. A specific order code was set up for CXR with
AI processing, providing a limited deployment on identifiable cases to be reviewed by a selected pool
of ICU physicians and radiologists. From June 11, 2021 to November 3, 2022, 214 CXRs were ordered
by ICU physicians through this specific order code for checking ETT placement with AI assistance.</p>
      <p>
        The AI displays one of three possible ETT messages: (1) “Found” (ETT tip was determined to
be in the safe zone), (2) “Position Alert” (ETT tip was not in the safe zone or the AI could not determine
the safe zone), (3) “Not Found” (no ETT was detected by the AI). The AI alerts were evaluated against
the findings in the radiology report in which the radiologists were asked to include the following
statement: “An investigational endotracheal tube AI overlay was available and was/was not consistent
with my interpretation”. We evaluated the AI performance by defining a positive output (alert) as
messages (2) and (3), and a negative output (no alert) as message (1). When the AI output was positive,
a true positive (TP) required that the ETT be misplaced per the radiology report or that the ETT was
missing (since cases being routed to AI were expected to have an ETT), otherwise it was a false positive
(FP). When the output was negative, a false negative (FN) required the ETT to be misplaced, otherwise
it was a true negative (TN). When alerts were issued, follow-up CXRs were reviewed and radiology
reports checked to confirm repositioning of the tube. Positive predictive value (PPV = TP/(TP+FP))
and negative predictive value (NPV = TN/(TN+FN)) metrics were computed to give a sense of
trustworthiness of the AI from a physician perspective. In previous experimental testing, the PPV, NPV,
and sensitivity to misplaced tubes were 42%, 99%, and 95%, respectively [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The AI system was
designed to be highly sensitive to avoid missed alerts when ETTs were misplaced, thus higher NPV and
lower PPV were considered sufficient for the system to be deployed in clinical practice and further
evaluated as described in this paper.
      </p>
      <p>A survey was conducted to qualitatively evaluate the ICU physicians’ and radiologists’ experience in
using the CXR AI in their clinical workflow. They were asked to provide ratings for usefulness and
satisfaction with the AI clinical application.</p>
    </sec>
    <sec id="sec-7">
      <title>3. Results</title>
      <p>For the 214 CXR images ordered to check ETT placement with AI assistance by ICU physicians, a
confusion matrix is shown in Table 1. The AI alert messages had a positive predictive value (PPV) of
42% (21 / (21 + 29)) and a negative predictive value (NPV) of 98% (161 / (161 + 3)) based on the
radiology reports. These performance metrics were consistent with clinical requirements and previous
experimental testing.</p>
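      <p>The reported values follow directly from the Table 1 counts, as this short check shows:</p>

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive value from confusion counts."""
    return tp / (tp + fp), tn / (tn + fn)

# Counts from Table 1 (214 clinical CXRs): 21 TP, 29 FP, 161 TN, 3 FN.
ppv, npv = predictive_values(tp=21, fp=29, tn=161, fn=3)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # prints: PPV = 42%, NPV = 98%
```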
      <p>The AI generates CXR overlays, showing the ETT path and distance from the ETT tip to the carina
as shown in Figs 3 and 4. Fig 3 is a case with correct ETT position showing the internal results of
SimpleMind (Fig 3B,C) that explain why it thinks the position is correct (tip inside the safe zone) and
the final output of the system as presented to the ICU physician (Fig 3D). Fig 4 shows an example of
incorrect ETT placement, with the tip too low relative to the carina (outside of the safe zone).</p>
      <p>Seven clinicians completed the user survey: three were radiologists with 9 - 26 years of experience,
four were physicians with 1 - 10 years of experience in critical care medicine. Five of the seven
clinicians had reviewed over 20 CXRs with AI, and two had reviewed over 50. Table 2 summarizes the
frequency and median ratings. Users indicated that they agreed with the AI outputs, had increased
confidence in their decisions, and were more effective with AI assistance. The trust and willingness to
adopt the system was further confirmed in weekly user group meetings. The “Usefulness and
Satisfaction” statements rated in the survey (Table 2) were: “It helps me be more effective”, “It helps
me be more productive”, “It works the way I want it to work”, and “I am satisfied with it”.</p>
    </sec>
    <sec id="sec-8">
      <title>4. Discussion</title>
      <p>SimpleMind brings explainability and trustworthiness to ETT placement checking on CXRs using a
knowledge base that describes not only the ETT but also relevant anatomic landmarks and includes
relational attributes to cross-check multiple DNNs and ensure consistency and overall reliability of the
system. Rather than attempting to learn misplacement of the ETT indirectly from examples, the
SimpleMind knowledge base can directly describe when an alert should be given based on the tip
location relative to the carina.</p>
      <p>
        SimpleMind is a Cognitive AI software environment that enables users to build applications for
image understanding by specifying a knowledge base in the human-readable language of SimpleMind
and then tuning its parameters. Developing a SimpleMind application is like teaching or instructing a
human at a cognitive level; it allows non-programmers to build a medical application directly and
completely using their domain knowledge without knowing the details of the processing code. At
runtime, the knowledge base is applied to recognize objects. SimpleMind is open source
(https://gitlab.com/sm-ai-team/simplemind) and the environment can be extended through application
programming interfaces (APIs) whereby developers can expand the vocabulary and implement new
processing algorithms as agents. SimpleMind automatically handles the aggregation and chaining of
many processing agents, enabling a multi-DNN Cognitive AI system. It has also been applied to
segmentation of the kidney on CT [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ] and the prostate on MRI [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        SimpleMind can be considered a “hybrid learning system” that brings together features from
connectionism and symbolic AI. Four key advantages have been suggested as arising from this
combined approach [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]: (1) interpretability, (2) error recovery, (3) out of distribution (OOD) handling,
and (4) learning from small data; and SimpleMind supports DNNs accordingly:
• It allows explicit knowledge to be applied systematically to improve performance and reliability.
• Computing a search area in which to apply DNN segmentation using spatial relationships in the semantic network.
• Selection of the best candidate image region outputted by the DNN based on expected characteristics defined in the knowledge base, or conversely, rejection of the output if it does not meet expectations. It enables reasoning on multiple detected objects, providing cross-checking between DNN outputs for more robust image interpretation. Rejecting candidates for an object does not preclude recognition of subsequent objects based on other knowledge and avoids propagating errors. This gives SimpleMind applications more resilience in handling OOD cases and error recovery.
• It provides a high degree of interpretability and explainability.
      </p>
      <p> The knowledge base makes explicit the knowledge that was previously implicit in pre and
post processing code and makes it easier to apply more knowledge intuitively.
 The thinking of SimpleMind as it processes an image is captured in the Blackboard. A human
can know what it was thinking by reviewing the Blackboard contents.
 Using a human-provided knowledge base, SimpleMind can perform object recognition with little
or no training data.
 When there is insufficient data to train a DNN, other segmentation agents (e.g., intensity
thresholding or edge detection) can use the knowledge base to generate initial segmentation
results. Little or no training data is needed since the initial semantic network can be
constructed using declarative knowledge rather than machine learning. These initial results
can be used with manual editing to generate training sets for DNN learning. When an OOD
situation arises it can be added to the knowledge base and handled without training data being
initially available.</p>
      <p>
        These benefits are also consistent with goals of trustworthy AI according to the High-Level Expert
Group on AI from the European Commission [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], in particular the following guidelines:
• Transparency: AI systems and their decisions should be explained.
• Technical robustness and safety: AI systems need to be resilient, with a fallback plan in case something goes wrong.
• Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions with proper oversight mechanisms.
      </p>
    </sec>
    <sec id="sec-9">
      <title>5. Conclusion</title>
      <p>SimpleMind is a Neurosymbolic AI environment for medical imaging that supports DNNs with a
knowledge base and machine reasoning. It was used to build an AI for checking ETT placement on
CXR that was adopted and evaluated as trustworthy in real-world clinical practice. We believe that
there is strong potential utility for broader research and commercial applications in building trustworthy
AI. The open source software allows for knowledge base expansion and agent aggregation by a
community of developers.</p>
    </sec>
    <sec id="sec-10">
      <title>6. Acknowledgements</title>
      <p>The authors wish to thank their friends and colleagues at the UCLA Center for Computer Vision and
Imaging Biomarkers.</p>
    </sec>
    <sec id="sec-11">
      <title>7. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>T.</given-names>
            <surname>Panch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Mattie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.A.</given-names>
            <surname>Celi</surname>
          </string-name>
          .
          <article-title>The “inconvenient truth” about AI in healthcare</article-title>
          .
          <source>NPJ digital medicine</source>
          .
          <year>2019</year>
          ;
          <volume>2</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>3</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Hasani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.A.</given-names>
            <surname>Morris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rahmim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.M.</given-names>
            <surname>Summers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Siegel</surname>
          </string-name>
          , et al.
          <article-title>Trustworthy artificial intelligence in medical imaging</article-title>
          .
          <source>PET clinics</source>
          .
          <year>2022</year>
          ;
          <volume>17</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dehmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Emmert-Streib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cucchiara</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Augenstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Del</given-names>
            <surname>Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Samek</surname>
          </string-name>
          , I. Jurisica,
          <string-name>
            <given-names>N.</given-names>
            <surname>Díaz-Rodríguez</surname>
          </string-name>
          .
          <article-title>Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence</article-title>
          .
          <source>Information Fusion</source>
          .
          <year>2022</year>
          :
          <volume>79</volume>
          , (
          <issue>3</issue>
          ),
          <fpage>263</fpage>
          --
          <lpage>278</lpage>
          , doi:10.1016/j.inffus.
          <year>2021</year>
          .
          <volume>10</volume>
          .007
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.R.</given-names>
            <surname>Goodman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.A.</given-names>
            <surname>Conrardy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Laing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.M.</given-names>
            <surname>Singer</surname>
          </string-name>
          .
          <article-title>Radiographic evaluation of endotracheal tube position</article-title>
          .
          <source>AJR Am J Roentgenol</source>
          .
          <year>1976</year>
          ;
          <volume>127</volume>
          (
          <issue>3</issue>
          ):
          <fpage>433</fpage>
          -
          <lpage>434</lpage>
          . doi:10.2214/ajr.127.3.433.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.W.</given-names>
            <surname>Wahi-Anwar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.S.</given-names>
            <surname>Brown</surname>
          </string-name>
          .
          <article-title>SimpleMind adds thinking to deep neural networks</article-title>
          .
          <year>2022</year>
          . arXiv:2212.00951 [cs.AI].
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.R.</given-names>
            <surname>Quillian</surname>
          </string-name>
          .
          <article-title>Semantic Networks</article-title>
          . In:
          <string-name>
            <given-names>M.L.</given-names>
            <surname>Minsky</surname>
          </string-name>
          , editor.
          <source>Semantic Information Processing</source>
          . MIT Press;
          <year>1968</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.A.</given-names>
            <surname>Zadeh</surname>
          </string-name>
          .
          <article-title>Fuzzy sets</article-title>
          .
          <source>Information and Control</source>
          .
          <year>1965</year>
          ;
          <volume>8</volume>
          (
          <issue>3</issue>
          ):
          <fpage>338</fpage>
          -
          <lpage>353</lpage>
          . doi:10.1016/S0019-9958(65)90241-X.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Corkill</surname>
          </string-name>
          .
          <article-title>Collaborating Software</article-title>
          . In: International Lisp Conference, New York. vol.
          <volume>44</volume>
          ;
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.S.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.P.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Shrestha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wahi-Anwar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Daly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Foster</surname>
          </string-name>
          , et al.
          <article-title>Automated Endotracheal Tube Placement Check Using Semantically Embedded Deep Neural Networks</article-title>
          .
          <source>Academic Radiology</source>
          .
          <year>2023</year>
          ;
          <volume>30</volume>
          (
          <issue>3</issue>
          ):
          <fpage>412</fpage>
          -
          <lpage>430</lpage>
          . PMID: 35644754. doi:10.1016/j.acra.2022.04.022.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Melendez-Corres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.W.</given-names>
            <surname>Wahi-Anwar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Coy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.S.</given-names>
            <surname>Raman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.S.</given-names>
            <surname>Brown</surname>
          </string-name>
          .
          <article-title>Accelerating training data annotation via a continuous AI-assisted, human-supervised feedback loop in kidney segmentation in CT</article-title>
          ;
          <year>2021</year>
          . Available from: http://archive.rsna.org/2021/704158.html.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Melendez-Corres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.W.</given-names>
            <surname>Wahi-Anwar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Coy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.S.</given-names>
            <surname>Raman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.S.</given-names>
            <surname>Brown</surname>
          </string-name>
          .
          <article-title>Machine reasoning for segmentation of the kidneys on CT images: improving CNN performance by incorporating anatomical knowledge in post-processing</article-title>
          . Available from: https://spie.org/medicalimaging/presentation/Machine-reasoning-for-segmentation-of-the-kidneys-on-CT-images/12465-63.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Garcia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.S.</given-names>
            <surname>Raman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.R.</given-names>
            <surname>Enzmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.S.</given-names>
            <surname>Brown</surname>
          </string-name>
          .
          <article-title>AI-human interactive pipeline with feedback to accelerate medical image annotation</article-title>
          . In:
          <source>Medical Imaging 2022: Computer-Aided Diagnosis</source>
          . vol.
          <volume>12033</volume>
          . SPIE;
          <year>2022</year>
          . p.
          <fpage>741</fpage>
          -
          <lpage>747</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.K.</given-names>
            <surname>Sarker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Eberhart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hitzler</surname>
          </string-name>
          .
          <article-title>Neuro-Symbolic Artificial Intelligence: Current Trends</article-title>
          ;
          <year>2021</year>
          . Available from: https://arxiv.org/abs/2105.05330.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ala-Pietilä</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bonnet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Bergmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bielikova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bonefeld-Dahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Bauer</surname>
          </string-name>
          <string-name>
            <surname>W</surname>
          </string-name>
          , et al.
          <article-title>The assessment list for trustworthy artificial intelligence (ALTAI)</article-title>
          .
          <source>European Commission</source>
          ;
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>