<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Exploring the potentials and challenges of Artificial Intelligence in supporting clinical diagnostics and remote assistance for the health and well-being of individuals</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andrea Berti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rossana Buongiorno</string-name>
          <email>rossana.buongiorno@isti.cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianluca Carloni</string-name>
          <email>gianluca.carloni@isti.cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claudia Caudai</string-name>
          <email>claudia.caudai@isti.cnr.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giulio Del Corso</string-name>
          <email>giulio.delcorso@isti.cnr.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Danila Germanese</string-name>
          <email>danila.germanese@isti.cnr.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eva Pachetti</string-name>
          <email>eva.pachetti@isti.cnr.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maria Antonietta Pascali</string-name>
          <email>maria.antonietta.pascali@isti.cnr.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sara Colantonio</string-name>
          <email>sara.colantonio@isti.cnr.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Information Engineering, University of Pisa</institution>
          ,
          <addr-line>Via Caruso 16,56122, Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute of Information Science and Technologies, ISTI, National Research Council of Italy</institution>
          ,
          <addr-line>CNR, via G. Moruzzi, 1, Pisa, 56124</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Visual intelligence, Medical imaging</institution>
          ,
          <addr-line>Radiomics, Imaging bio-banks, Assistive technologies, Trustworthy AI</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>1</volume>
      <issue>0</issue>
      <fpage>29</fpage>
      <lpage>31</lpage>
      <abstract>
        <p>Innovative technologies powered by Artificial Intelligence have great potential to support new models of care delivery, disease prevention and quality-of-life promotion. The ultimate goal is a paradigm shift towards more personalized, accessible, effective, and sustainable care and health systems. Nevertheless, despite the advances in the field over the last years, the adoption and deployment of AI technologies remain limited in clinical practice and real-world settings. This paper summarizes the activities that a multidisciplinary research group within the Signals and Images Lab of the Institute of Information Science and Technologies of the National Research Council of Italy is carrying out to explore both the potential of AI in health and well-being and the challenges to its uptake in real-world settings.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The health and care landscape is changing significantly, thanks to the continuous advances in scientific discoveries and in diagnostic and therapeutic procedures. Though undeniably advantageous for the health outcomes of individuals, this progress may increase clinicians’ and physicians’ workload and thus affect the quality of their professional life.</p>
      <p>
        Computerised technologies powered by Artificial Intelligence (AI) have the potential to relieve this issue, thanks to their ability to integrate multi-modal data, covering both those topics whose medical knowledge is not yet well consolidated, such as the remote monitoring of individuals, and the development of Computer-Aided Diagnosis (CAD) systems [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>Similarly, several factors seem to hinder the uptake in real-life settings of AI-powered technologies for life-logging and remote assistance to individuals [6]. Classical usability analyses have proven to fall short when considering the eventual acceptance and adoption of assistive technologies by their end beneficiaries (i.e., assisted subjects/patients and their caregivers). The need to take into account other individual concerns, such as trust in technology, data security and privacy, has lately become evident [7].</p>
      <p>This paper summarizes the activities that a multidisciplinary research group within the Signals and Images Lab of the Institute of Information Science and Technologies of the National Research Council of Italy is carrying out to explore both the potential of AI in health and well-being and the challenges to its uptake in real-world settings.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Visual AI for clinical diagnostics</title>
      <p>In the following sections, we briefly overview the work done in the field of AI supporting clinical diagnostics. In most cases, the focus is on medical imaging, as radiology is expected to be the discipline that will benefit most from the recent progress of AI in visual perception. A discussion of the challenges in this domain and the work done to address them concludes this section.</p>
      <sec id="sec-2-1">
        <title>2.1. Visual AI for the prediction of prostate cancer aggressiveness in Magnetic Resonance Imaging</title>
        <p>Prostate cancer (PCa) is the most frequent male neoplasm in European men. Assessing PCa aggressiveness is key to steer patient management. Currently, the gold standard for determining tumour aggressiveness is biopsy, which is unfortunately an invasive and uncomfortable procedure. Before the biopsy, physicians recommend an investigation by multi-parametric Magnetic Resonance Imaging (mpMRI), which may serve the radiologist to gather an initial assessment of the tumour, based on visual inspection and evaluation according to the PI-RADS standard.</p>
        <p>A quantitative assessment of mpMRI might provide the radiologist with a repeatable and non-invasive tool that decreases intra- and inter-reader variability. In this view, in collaboration with a team from the CNR Institute of Physics “Nello Carrara” and the University Hospital of Careggi in Florence, we initially investigated the potential of high-dimensional radiomics analyses to identify the phenotypic differences of tumour traits [8]. We extracted radiomic features of different orders from T2w and ADC map images (see Figure 1), and applied a wrapper, feed-forward feature selection method to select the most relevant ones for distinguishing non-aggressive (i.e., low grade according to the biopsy Gleason score) from aggressive (i.e., high grade according to the biopsy Gleason score) PCa. A non-linear SVM classifier, trained in cross-validation on the 57 cases, achieved an accuracy of 93% (sens 90.2%, spec 100%, F-score 94.9%) and an AUC of 99%.</p>
        <p>After increasing the sample size to 104 patients, we designed and trained Deep Learning (DL) models, without and with an attention mechanism, to predict PCa aggressiveness from differently pre-processed data, namely lesion-centred cropped T2w and ADC images, and lesion-selected T2w and ADC images. Imaging data were acquired in diverse time frames, in accordance with two PI-RADS protocols (i.e., 2.0 and 2.1). We adopted a robust framework to train and test the models, based on nested, stratified, multiple-split and bootstrap cross-validation, leaving aside a test set of 14 cases. The DL model with attention trained on lesion-centred cropped T2w images achieved the overall best performance. Nevertheless, the performance consistently dropped when applying the model to data acquired with a different PI-RADS protocol, thus showing the limited generalization capacity of the model [9].</p>
        <p>To further investigate the potential of the attention mechanism, we designed and trained a 3D Vision Transformer (ViT) able to process volumetric scans, and we optimized it, via a grid search, on the freely available ProstateX-2 challenge dataset by training it from scratch [10]. As a term of comparison, we also designed a 3D Convolutional Neural Network (CNN) and optimized it in a similar fashion. Our preliminary investigations showed that Vision Transformers, even without extensive optimization and customization, can ensure improved performance with respect to CNNs and might be comparable with other more fine-tuned solutions. Trained in 5-fold cross-validation, the ViT reached an average AUC of 77.5% (sens 75%, spec 56.7%, F2-score 52.3%) on the test set.</p>
      </sec>
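As an illustration of the radiomics pipeline described above (a wrapper, feed-forward feature selection step followed by a non-linear SVM evaluated in cross-validation), the following is a minimal sketch on synthetic data; it is not the authors' code, and the dataset shapes and hyperparameters are stand-ins.

```python
# Illustrative sketch (synthetic data, not the study's code): wrapper,
# forward feature selection wrapped around a non-linear SVM, with the
# whole pipeline evaluated in stratified cross-validation so that
# feature selection is nested inside each CV fold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for a radiomic feature matrix: 57 cases, 100 features,
# binary target (non-aggressive vs. aggressive lesion).
X, y = make_classification(n_samples=57, n_features=100, n_informative=8,
                           random_state=0)

svm = SVC(kernel="rbf", C=1.0)  # non-linear SVM classifier
selector = SequentialFeatureSelector(  # wrapper, feed-forward selection
    svm, n_features_to_select=5, direction="forward",
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))

model = make_pipeline(StandardScaler(), selector, svm)

# Stratified cross-validated accuracy of the whole pipeline.
scores = cross_val_score(
    model, X, y,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=1))
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Keeping the selector inside the pipeline ensures features are re-selected on each training fold, avoiding the selection bias that inflates cross-validated scores.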
      <sec id="sec-2-3">
        <title>2.2. Radiomics analyses for discriminating parotid gland tumours</title>
        <p>In collaboration with a team from Pisa University Hospital, we investigated the potential of radiomics analyses also for predicting the malignancy of parotid gland tumours from MRI data [11]. Salivary gland tumours are fortunately rare, with an annual worldwide incidence ranging from 0.05 to 2 cases per 100,000 individuals. Almost 80% of tumours affect the parotid glands, and most of them are benign (80%), the pleomorphic adenoma being the most frequent neoplasm, followed by the Warthin tumour. In our study, we evaluated 75 T2-weighted images of parotid gland lesions, of which 61 were benign tumours (32 pleomorphic adenomas, 23 Warthin tumours and 6 oncocytomas) and 14 were malignant tumours. A receiver operating characteristic (ROC) curve analysis was performed to find the threshold values for the most discriminative features and determine their sensitivity, specificity and area under the ROC curve (AUROC). The most discriminative features were used to train an SVM classifier, which was able to distinguish a pleomorphic adenoma from a Warthin tumour (with sensitivity, specificity and diagnostic accuracy as high as 0.8695, 0.9062 and 0.8909, respectively) and from a malignant tumour (sensitivity, specificity and diagnostic accuracy of 0.6666, 0.8709 and 0.8043, respectively). Our work, though preliminary, showed that radiomics analyses on lesions extracted from conventional T2-weighted MR images may be a viable instrument to discriminate pleomorphic adenomas from Warthin tumours and malignant tumours with high sensitivity, specificity and diagnostic accuracy.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.3. Visual AI for Hepatic Steatosis Estimation from Ultrasound Imaging</title>
        <p>Hepatic steatosis is the major histologic feature of Metabolic Dysfunction-Associated Fatty Liver Disease (MAFLD) and is due to the accumulation of fat within the liver. When associated with inflammation, steatosis may cause the progression of fibrosis to cirrhosis and hepatocellular carcinoma. Early detection and accurate quantification of steatosis are essential tasks for preventing disease progression and monitoring its evolution over time.</p>
        <p>Ultrasound examination is the most used technique to non-invasively identify liver steatosis in a screening setting. However, the diagnosis is operator dependent, since quantitative and repeatable image processing techniques have not yet entered clinical practice. In this frame, in collaboration with a team from IFC-CNR and Pisa University Hospital, we designed and trained a simple CNN model able to predict, from ultrasound images (see Figure 2, a clip, i.e., frame, taken from an ultrasound examination of a healthy subject), a fat-liver score aligned with the hepatic fat fraction currently estimated from Magnetic Resonance Spectroscopy (i.e., the H-MRS index). More than 22,000 ultrasound images obtained from a multi-centre dataset of 150 subjects were used to train three regression networks, which were able to predict the fat fraction with a root mean square error of 1.11 in the best case, thus proving to be an effective instrument that might replace the much more expensive MRS [12].</p>
      </sec>
      <sec id="sec-2-5">
        <title>2.4. Visual AI supporting the management of Idiopathic Pulmonary Fibrosis</title>
        <p>A key step of the diagnosis of Idiopathic Pulmonary Fibrosis (IPF) is the examination of high-resolution computed tomography (HRCT) images. IPF exhibits a typical radiological pattern, named the Usual Interstitial Pneumonia (UIP) pattern, which can be detected in non-invasive HRCT investigations, thus avoiding surgical lung biopsy. Unfortunately, the visual recognition and quantification of the UIP pattern can be challenging even for experienced radiologists, due to the poor inter- and intra-reader agreement.</p>
        <p>In collaboration with the radiology unit of Cisanello Hospital in Pisa, we designed and developed a tool for the semantic segmentation and quantification of the UIP pattern in patients with IPF, using a deep-learning method based on a Convolutional Neural Network (CNN) called UIP-net [13]. To train and evaluate the CNN, a dataset of 5,000 images, derived from 20 CT scans of different patients, was used. The network yielded a 96.7% BF-score and 85.9% sensitivity. Once trained and tested, the UIP-net was used to obtain the segmentations of another 60 CT scans of different patients to estimate the volume of lungs affected by the UIP pattern. The measurements were compared, through the Bland-Altman plot, with those obtained using the reference software for the automatic detection of the UIP pattern, named Computer Aided Lungs Informatics for Pathology Evaluation and Rating (CALIPER). The network performance, assessed in terms of both BF-score and sensitivity on the test set and resulting from the comparison with CALIPER, demonstrated that UIP-net can reliably detect and quantify the UIP pattern, thus having the potential to become a supportive tool for radiologists. See Figure 3 (top: the ground truth highlighted in yellow; bottom: UIP-net results in red) for an example of the segmentation results.</p>
        <p>Thanks to its promising performance, the UIP-net is being applied also to the detection of COVID-19 radiological manifestations, which are very similar to the UIP pattern.</p>
      </sec>
      <sec id="sec-2-6">
        <title>2.5. Imaging bio-banking in the quest of FAIR AI research</title>
        <p>The availability of large volumes of high-quality data is essential in today’s data-driven AI research. Bio-banks play a central role in this scenario, as they serve the management and more effective usage of large volumes of data. Besides the more common collections of body fluids and tissues, bio-banks are nowadays advancing to integrate also collections of medical imaging data. Imaging bio-banks are organized repositories of medical images, usually associated with imaging bio-markers. Most of the existing imaging bio-banks focus on cancer-related data and oncology imaging bio-marker collections. Their goal is to exploit the wealth of information held in imaging data to discover novel diagnostic and prognostic bio-markers, especially when considering cancer phenotypes.</p>
        <p>The NAVIGATOR Project, funded by the Tuscany Region, aims to establish the first regional imaging bio-bank, with the goal of boosting precision medicine in oncology. To do this, the Project plans to employ quantitative imaging and multi-omics analyses towards a better understanding of cancer biology, cancer care, and cancer risks [14].</p>
        <p>The building block of the bio-bank design is the definition of the data model, which names and organizes the relationships between the data elements and the properties of real-world entities. For NAVIGATOR, we designed and implemented three separate data models for the storage of imaging and clinical data about colorectal, prostate and gastric cancer [15].</p>
      </sec>
      <sec id="sec-2-7">
        <title>2.6. Challenges to the uptake of AI in clinical practice</title>
        <p>Realizing the full potential and benefit of AI solutions in high-stakes domains, such as clinical diagnostics, mandates high-quality scientific foundations, technical robustness, and responsible development. This vision is at the core of the European strategy for AI, promoting excellence and trust as the main drivers of a beneficial impact of AI. Undeniably, only those applications that guarantee reliability, stakeholders’ trust and acceptance, and total patients’ safety can be expected to have a real impact and uptake in clinical practice. Transparency is a key pillar of trustworthiness. Transparency entails documenting the entire life-cycle of an AI system as well as the underlying principles of its functioning [16].</p>
        <p>Making an AI system transparent by design is key to avoiding any grey area in its functioning and use by decision makers in clinical practice. Therefore, it is an overarching principle of the FUTURE-AI guidelines [17], notably touching upon the Traceability, Explainability and Usability principles. Transparency also ensures that the AI system is reproducible and auditable by design, thus laying the bases for accountability and liability.</p>
        <p>Our group was actively involved in the definition of the FUTURE-AI guiding principles and is currently working, in cooperation with FORTH, within the EU H2020 ProCAncer-I project (GA 952159), on the definition of an AI Model Passport, which is going to include all the relevant information to document the development lifecycle of AI models.</p>
        <p>In cooperation with IMATI-CNR and the Poznań University of Technology, within the EU NoE TAILOR (GA 952215), we contributed to a recent survey of the terminology, recommendations and open issues of the reproducibility of Machine Learning [18].</p>
        <p>Moreover, we worked on effective approaches to increase the transparency of AI and ML models’ decisions, especially in the area of Explainable AI for visual AI models. In this regard, the ProtoPNet model, which breaks down an image into prototypes and uses evidence gathered from the prototypes to classify an image, represents an appealing approach. We explored the applicability of this model to the classification of malignant masses in medical images.</p>
        <p>(Figure: a test image of a malignant mass predicted as malignant by ProtoPNet; prototypes taken from malignant training images activate regions on the test image, with similarity scores 2.052, 0.709, 0.672, 0.630 and 0.607.)</p>
      </sec>
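The ROC-based threshold selection used in the parotid gland study of Section 2.2 (finding the cut-off of a discriminative feature and reading off its sensitivity and specificity) can be sketched as follows. This is a hedged illustration on synthetic feature values, not the study's code; the class sizes (61 benign vs. 14 malignant) mirror the text, and picking the cut-off by Youden's J is one common choice, not necessarily the authors'.

```python
# Illustrative sketch (synthetic data): ROC analysis of a single feature,
# choosing the operating threshold that maximizes Youden's J = sens + spec - 1.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Stand-in feature values: 61 benign lesions (label 0), 14 malignant (label 1).
y = np.array([0] * 61 + [1] * 14)
feature = np.concatenate([rng.normal(1.0, 0.5, 61),
                          rng.normal(2.0, 0.5, 14)])

fpr, tpr, thresholds = roc_curve(y, feature)
j = tpr - fpr                      # Youden's J at each candidate cut-off
best = int(np.argmax(j))
print(f"AUROC: {roc_auc_score(y, feature):.3f}")
print(f"threshold: {thresholds[best]:.3f} "
      f"(sens {tpr[best]:.3f}, spec {1 - fpr[best]:.3f})")
```

The chosen threshold and its sensitivity/specificity are what a per-feature ROC analysis reports before handing the most discriminative features to a downstream classifier.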
    </sec>
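The Bland-Altman comparison mentioned in Section 2.4 (UIP-net volume estimates against the CALIPER reference) reduces to a bias and 95% limits of agreement computed on paired measurements. The sketch below uses synthetic volumes and a helper name of our own; it is not the paper's code.

```python
# Illustrative sketch (synthetic numbers): Bland-Altman agreement between
# two measurement methods, as used to compare UIP-net with CALIPER.
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Return the mean difference (bias) and the 95% limits of agreement."""
    diff = a - b
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)  # limits at bias +/- 1.96 * SD
    return bias, (bias - spread, bias + spread)

rng = np.random.default_rng(1)
# Stand-ins for per-patient affected-lung volumes (litres) from two methods.
caliper = rng.uniform(0.5, 2.5, 60)
uip_net = caliper + rng.normal(0.0, 0.1, 60)

bias, (lo, hi) = bland_altman(uip_net, caliper)
print(f"bias {bias:+.3f} L, limits of agreement [{lo:+.3f}, {hi:+.3f}] L")
```

On a Bland-Altman plot, each patient is a point (mean of the two measurements vs. their difference), with horizontal lines at the bias and the two limits of agreement.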
    <sec id="sec-3">
      <title>3. AI for assistive technologies and health promotion</title>
      <p>The ability of AI techniques to mine and correlate large amounts of data plays a key role in delivering solutions that may support individuals in their daily-life activities through remote monitoring, assistance and care. The goals span from encouraging individuals towards healthier lifestyles, to assisting them in daily-life activities, to preventing and managing chronic or multi-morbidity health conditions. In the most advanced settings, these systems use different approaches to learn about their users and make automated decisions, so as to personalize their services and optimise outcomes. In the following sections, we briefly overview our works in the field, also with respect to the challenges that prevent the acceptance of AI in real-world environments.</p>
      <sec id="sec-3-1">
        <title>3.1. AI for disease prevention</title>
        <p>Active and Assisted Living (AAL) technologies usually address older adults or people in need with diverse types of sensorised, AI-powered applications. A comprehensive review of the AAL technologies taking advantage of AI techniques has recently been published by the team [20] as part of the activities within the Cost Action GoodBrother (CA 19121). Within this Action, a collaboration with a team from the University of Castilla-La Mancha delivered a survey about AI-powered solutions for bedtime monitoring to prevent falls in older adults [21].</p>
        <p>In this field, we also investigated the use of thermal imaging for stress discrimination [22, 23] and the use of an e-nose for monitoring severe liver impairment (see Figure 5) [24].</p>
      </sec>
      <sec id="sec-5-1">
        <title>3.3. Challenges to the uptake of AI AAL in real-world settings</title>
        <p>AI-powered AAL technologies provide promising solutions for the health and social care challenges; nevertheless, they are not exempt from ethical, legal and social issues [25]. From a technical perspective, they need to guarantee robust, accurate, reliable and unobtrusive data acquisition and interpretation in daily-life settings, as well as the security, privacy preservation, safety, and usability that may ensure long-term engagement [26]. Moreover, an ethical approach and a thorough understanding of all issues pertaining to ethics, social equality, legality, and fairness need to be integrated in their early development phases [25].</p>
        <p>Within the Cost Action GoodBrother, we surveyed the existing literature to analyse the specific AI models used in AAL systems, the target domains of the models, the technology using the models, and the major concerns from the end-user perspective. Our goal was to consolidate research on this topic and inform end users, health care professionals and providers, researchers, and practitioners in developing, deploying, and evaluating future intelligent AAL systems. Older adults were the primary beneficiaries, followed by patients and frail persons of various ages. Availability was a top concern of the beneficiaries [6, 27].</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>AI has great potential to improve care and health systems. Nevertheless, future research in the field should involve health care professionals and caregivers as designers and users, comply with health-related regulations, improve transparency and privacy, integrate with the health care technological infrastructure, explain decisions to users, and establish evaluation metrics and design guidelines [6].</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This publication is partially based upon work from COST Action GoodBrother—Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living (CA19121), supported by COST (European Cooperation in Science and Technology); upon the work carried out within the EU H2020 projects ProCAncer-I (GA 952159) and TAILOR (GA 952215); and upon the work carried out within the PAR FAS Tuscany Region NAVIGATOR project.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Scheetz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rothschild</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>McGuinness</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Hadoux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Soyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Janda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Condon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Oakden-Rayner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. J.</given-names>
            <surname>Palmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Keel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>van Wijngaarden</surname>
          </string-name>
          ,
          <article-title>A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>11</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          . doi:10.1038/s41598-021-84698-5.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F. J.</given-names>
            <surname>Voskens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Abbing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. T.</given-names>
            <surname>Ruys</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Ruurda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. A. M. J.</given-names>
            <surname>Broeders</surname>
          </string-name>
          ,
          <article-title>A nationwide survey on the perceptions of general surgeons on artificial intelligence</article-title>
          ,
          <source>Artificial Intelligence Surgery</source>
          <volume>2</volume>
          (
          <year>2022</year>
          ).
          doi:10.20517/ais.2021.10.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Born</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Beymer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Coy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Mukherjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Manica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prasanna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ballah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Guindy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Shaham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Karteris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Robertus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gabrani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rosen-Zvi</surname>
          </string-name>
          ,
          <article-title>On the role of artificial intelligence in medical imaging of COVID-19</article-title>
          ,
          <source>Patterns</source>
          <volume>2</volume>
          (
          <year>2021</year>
          )
          <fpage>100269</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S2666389921000957. doi:10.1016/j.patter.2021.100269.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B. W.</given-names>
            <surname>Israelsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. R.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          , “
          <article-title>Dave...I can assure you...that it’s going to be all right...” A definition, case for, and survey of algorithmic assurances in human-autonomy trust relationships</article-title>
          ,
          <source>ACM Comput. Surv.</source>
          <volume>51</volume>
          (
          <year>2019</year>
          ). URL: https://doi.org/10.1145/3267338. doi:10.1145/3267338.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Yanase</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Triantaphyllou</surname>
          </string-name>
          ,
          <article-title>The seven key challenges for the future of computer-aided diagnosis in medicine</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>