<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>Ital-IA</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>From Covid-19 detection to cancer grading: how medical-AI is boosting clinical diagnostics and may improve treatment</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andrea Berti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rossana Buongiorno</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianluca Carloni</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claudia Caudai</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Conti</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giulio Del Corso</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Danila Germanese</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Davide Moroni</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eva Pachetti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maria Antonietta Pascali</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sara Colantonio</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Information Engineering, University of Pisa</institution>
          ,
          <addr-line>Via Caruso 16,56122, Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Mathematics, University of Pisa</institution>
          ,
          <addr-line>Largo B. Pontecorvo, 56126, Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Institute of Information Science and Technologies, ISTI, National Research Council of Italy</institution>
          ,
          <addr-line>CNR, via G. Moruzzi, 1, Pisa, 56124</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>3</volume>
      <fpage>29</fpage>
      <lpage>31</lpage>
      <abstract>
<p>The integration of artificial intelligence (AI) into medical imaging has ushered in an era of transformation in healthcare. This paper presents the research activities that a multidisciplinary research group within the Signals and Images Lab of the Institute of Information Science and Technologies of the National Research Council of Italy is carrying out to explore the great potential of AI in medical imaging. From the convolutional neural network-based segmentation of Covid-19 lung patterns to the radiomic signature for benign/malignant breast nodule discrimination, to the automatic grading of prostate cancer, this work highlights the paradigm shift that AI has brought to medical imaging, revolutionizing diagnosis and patient care.</p>
      </abstract>
      <kwd-group>
<kwd>Visual intelligence</kwd>
        <kwd>Medical imaging</kwd>
        <kwd>Radiomics</kwd>
        <kwd>Convolutional Neural Networks</kwd>
        <kwd>Deep Neural Networks</kwd>
        <kwd>Trustworthy AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Medical imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound (US) play a key role in providing healthcare professionals with detailed and exhaustive visual data of the human body. These imaging techniques generate significant amounts of data that require efficient analysis and interpretation. This is where Artificial Intelligence (AI) comes in. AI may emulate human cognitive processes in analyzing and understanding healthcare data. By focusing on the analysis of biomedical images using computational techniques such as object detection, segmentation and registration, AI has the potential to enhance diagnostic and prognostic accuracy by identifying patterns and correlations that may be difficult for humans to observe [<xref ref-type="bibr" rid="ref1">1</xref>].</p>
      <p>In the past, the use of AI in medicine was constrained by technological limitations until 1998, when the US Food and Drug Administration (FDA) approved the first computer-aided detection (CAD) system for mammography [<xref ref-type="bibr" rid="ref2">2</xref>]. Since then, there has been exponential growth in the use of AI techniques in the medical field.</p>
      <p>Today, hospitals are actively exploring AI solutions to support operational efforts aimed at improving cost efficiency, increasing diagnostic accuracy, and fostering greater patient satisfaction. However, it is important to strike a delicate balance between promoting the benefits of AI in clinical practice, which are evident, and addressing concerns about the transparency, trustworthiness, and potential bias of AI algorithms.</p>
      <p>This paper summarises the ongoing activities of a multidisciplinary research group within the Signals and Images Lab of the Institute of Information Science and Technologies of the National Research Council of Italy. The group aims to explore the potential applications of AI in promoting and supporting health and well-being, while also addressing the challenges related to algorithms’ explainability and transparency.</p>
    </sec>
    <sec id="sec-1-0">
      <title>2. AI for clinical diagnosis</title>
      <p>In the following, we provide a brief overview of the research conducted in the field of AI supporting clinical diagnostics. The primary focus is on medical imaging, given that radiology is expected to benefit most from recent advancements in AI.</p>
      <sec id="sec-1-1">
        <title>2.1. AI for Fatty Liver Content Estimation from US Imaging</title>
        <p>Hepatic steatosis is characterized by the accumulation of fat in the liver; its detection and the assessment of the liver fat fraction are crucial tasks for predicting the disease progression. Magnetic Resonance Spectroscopy is the gold standard for the fat fraction assessment, while US imaging is commonly used to identify liver steatosis during screenings. Despite being non-invasive, US is highly operator-dependent [<xref ref-type="bibr" rid="ref4">4</xref>].</p>
        <p>In collaboration with a team from the IFC-CNR and the Pisa University Hospital, we conducted a systematic comparison between three Deep Learning (DL) models in estimating the fat fraction from US images [<xref ref-type="bibr" rid="ref5">5</xref>]. The compared models were the following: (i) a deterministic Convolutional Neural Network (CNN), similar to the one in [<xref ref-type="bibr" rid="ref6">6</xref>], (ii) an MC Dropout CNN model, and (iii) a Bayesian CNN with probabilistic output.</p>
        <p>In comparison to [<xref ref-type="bibr" rid="ref6">6</xref>], the multi-center dataset was increased to 186 subjects.</p>
        <p>Regression results showed good prediction performance for all the architectures on the 5-fold test sets (Normalized RMSE of 5.87%, 5.35%, and 5.82% for the deterministic, MC Dropout, and Bayesian CNN, respectively). However, the introduction of uncertainty quantification (UQ) contributes to decreasing the percentage of mispredicted cases (from 32.4% for the classical CNN to less than 9% for the Bayesian one). Furthermore, the possibility of having access to information about the confidence with which the network produces its outputs is a great advantage, especially from the point of view of physicians who want to use neural networks for computer-aided diagnosis.</p>
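The benefit of the MC Dropout model above comes from keeping dropout active at inference time: repeated stochastic forward passes yield a predictive mean and a spread that can be used to flag unreliable estimates. A minimal, hypothetical NumPy sketch (a tiny fully connected regressor standing in for the paper's CNN; all weights and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer regressor standing in for the fat-fraction CNN
# (illustrative random weights only).
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 1))

def mc_dropout_predict(x, n_samples=200, p_drop=0.5):
    """Run several stochastic forward passes with dropout kept ON
    and return the predictive mean and standard deviation."""
    preds = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop  # Bernoulli dropout mask
        h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
        preds.append((h @ W2).item())
    preds = np.asarray(preds)
    return preds.mean(), preds.std()

x = rng.normal(size=(1, 8))
mean, std = mc_dropout_predict(x)
# std acts as an uncertainty estimate: predictions with a large std
# can be flagged for review instead of being silently mispredicted.
```

In practice a threshold on the predictive standard deviation is what allows the mispredicted-case rate to drop, as described above: high-variance outputs are deferred to the clinician.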
      </sec>
      <sec id="sec-1-2">
        <title>2.2. AI for Covid-19 Pulmonary Patterns Identification</title>
        <p>During the Coronavirus Disease 2019 (COVID-19) pandemic, High-Resolution Computed Tomography (HRCT) of the chest has been adopted as a method to visually identify two distinct abnormal pulmonary patterns: Ground Glass Opacity (GGO), characterized by increased attenuation and hazy density in lung lobes, and Consolidation, indicated by bilateral areas of lung tissue filled with fluid instead of air [<xref ref-type="bibr" rid="ref7">7</xref>]. However, these patterns appear scattered, with undefined contours, and often lack contrast with the surrounding healthy tissue.</p>
        <p>Consequently, the segmentation and quantification of pathological lung regions from HRCT data have proven to be very challenging.</p>
        <p>In [<xref ref-type="bibr" rid="ref8">8</xref>], we compared four state-of-the-art CNNs based on the encoder-decoder paradigm for the binary segmentation of COVID-19 infections (UNet [<xref ref-type="bibr" rid="ref9">9</xref>], Attention-UNet [<xref ref-type="bibr" rid="ref10">10</xref>], Recurrent–Residual UNet (R2-UNet) [<xref ref-type="bibr" rid="ref11">11</xref>], R2-Attention UNet [12]), after training and testing them on 90 HRCT volumetric scans of COVID-19 patients. The images were collected from the database of the Pisa University Hospital (in the framework of the regional project "Optimised - An Optimised Path for the Data Flow and Clinical Management of COVID-19 Patients", funded by the Tuscany Region).</p>
        <p>We conducted a comparison between them to gain insights into the mechanisms that can drive a neural model towards optimal performance for this task, as well as to identify the optimal balance between the volume of data, time, and computational resources necessary. From the results of the analysis, it can be concluded that Attention-UNet outperforms the other models, achieving the best performance of 81.93% in terms of 2D Dice score on the test set.</p>
      </sec>
      <sec id="sec-1-2a">
        <title>2.3. AI for Alzheimer disease detection</title>
        <p>On top of the work [13], the cerebrospinal fluid of 21 subjects who received a clinical diagnosis of Alzheimer’s disease (AD), as well as of 22 pathological controls, has been collected and analysed by Raman Spectroscopy (RS). The aim of this research is to understand if the Raman spectra could be used to distinguish AD from controls, after a preprocessing procedure. We applied machine learning to a set of topological descriptors extracted from the spectra, achieving a high classification accuracy of 86% (the best performing combination is the Ridge classifier applied to the persistence landscapes vectorization). Our experimentation indicates that RS and topological analysis may be effective to confirm or disprove a clinical diagnosis of Alzheimer’s disease. Also, it opens the way to possibly increasing and/or confirming the knowledge about the precise molecular events and biological pathways behind Alzheimer’s disease, e.g., by identifying the bands of the Raman spectrum relevant for AD detection.</p>
      </sec>
      <sec id="sec-1-2b">
        <title>2.4. AI for the Diagnosis of Eosinophilic Esophagitis</title>
        <p>Eosinophilic esophagitis (EoE) is a chronic disease characterized by esophageal symptoms and eosinophilic inflammation of the esophagus. Among patients with dysphagia, EoE and non-EoE patients should receive different therapies and therefore must be timely and correctly identified from the clinical history or by using more invasive procedures (endoscopic and/or histological information).</p>
        <p>In [14], an RDF-based ML model was trained on a multi-center international database (clinical and endoscopic data of 273 EoE and 55 non-EoE dysphagia patients, collected from Guy’s and St. Thomas’ Hospital NHS Foundation Trust (GSTT, London, United Kingdom), Pisa Univ. Hospital (Pisa, Italy), and Padua Univ. Hospital (Padua, Italy)) to provide indications for the investigation of EoE in adults reporting dysphagia or to inform point-of-care decision-making for performing esophageal biopsies in adults with dysphagia. The model was further evaluated on an independent cohort of 93 consecutive patients with dysphagia, resulting in an AUC of 0.90 (using clinical data) and an AUC of 0.94 (using a combination of clinical and endoscopic data). The model, re-trained on the whole dataset, has been integrated into an open-access online tool (https://webapplicationing.shinyapps.io/PointOfCare-EoE/).</p>
      </sec>
    </sec>
    <sec id="sec-1-5">
      <title>3. AI for cancer grading</title>
      <p>AI algorithms are showing potential in improving the current protocol for grading various cancers, such as breast and prostate cancer. In the following sections, we provide a brief description of our research in this area.</p>
      <sec id="sec-1-5-1">
        <title>3.1. AI for the discrimination between benign/malignant breast nodules in ABVS and DBT images</title>
        <p>Although imaging techniques are commonly used for breast cancer screening, biopsy is the only method available to categorize a breast lesion as benign or malignant. However, biopsies are invasive and costly procedures that can cause discomfort in patients [15].</p>
        <p>Radiomic analysis of biomedical images shows promise in addressing various clinical challenges, such as the early detection and classification of breast tumors.</p>
        <p>In the P.I.N.K study [16], 66 women were enrolled. Their paired Automated Breast Volume Scanner (ABVS) and Digital Breast Tomosynthesis (DBT) images, annotated with cancerous lesions, populated the first ABVS+DBT dataset. This allowed for a radiomic analysis to differentiate between malignant and benign breast lesions.</p>
        <p>Three Machine Learning (ML) methods were employed: Random Decision Forests (RDF), Support Vector Machines (SVM), and Logistic Regression (Logit). They were trained and validated using an ad hoc nested leave-one-out (LOO) cross-validation procedure to ensure a minimally biased estimation of the models’ generalization ability, even with a limited sample size. The study’s main finding highlights the superior effectiveness of the RDF model in accurately predicting tumor classification using radiomic features in both ABVS and DBT acquisitions, achieving AUC-ROC values of 89.9% with a subset of 19 features.</p>
        <p>Additionally, promising outcomes were achieved using solely textural radiomic features to train the RDF model, with AUC-ROC values of 71.8% and 74.1% for ABVS and DBT, respectively. This suggests the potential for integrating virtual biopsy into routine medical practice.</p>
      </sec>
      <sec id="sec-1-5-2">
        <title>3.2. AI for prostate cancer grading from MRI acquisitions</title>
        <p>Current methods for determining Prostate cancer (PCa) aggressiveness rely on biopsy, an invasive and uncomfortable procedure. Multi-parametric Magnetic Resonance Imaging (mpMRI) is frequently employed to get an initial assessment of the tumor. To this end, numerous studies have explored ML/DL models for automatic PCa grading from mpMRI images [17].</p>
        <p>However, developing accurate and generalizable DL models for medical imaging, where data is often scarce, presents a significant challenge. Few-shot learning (FSL) offers a promising solution, particularly since the advancements in meta-learning [18]. For this reason, we investigated FSL techniques for assessing PCa aggressiveness from mpMRI images. We proposed a two-step approach: a disentangled self-supervised learning (SSL) pre-training step for robust feature extraction, followed by meta-fine-tuning utilizing finer-grained classes, with the coarser-grained ones used in meta-testing for enhanced generalization [19]. Our approach achieved a mean AUROC of 0.821 in a 4-way (ISUP 2-5) 5-shot setting. We further explored enhancing the performance of FSL models by leveraging synthetic image generation, employing a Denoising Diffusion Probabilistic Model (DDPM).</p>
        <p>Also, we proposed a new technique to discover and exploit causality signals from images via neural networks for classification purposes [20, 21]. We model how the presence of a feature in one part of the image affects the appearance of others in different parts of the image. Our method consists of a convolutional backbone and a causality-factors extractor computing weights to enhance feature maps according to their causal influence in the scene. We evaluated our method on a dataset of prostate MRI images for cancer diagnosis and studied the effectiveness of our module both in fully-supervised and 1-shot learning. On the binary classification of cancer versus no-tumor cases, our method led to a maximum test accuracy of 0.72, representing a 5% increase over the baseline [21]. On distinguishing ISUP grades in 1-shot learning, we obtained a 0.71 AUROC for the classification of ISUP 2 vs. all the others, a 13% increase over the baseline [20]. Our attention-inspired module improved the overall classification and produced more robust XAI predictions focusing on relevant parts of the image.</p>
      </sec>
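Few-shot studies such as the one above are typically evaluated episodically (N-way, K-shot). The sketch below illustrates one generic FSL formulation, a prototypical-network-style nearest-prototype classifier on synthetic embeddings; it is not the specific meta-fine-tuning method of [19], and all sizes and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 4-way 5-shot episode: the 16-d vectors stand in for features that
# an SSL-pretrained backbone would produce (one class per ISUP grade 2-5).
n_way, k_shot, n_query, dim = 4, 5, 3, 16
class_centers = rng.normal(scale=3.0, size=(n_way, dim))

def sample(n):
    # n examples per class, scattered around each class center
    return class_centers[:, None, :] + rng.normal(size=(n_way, n, dim))

support, query = sample(k_shot), sample(n_query)

# One prototype per class = mean of its support embeddings.
prototypes = support.mean(axis=1)                       # (n_way, dim)

# Classify each query embedding by its nearest prototype.
q = query.reshape(-1, dim)                              # (n_way*n_query, dim)
dists = np.linalg.norm(q[:, None, :] - prototypes[None, :, :], axis=2)
pred = dists.argmin(axis=1)
truth = np.repeat(np.arange(n_way), n_query)
accuracy = float((pred == truth).mean())
```

Averaging this accuracy over many sampled episodes gives the kind of N-way K-shot figure reported above.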
      <sec id="sec-1-5-3">
        <title>3.3. AI for chondrosarcoma grading from Raman Spectroscopy</title>
        <p>Raman Spectroscopy (RS) allows for the observation of changes in biochemical constituents (such as proteins, lipid structures, DNA, and vitamins) among different tissues by obtaining their biochemical maps. Recently, RS has been applied to chondrogenic tumor classification with excellent results [22].</p>
        <p>Chondrogenic tumors are the second largest group of bone tumors worldwide. They are generally classified as primary chondrosarcomas when they occur in previously normal bone. Secondary chondrosarcomas result from the malignant transformation of a benign cartilaginous lesion and are classified into three grades: CS G1, CS G2 and CS G3. Enchondroma (EC) is a non-cancerous tumor. Distinguishing between EC and CS G1 is a critical issue for pathologists, as it generates many false positive and false negative diagnoses [24].</p>
        <p>In [23] we showed that the combination of persistent homology and ML techniques can support the classification of Raman spectra extracted from cancerous tissues to achieve a reliable chondrosarcoma grading.</p>
        <p>A total of 410 Raman spectra from 10 patients with primary chondrogenic tumors of the skeleton, treated at Azienda Ospedaliera Universitaria Pisana (Pisa), were used to train the machine learning models. Despite the small size of the experimental dataset, the results show that the method not only achieved high accuracy on previously unseen data samples; such a method can also be easily integrated into a Raman spectroscopic system as an automatic tool to assist clinicians in grading tumors.</p>
      </sec>
    </sec>
    <sec id="sec-1-6">
      <title>4. AI for predicting radiotherapy-induced toxicity in prostate cancer</title>
      <p>Radiotherapy is a commonly used treatment for prostate cancer (PCa). In recent years, there has been a surge of interest in leveraging ML methods to analyze radiomic features derived from multiparametric MRI (mpMRI) scans of PCa. However, little attention has been given to predicting radiation-induced toxicity [25] before starting radiotherapy. In the work carried out in the framework of the EU H2020 ProCAncer-I project [26], we aimed to predict radiotherapy-induced side effects, including both genito-urinary and rectal toxicity.</p>
      <p>A RDF model was trained on radiomic features extracted from 134 T2-weighted Magnetic Resonance Imaging (MRI) images of patients who underwent radiotherapy. The MRI scans were obtained from ProstateNet (https://prostatenet.eu), the repository designed within the framework of the EU H2020 ProCAncer-I project.</p>
      <p>Data regarding the presence and severity of rectal and urinary side effects after treatment were also included.</p>
      <p>The results demonstrated that radiomics-based approaches can be effective in predicting radiotherapy-induced side effects, achieving an AUROC of 70.8%. Also, a set of simplified model variants was used to estimate epistemic uncertainty and provide a reliability score to complement the main model’s prediction.</p>
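A reliability score of the kind mentioned above can be derived, for instance, from the disagreement among the simplified model variants. A hedged, hypothetical sketch (all probabilities below are made up for illustration):

```python
import numpy as np

# Hypothetical predicted toxicity probabilities: one from the main model,
# the others from simplified variants (e.g., models seeing feature subsets).
main_prob = 0.74
variant_probs = np.array([0.71, 0.78, 0.69, 0.80, 0.73])

# Epistemic-uncertainty proxy: disagreement (standard deviation) among variants.
epistemic = variant_probs.std()

# Map disagreement to a [0, 1] reliability score; for probabilities the
# standard deviation can never exceed 0.5, so normalize by that bound.
reliability = 1.0 - min(epistemic / 0.5, 1.0)
```

When the variants agree, the score approaches 1 and the main prediction can be trusted more; strong disagreement pushes the score toward 0 and flags the case for manual review.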
    </sec>
    <sec id="sec-2">
      <title>5. AI for the newborn and infant</title>
      <sec id="sec-2-1">
        <title>5.1. Thermal imaging for stress and well-being</title>
        <p>In this field, we also investigated the use of thermal imaging for stress discrimination [27, 28], with the aim of detecting stress in adults under stress stimuli, and of assessing the efficacy of horticultural therapy for female adolescents affected by anorexia nervosa. Notably, we are moving to a more challenging task: deepening the understanding of thermal profiles in the newborn (possibly pre-term) in order to develop or improve new treatment techniques related to the maturation of the newborn thermo-regulation system. A study protocol, joint work with the lab NINA and the NICU of Santa Chiara Hospital in Pisa, is under review.</p>
      </sec>
      <sec id="sec-2-2">
        <title>5.2. AI for baby facial gestures recognition</title>
        <p>One open issue related to children’s research concerns neonatal imitation (NI), namely the primitive ability of infants to mirror the actions of others [29]. The question of whether imitation is present from birth is of great importance as it can foster a deeper understanding of how it contributes to later developmental outcomes, which is crucial for the preterm newborn.</p>
        <p>Computer vision methods may unobtrusively detect and analyze the most relevant facial features, thus providing clinicians (or parents, caregivers, etc.) with objective data about children’s health status [30]. However, for infants, this is a challenging task, due to significant changes in their facial morphology compared to adults, and to the increased complexity in data collection caused by unpredictable variations in their facial poses [31].</p>
        <p>In [32], we analyzed videos of 10 newborns (8 preterm, 2 at term, ≤ 4 weeks post-term equivalent age) performing tasks such as tongue protrusion and mouth opening, in order to classify open/closed mouths. The videos were analyzed at frame level, for a total of 41000 labeled frames. In each frame, we identified mouth landmarks and cropped the images around the mouth; then we applied an image preprocessing pipeline (which included mouth orientation, resizing, and brightness and contrast enhancement, see Figure 2) to improve classification performance. A CNN was trained using a ten-fold cross-validation, which resulted in highly reliable results: accuracy, precision, and recall over 92% on unseen data.</p>
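A landmark-based crop-and-enhance step like the one described can be sketched as follows. The frame, landmark coordinates, and margin are hypothetical, and the orientation and resizing steps of the actual pipeline are omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic grayscale frame and hypothetical mouth-landmark coordinates
# (row, col); a real pipeline would detect these with a landmark model.
frame = rng.integers(0, 256, size=(120, 160)).astype(np.float64)
landmarks = np.array([[70, 60], [72, 90], [80, 75], [66, 75]])

def crop_and_enhance(frame, landmarks, margin=10):
    """Crop around the mouth landmarks and stretch brightness/contrast
    to the full [0, 1] range (orientation and resizing omitted)."""
    r0 = max(landmarks[:, 0].min() - margin, 0)
    r1 = min(landmarks[:, 0].max() + margin, frame.shape[0])
    c0 = max(landmarks[:, 1].min() - margin, 0)
    c1 = min(landmarks[:, 1].max() + margin, frame.shape[1])
    crop = frame[r0:r1, c0:c1]
    crop = (crop - crop.min()) / (crop.max() - crop.min() + 1e-9)  # contrast stretch
    return crop

patch = crop_and_enhance(frame, landmarks)
```

Normalizing every training patch this way removes much of the brightness variability across recording sessions before the patches are fed to the classifier.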
      </sec>
    </sec>
    <sec id="sec-3">
      <title>6. Conclusions</title>
      <p>AI has great potential to improve care and health systems, especially for diagnostic tasks, even if it faces very important technical issues such as unbalanced datasets, data drift, heterogeneous acquisition protocols, and input data and annotations of variable quality. Also, future AI solutions should involve healthcare professionals and caregivers as designers and users, comply with health-related regulations, improve transparency and privacy, integrate with the healthcare technological infrastructure, explain their decisions to the users, and establish evaluation metrics and design guidelines.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>This publication is based upon the work carried out within the COST Action GoodBrother (CA19121), the EU H2020 projects ProCAncer-I (GA 952159) and FAITH (GA 101135932), and the PAR FAS Tuscany Region projects NAVIGATOR, PRAMA and OPTIMISED.</p>
      <sec id="sec-4-1">
        <title>References</title>
        <p>[11] … network based on u-net (r2u-net) for medical image segmentation, arXiv preprint arXiv:1802.06955 (2018).</p>
        <p>[12] Q. Zuo, S. Chen, Z. Wang, R2AU-Net: Attention recurrent residual convolutional neural network for multimodal medical image segmentation, Security and Comm. Networks 2021 (2021) 1–10.</p>
        <p>[13] F. Conti, M. Banchelli, V. Bessi, C. Cecchi, F. Chiti, S. Colantonio, C. D’Andrea, M. de Angelis, D. Moroni, B. Nacmias, et al., Alzheimer disease detection from Raman spectroscopy of the cerebrospinal fluid via topological machine learning, Eng. Proc. 51 (2023) 14.</p>
        <p>[14] P. Visaggi, G. Del Corso, F. B. Svizzero, M. Ghisa, S. Bardelli, A. Venturini, D. S. Donati, B. Barberio, E. Marciano, M. Bellini, et al., Artificial intelligence tools for the diagnosis of eosinophilic esophagitis in adults reporting dysphagia: development, external validation, and software creation for point-of-care use, The J. of Allergy and Clinical Immunology: In Practice (2023).</p>
        <p>[15] J. M. Hemmer, J. C. Kelder, H. P. van Heesewijk, Stereotactic large-core needle breast biopsy: analysis of pain and discomfort related to the biopsy procedure, European Rad. 18 (2008) 351–354.</p>
        <p>[16] G. Del Corso, D. Germanese, C. Caudai, G. Anastasi, P. Belli, A. Formica, A. Nicolucci, S. Palma, M. A. Pascali, S. Pieroni, et al., Adaptive machine learning approach for importance evaluation of multimodal breast cancer radiomic features, J. of Imaging Inf. in Med. (2024) 1–10.</p>
        <p>[17] M. He, Y. Cao, C. Chi, X. Yang, R. Ramin, S. Wang, G. Yang, O. Mukhtorov, L. Zhang, A. Kazantsev, et al., Research progress on deep learning in magnetic resonance imaging based diagnosis and treatment of prostate cancer: a review on the current status and perspectives, Frontiers in Oncology 13 (2023) 1189370.</p>
        <p>[18] Y. Wang, Q. Yao, J. T. Kwok, L. M. Ni, Generalizing from a few examples: A survey on few-shot learning, ACM Computing Surveys (CSUR) 53 (2020) 1–34.</p>
        <p>[19] E. Pachetti, S. A. Tsaftaris, S. Colantonio, Boosting few-shot learning with disentangled self-supervised learning and meta-learning for medical image classification, arXiv preprint arXiv:2403.17530 (2024).</p>
        <p>[20] G. Carloni, E. Pachetti, S. Colantonio, Causality-driven one-shot learning for prostate cancer grading from MRI, in: Proc. of the IEEE/CVF Int. Conf. on Computer Vision, 2023, pp. 2616–2624.</p>
        <p>[21] G. Carloni, S. Colantonio, Exploiting causality signals in medical images: A pilot study with empirical results, Expert Sys. with Appl. (2024) 123433.</p>
        <p>[22] M. D’Acunto, R. Gaeta, R. Capanna, A. Franchi, Contribution of Raman spectroscopy to diagnosis and grading of chondrogenic tumors, Scientific Reports 10 (2020) 2155.</p>
        <p>[23] F. Conti, M. D’Acunto, C. Caudai, S. Colantonio, R. Gaeta, D. Moroni, M. A. Pascali, Raman spectroscopy and topological machine learning for cancer grading, Scientific Reports 13 (2023) 7282.</p>
        <p>[24] C. D. Savci-Heijink, A. H. Cleven, J. V. Bovée, Benign and low-grade cartilaginous tumors: An update on differential diagnosis, Diagnostic Histopathology 28 (2022) 501–509.</p>
        <p>[25] H. Abdollahi, S. R. Mahdavi, B. Mofid, M. Bakhshandeh, A. Razzaghdoust, A. Saadipoor, K. Tanha, Rectal wall MRI radiomics in prostate cancer patients: prediction of and correlation with early rectal toxicity, Int. J. of Rad. Biology 94 (2018) 829–837.</p>
        <p>[26] G. Del Corso, E. Pachetti, R. Buongiorno, A. C. Rodrigues, D. Germanese, M. A. Pascali, J. Almeida, N. Rodrigues, M. Tsiknakis, N. Papanikolaou, D. Regge, K. Marias, P.-I. Consortium, S. Colantonio, Radiomics-based reliable predictions of side effects after radiotherapy for prostate cancer, in: Accepted to ISBI 2024 - the 21st Int. Symp. on Biomedical Imaging, 2024.</p>
        <p>[27] F. Gioia, M. A. Pascali, A. Greco, S. Colantonio, E. P. Scilingo, Discriminating stress from cognitive load using contactless thermal imaging devices, in: 2021 43rd Ann. Int. Conf. of the IEEE Eng. in Med. and Biology Soc. (EMBC), 2021, pp. 608–611.</p>
        <p>[28] O. Curzio, L. Billeci, V. Belmonti, S. Colantonio, L. Cotrozzi, C. F. De Pasquale, M. A. Morales, C. Nali, M. A. Pascali, F. Venturi, A. Tonacci, N. Zannoni, S. Maestro, Horticultural therapy may reduce psychological and physiological stress in adolescents with anorexia nervosa: A pilot study, Nutrients 14 (2022).</p>
        <p>[29] A. N. Meltzoff, M. K. Moore, Imitation of facial and manual gestures by human neonates, Science 198 (1977) 75–78.</p>
        <p>[30] D. Germanese, S. Colantonio, M. Del Coco, P. Carcagnì, M. Leo, Computer vision tasks for ambient intelligence in children’s health, Information 14 (2023).</p>
        <p>[31] D. Kuefner, V. Macchi Cassia, M. Picozzi, E. Bricolo, Do all kids look alike? Evidence for an other-age effect in adults, J. of Exp. Psychology: Human Perception and Performance 34 (2008) 811.</p>
        <p>[32] G. Del Corso, D. Germanese, M. A. Pascali, S. Bardelli, A. Cuttano, F. Festante, A. Guzzetta, L. Rocchitelli, S. Colantonio, Facial landmark identification and data preparation can significantly improve the extraction of newborns’ facial features, in: Submitted, 2023.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Koul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Singla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Ijaz</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence in disease diagnosis: a systematic literature review, synthesizing framework and future research agenda</article-title>
          ,
          <source>J. of ambient intell. and humanized comp</source>
          .
          <volume>14</volume>
          (
          <year>2023</year>
          )
          <fpage>8459</fpage>
          -
          <lpage>8486</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Goldberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Reig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Lewin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Heacock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. L.</given-names>
            <surname>Heller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Moy</surname>
          </string-name>
          ,
          <article-title>New horizons: artificial intelligence for digital breast tomosynthesis</article-title>
          ,
          <source>RadioGraphics</source>
          <volume>43</volume>
          (
          <year>2022</year>
          )
          <elocation-id>e220060</elocation-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Byra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Heba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Andre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Erdman</surname>
            <suffix>Jr</suffix>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Loomba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. B.</given-names>
            <surname>Sirlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. D.</given-names>
            <surname>O'Brien</surname>
            <suffix>Jr</suffix>
          </string-name>
          ,
          <article-title>Noninvasive diagnosis of nonalcoholic fatty liver disease and quantification of liver fat with radiofrequency ultrasound data using one-dimensional convolutional neural networks</article-title>
          ,
          <source>Radiology</source>
          <volume>295</volume>
          (
          <year>2020</year>
          )
          <fpage>342</fpage>
          -
          <lpage>350</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mancini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Prinster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Annuzzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Liuzzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Giacco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Medagli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cremone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Clemente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Maurea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Riccardi</surname>
          </string-name>
          , et al.,
          <article-title>Sonographic hepatic-renal ratio as indicator of hepatic steatosis: comparison with 1H magnetic resonance spectroscopy</article-title>
          ,
          <source>Metabolism</source>
          <volume>58</volume>
          (
          <year>2009</year>
          )
          <fpage>1724</fpage>
          -
          <lpage>1730</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Del Corso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Pascali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Caudai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>De Rosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Salvati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mancini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ghiadoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bonino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Brunetto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Colantonio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Faita</surname>
          </string-name>
          ,
          <article-title>ANN uncertainty estimates in assessing fatty liver content from ultrasound data</article-title>
          ,
          <source>Submitted</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Colantonio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Salvati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Caudai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bonino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>De Rosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Pascali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Germanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Brunetto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Faita</surname>
          </string-name>
          ,
          <article-title>A deep learning approach for hepatic steatosis estimation from ultrasound imaging</article-title>
          , in:
          <string-name>
            <given-names>K.</given-names>
            <surname>Wojtkiewicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Treur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Pimenidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maleszka</surname>
          </string-name>
          (Eds.),
          <source>Adv. in Comp. Collective Intelligence</source>
          , Springer International Publishing, Cham,
          <year>2021</year>
          , pp.
          <fpage>703</fpage>
          -
          <lpage>714</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Ai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Lv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Tao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <article-title>Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases</article-title>
          ,
          <source>Radiology</source>
          <volume>296</volume>
          (
          <year>2020</year>
          )
          <fpage>E32</fpage>
          -
          <lpage>E40</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Buongiorno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Del Corso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Germanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Colligiani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Python</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Romei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Colantonio</surname>
          </string-name>
          ,
          <article-title>Enhancing COVID-19 CT image segmentation: A comparative study of attention and recurrence in UNet models</article-title>
          ,
          <source>J. of Imaging</source>
          <volume>9</volume>
          (
          <year>2023</year>
          )
          <fpage>283</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>O.</given-names>
            <surname>Ronneberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fischer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Brox</surname>
          </string-name>
          ,
          <article-title>U-Net: Convolutional networks for biomedical image segmentation</article-title>
          , in:
          <string-name>
            <given-names>N.</given-names>
            <surname>Navab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hornegger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. M.</given-names>
            <surname>Wells</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Frangi</surname>
          </string-name>
          (Eds.),
          <source>Med. Image Comp. and Computer-Assisted Intervention - MICCAI 2015</source>
          , Springer International Publishing,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>O.</given-names>
            <surname>Oktay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schlemper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. L.</given-names>
            <surname>Folgoc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Heinrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Misawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>McDonagh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. Y.</given-names>
            <surname>Hammerla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kainz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Glocker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rueckert</surname>
          </string-name>
          ,
          <article-title>Attention U-Net: Learning where to look for the pancreas</article-title>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M. Z.</given-names>
            <surname>Alom</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hasan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yakopcic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Taha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. K.</given-names>
            <surname>Asari</surname>
          </string-name>
          ,
          <article-title>Recurrent residual convolutional neural</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>