<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Medical image interpretation challenges and research activities of the tAImedIA group at UniBS</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alberto Signoroni</string-name>
          <email>alberto.signoroni@unibs.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mattia Savardi</string-name>
          <email>mattia.savardi@unibs.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Davide Farina</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergio Benini</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Edoardo Coppola</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Damiano Ferrari</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mauro Massussi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Salvatore Curello</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Svanera</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giuseppe D'Ancona</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ASST Spedali Civili di Brescia, Department of Cardiology</institution>
          ,
          <addr-line>Brescia</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Brescia, Department of Information Engineering</institution>
          ,
          <addr-line>Brescia</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Brescia, Department of Medical and Surgical Specialities, Radiological Sciences and Public Health</institution>
          ,
          <addr-line>Brescia</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Glasgow, School of Psychology &amp; Neuroscience</institution>
          ,
          <addr-line>Glasgow</addr-line>
          ,
          <country country="GB">UK</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>Vivantes Klinikum, Department of Cardiology and Cardiovascular Clinical Research Unit</institution>
          ,
          <addr-line>Berlin</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff6">
          <label>6</label>
        </aff>
      </contrib-group>
      <abstract>
        <p>The Trustworthy-AI Medical Image Analysis group at the University of Brescia is a team dedicated to advancing the field of medical image analysis through collaborative research activities. The group's efforts are concentrated on the development of innovative systems and solutions to address complex image interpretation challenges, specifically within two imaging modalities: Brain MRI and Chest X-ray, and their corresponding anatomical districts. The group's research efforts are aimed at improving the accuracy, speed, and efficiency of image interpretation, with a focus on ensuring the reliability and safety of AI-assisted medical decision-making processes. By leveraging advanced deep learning techniques, the group aims to develop cutting-edge algorithms that can accurately and efficiently analyze medical images, aiding in the detection, diagnosis, and treatment of various medical conditions.</p>
      </abstract>
      <kwd-group>
        <kwd>Deep learning</kwd>
        <kwd>Magnetic Resonance Imaging</kwd>
        <kwd>Chest X-ray</kwd>
        <kwd>Brain segmentation</kwd>
        <kwd>Cortical thickness</kwd>
        <kwd>COVID-19 prognosis</kwd>
        <kwd>Cardiovascular risk factors</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>Workshop organized by CINI, May 29–31, 2023, Pisa, Italy.</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>The research on deep learning architectures and methods
represents the mainstream in the medical image analysis
domain, with countless academic contributions and an
increasingly relevant market sector in the field of digital
healthcare management.</p>
      <p>In this report, we summarize some of the activities
of our research group in the fields of Brain MRI and
Chest X-rays, emphasizing the motivation of the adopted
approaches, the main results, and the collaborative nature
of the works.</p>
      <p>All the described activities have in common the fact
that they involve some challenging aspects related to the
presence of unmet needs on both new and consolidated
diagnostic image interpretation tasks.</p>
      <p>On Brain MRI volumes we fight the scanner effect
to obtain fast and robust multi-site brain segmentation
(Sec. 2), and we present preliminary activities tackling some
open issues about cortical thickness estimation (Sec. 3).
On Chest X-rays, we present activities on trustworthy AI
for COVID-19 severity estimation and prognosis (Sec. 4)
and on the prediction of cardiovascular risk factors (Sec. 5).</p>
    </sec>
    <sec id="sec-2-1">
      <title>2. Fast and robust multi-site brain segmentation</title>
      <p>Brain MRI volumes acquired at different sites exhibit
site-specific intensity distributions and unique artefacts.
To mitigate this site-dependency, often referred to as the
scanner effect, we propose LOD-Brain, a 3D convolutional
neural network with progressive levels-of-detail (LOD),
able to segment brain data from any site [1]. Coarser
network levels are responsible for learning a robust
anatomical prior, helpful in identifying brain structures and
their locations, while finer levels refine the model to handle
site-specific intensity distributions and anatomical
variations. We ensure robustness across sites by training the
model on an unprecedentedly rich dataset aggregating data
from open repositories: almost 27,000 T1w volumes from
around 160 acquisition sites, at 1.5–3T, from a population
spanning from 8 to 90 years old. Extensive tests demonstrate
that LOD-Brain produces state-of-the-art results, with no
significant difference in performance between internal and
external sites, and robustness to challenging anatomical
variations. Its portability paves the way for large-scale
applications across different healthcare institutions, patient
populations, and imaging technology manufacturers.</p>
      <sec id="sec-2-1-1">
        <title>2.1. Methods</title>
        <p>We introduce LOD-Brain, a progressive level-of-detail
network for training a robust brain MRI segmentation
model from a huge variety of multi-site and multi-vendor
data. The LOD-Brain architecture is organised on multiple
levels of detail (LOD), as shown in Fig. 1. Each level is a
convolutional neural network (CNN) that processes 3D
brain data at a different scale, obtained by progressively
downsampling the input volume. Thanks to the rich
variability of brain samples coming from 70 datasets from
different MRI acquisition sites, the proposed architecture
learns, at lower levels, a robust brain anatomical prior.
Concurrently, higher levels handle site-specific intensity
distributions and scanner artefacts. Through inter-level
connections between networks and a bottom-up training
procedure, the architecture integrates contributions from
all levels to produce an accurate and fast segmentation.</p>
      </sec>
      <sec id="sec-2-1-2">
        <title>2.2. Results</title>
        <p>LOD-Brain shows outstanding generalisation capabilities,
as it performs better than other state-of-the-art solutions
on almost every novel site, with no need for retraining
nor fine-tuning, and with no relevant performance
offset in segmenting either internal or external sites.
Furthermore, it proves to be general and robust across sites
against different population demographics, anatomical
challenges, clinical conditions, and technical
specifications (e.g., field strength, manufacturer).</p>
        <p>As an open-source tool, LOD-Brain can be used
off-the-shelf on unseen scans from novel sites. Segmentation
masks are returned very quickly (a few seconds on a
GPU) thanks to a reduced number of model parameters
(300 ) compared to other state-of-the-art solutions.
A comparative assessment of our method against
state-of-the-art techniques (we use FreeSurfer [2] as silver
GT reference) is proposed here in terms of both brain
segmentation performance and model complexity. The
considered benchmark methods are: QuickNat [<xref ref-type="bibr" rid="ref8">3</xref>], SynthSeg
[4], 3D-UNet [5], CEREBRUM [6], FastSurferCNN [7].
Fig. 2 shows the obtained results on the whole testing
set, grouped by segmented brain structure (grey matter,
basal ganglia, white matter, ventricles, cerebellum, and
brainstem). The obtained results highlight LOD-Brain as
one of the most competitive methods on all brain labels, as
it yields the best scores in almost all target structures and
on the majority of external datasets with good-quality
ground truth labels. The number of parameters for each
model is also reported, highlighting LOD-Brain as having
the best overall performance-to-complexity ratio. Many
more details about methods and results, as well as code,
model, and demos, are available on the project website.
For example, it is relevant to note the high performance
achieved on the ABCD dataset, despite it including
volumes from 32 diverse scanners, previously skull-stripped
and aligned to the MNI152 reference space (a common
procedure in this domain).</p>
      </sec>
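      <p>The progressive coarse-to-fine scheme described above can be sketched as follows. This is a minimal, illustrative sketch in plain numpy with a stand-in sigmoid "network" per level and hypothetical names (downsample, upsample, level_net, lod_forward); it is not the actual LOD-Brain implementation. Each level processes the volume at its own scale and receives the upsampled output of the coarser level as an anatomical prior.</p>
      <preformat>
```python
import numpy as np

def downsample(vol, factor):
    """Average-pool a 3D volume by an integer factor (builds the LOD pyramid)."""
    d, h, w = (s // factor for s in vol.shape)
    v = vol[:d * factor, :h * factor, :w * factor]
    return v.reshape(d, factor, h, factor, w, factor).mean(axis=(1, 3, 5))

def upsample(vol, factor):
    """Nearest-neighbour upsampling back to the next finer grid."""
    return vol.repeat(factor, axis=0).repeat(factor, axis=1).repeat(factor, axis=2)

def level_net(vol, prior, bias=-0.5):
    """Stand-in for one per-level CNN: fuses the volume at this scale with
    the coarser level's prediction (the learned 'anatomical prior')."""
    return 1.0 / (1.0 + np.exp(-(vol + prior + bias)))

def lod_forward(volume, n_levels=3):
    """Bottom-up pass: the coarsest level runs first; each finer level
    receives the upsampled coarser output and refines it."""
    out = np.zeros_like(downsample(volume, 2 ** (n_levels - 1)))
    for level in reversed(range(n_levels)):
        scale = 2 ** level
        vol_at_scale = downsample(volume, scale) if scale > 1 else volume
        prior = out if out.shape == vol_at_scale.shape else upsample(out, 2)
        out = level_net(vol_at_scale, prior)
    return out  # voxel-wise probabilities at full input resolution
```
      </preformat>
      <p>In the real model each level is a trained 3D CNN and inter-level connections are learned; the sketch only shows how a bottom-up pass integrates contributions across scales.</p>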
    </sec>
    <sec id="sec-3">
      <title>3. A method for estimating cortical thickness in Brain MRI</title>
      <p>Studying brain anatomical deviations from normal
progression along the lifespan is essential to understand
inter-individual variability and its relation to the onset
and progression of several clinical conditions [8]. Among
available quantitative measurements, mean cortical
thickness across the brain has been associated with normal
ageing and with neurodegenerative conditions like mild
cognitive impairment, Alzheimer’s disease, frontotemporal
dementia, Parkinson’s disease, amyotrophic lateral
sclerosis, and vascular cognitive impairment. Automatic
techniques, such as FreeSurfer [2] and the CAT12 Toolbox
[9], offer out-of-the-box cortical thickness estimates, but
with an excessively long computational time (up to 10
hours per volume). Moreover, comparison studies have
found systematic differences between these approaches
[10], with discrepancies particularly pronounced in
clinical data [11], questioning the reliability of these CT
estimations. As more and more studies in medicine and
neuroscience analyse hundreds to thousands of brain
MRI scans, there is a growing need for automatic, fast,
and reliable tools for cortical thickness estimation.</p>
      <sec id="sec-3-1">
        <title>3.1. Methods</title>
        <p>We propose a method for estimating cortical thickness
from MRI in just a few seconds [12]. The proposed
framework, shown in Figure 3, exploits our recent
achievements in deep learning segmentation methods [6, 1] for
extracting grey and white matter segmentation masks
and the related probability maps from an MRI T1w
volume. All these volumes are given as inputs to a
Convolutional Neural Network trained to compute both the
external grey matter surface (or pial) and the related
thickness.</p>
        <p>Figure 4: (a) Visual results of the FreeSurfer mesh and CT
overlay, the FreeSurfer mesh and our DNN method overlay,
and our DNN method mesh and overlay. (b) Comparison of
the distributions of the cortical thickness values of 12
left-hemisphere regions for FreeSurfer (blue) and our DNN
method (orange) on one testing subject, in mm. Dotted lines
represent average values; higher symmetry in distributions
denotes higher region-wise cortical thickness similarity.
Similar results are obtained for the right hemisphere and
other subjects.</p>
        <p>Figure 3: Overview of the framework: a T1w volume is
processed by LOD-Brain to obtain segmentation and
probability maps; a DNN then predicts the pial surface level
set, WM distances, and the cortical thickness, followed by
mesh creation, post-processing, and trilinear interpolation.</p>
        <p>The supervised model is trained with volumes
obtained by FreeSurfer [2] as ground truth. The network
architecture resembles a 3D U-Net, with 4 levels of
convolutional layers, and two output branches predicting the
pial surface and the cortical thickness. Training,
validation, and testing volumes are obtained from the AOMIC
dataset, counting 1311, 100, and 500 volumes respectively.</p>
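        <p>For intuition about the quantity the network is trained to predict, cortical thickness can also be approximated with a classical, non-learned baseline directly from boolean grey/white matter masks using Euclidean distance transforms. The sketch below (hypothetical helper approx_cortical_thickness, scipy-based) is only such an illustrative baseline, not the DNN method of [12].</p>
        <preformat>
```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def approx_cortical_thickness(gm_mask, wm_mask, voxel_mm=1.0):
    """Illustrative classical proxy (not the paper's DNN): for each
    grey-matter voxel, distance to the white matter plus distance to the
    outer (pial) boundary, in millimetres. Masks are boolean 3D arrays."""
    # Distance from every voxel to the nearest white-matter voxel.
    dist_to_wm = distance_transform_edt(~wm_mask) * voxel_mm
    # For brain voxels, distance to the nearest background voxel
    # approximates the distance to the pial surface.
    brain = gm_mask | wm_mask
    dist_to_pial = distance_transform_edt(brain) * voxel_mm
    # The thickness proxy is only defined on the grey-matter ribbon.
    return np.where(gm_mask, dist_to_wm + dist_to_pial, 0.0)
```
        </preformat>
        <p>Such distance-based proxies are crude compared to surface-based estimates, which is precisely why a fast learned predictor of the pial surface and thickness is attractive.</p>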
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Results</title>
        <p>In Figure 4-(a), we show qualitative results highlighting
how our method performs with respect to FreeSurfer,
in both the mesh generation and the cortical thickness
estimation. In Figure 4-(b), we compare numerically the
cortical thickness estimation distributions obtained with
FreeSurfer and our method on a testing subject. Our DNN
method [12] is the first DL-based approach for cortical
thickness estimation on structural MRI. The extraction
of cortical thickness distributions in just a few seconds
unlocks the ability to quickly draw population
trajectories for thousands of healthy subjects’ data, creating an
atlas with different distributions for different brain areas.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Trustworthy AI for COVID-19 severity estimation and prognosis</title>
      <p>Although during the COVID-19 pandemic the AI-based
interpretation of CXRs focused largely on COVID-19
diagnosis, few studies addressed other relevant tasks, such
as severity estimation, deterioration, and prognosis, while
also trying to explain the models’ decisions. A recent
international hackathon sponsored by CINI Lab AIIS during
the Dubai Expo 2020 sought to develop machine learning
(ML) models to predict COVID-19 prognosis and explain
their predictions in a clinically interpretable manner. The
hackathon dataset included CXRs and clinical features
collected during triage for a large number of subjects. To
calculate the prognostic value, a deep learning model
estimated the lung compromise degree from the CXRs, which
was considered alongside the clinical features. Then, we
trained and evaluated multiple models to identify the
best-performing, fine-tuning them before inference and
generating visual and numerical explanations to justify
their predictions. Our model achieved high accuracy,
ranking second in the final rankings, with 75% sensitivity
and 73.9% specificity. In terms of explainability, it
was agreed to be the most interpretable by health
professionals and was ranked first. Our study [13] highlights
the potential of ML models in helping physicians
formulate trustworthy COVID-19 prognoses, contributing to
the efforts to improve the allocation of limited healthcare
resources.</p>
      <sec id="sec-4-1">
        <title>4.1. Methods</title>
        <p>The dataset included a blind test set and a training set
with more than 1100 subjects, characterized by 38 clinical
features and a CXR image. After imputing missing values
in the former and improving the quality of the latter, we
exploited BSNet [14] to predict the multi-regional lung
compromise index Brixia-score [15] for each training
subject from its CXR. A post-hoc trustworthiness assessment,
called Z-Inspection® [16], was applied to this network
and its deployment in the radiology department of the
ASST Spedali Civili clinic in Brescia, Italy, during the
pandemic. The predicted Brixia-score and other
parameters were found to be clinically significant by a
model-based feature extraction procedure and constituted the
feature set on which multiple models were trained. Once
the best-performing model was identified on an internal
validation set, we employed it to predict the prognosis for
the subjects in the test set. Finally, we produced both
visual and numerical explanations to justify the model’s
predictions from both a global and a patient-specific perspective.</p>
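        <p>The tabular part of the pipeline (impute missing clinical values, train candidate models, derive global explanations) can be sketched as follows. All data and names here are hypothetical stand-ins, and permutation importance is used as a widely available attribution method in place of the SHAP values [17] actually used in the study.</p>
        <preformat>
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in for the hackathon data: a few clinical features,
# where feature 0 plays the role of a strongly prognostic score.
X = rng.normal(size=(400, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=400) > 0).astype(int)
X[rng.random(X.shape) > 0.9] = np.nan          # inject missing values

X_imp = SimpleImputer(strategy="median").fit_transform(X)   # imputation step
X_tr, X_te, y_tr, y_te = train_test_split(X_imp, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Global explanation: permutation importance on held-out data quantifies
# how much shuffling each feature degrades the model's score.
imp = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
```
        </preformat>
        <p>On real data, the DL-derived Brixia-score would enter this feature matrix alongside the triage clinical features before model selection.</p>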
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Results</title>
        <p>The best-performing model was a Random Forest (RF).
The RF proved to be accurate on the test set and was
ranked second in the final rankings with 75% and 73.9%
in sensitivity and specificity, respectively. From a global
perspective, the most important features driving our RF’s
decisions were blood pressure, Brixia-score, and
LDH enzyme concentration. Conversely, from a
patient-specific perspective, we used SHapley Additive
exPlanations (SHAP [17]) value-based charts to justify the RF’s
predictions. Such charts, of which an example is depicted
in Fig. 5, show which clinical features pushed the RF to
predict a certain prognosis, how “strongly”, and which
ones pushed it to predict the opposite prognosis.
Finally, the last patient-specific explanation, shown in
Fig. 6, was provided by the explainability map produced
by BSNet highlighting which regions of the lungs
contributed most to which local severity score.</p>
        <p>All these explanations were agreed to be highly
interpretable by a panel of health professionals and
radiologists. For this reason, our model was ranked first in the
final clinical explainability ranking.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. AI to predict cardiovascular risk factors from Chest X-rays</title>
      <p>Coronary artery disease (CAD) is the single leading cause
of mortality, premature death, and morbidity worldwide.</p>
      <p>Artificial intelligence (AI) could help identify markers
present within first-line diagnostic imaging routinely
performed in patients referred for suspected angina, such as
chest X-rays (CXRs). The CXR modality is ubiquitous and
carries a plethora of information concerning the patients’
health status. The objective of our work is to train and
clinically validate deep learning (DL) algorithms for
detecting the presence of significant CAD on CXRs [18].</p>
      <sec id="sec-5-1">
        <title>5.1. Methods</title>
        <p>Data from patients undergoing chest radiography and
coronary angiography were retrospectively analysed.
A deep convolutional neural network (DCNN) was
designed to detect significant CAD from the patient
posteroanterior/anteroposterior chest radiograph. The
DCNN was trained for binary classification of severe
CAD absence/presence (at least one diseased coronary
vessel with ≥ 70% stenosis). Coronary angiography
reports were used as the ground truth. Sensitivity,
specificity, and area under the receiver operating characteristic
curve (AUC) of the DCNN were calculated. Multivariate
analysis was performed to identify independent
correlations among the presence of significant CAD (dependent
variable), the DCNN prediction, and CAD risk factors.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Results</title>
        <p>Information on 7728 patients referred for suspected
angina was reviewed. Severe CAD was present in 4482
patients (58%; 1% left main, 28% one vessel, 16% two vessels,
and 12% three vessels). Patients were randomly divided for
training (70%; n = 5454) and fine-tuning/testing (10%; n =
773) of the algorithm. Internal validation was performed
with the remaining patients (20%; n = 1501). Using an
operating cut-point with high sensitivity, the DCNN had a
sensitivity of 0.90 and specificity of 0.31 to detect significant
CAD in the internal validation group (AUC 0.73; 95% CI
DeLong, 0.69–0.76). Adding patient age and angina status
to the AI chest radiograph interpretation improved the
prediction (AUC 0.77; 95% CI DeLong, 0.74–0.80). ROC
curves for the binary CAD classification are reported in
Fig. 7. At logistic regression analysis, age was an
independent predictor of CAD (OR 1.013; 95% CI 1.004–1.023;
p = 0.006), while BMI was not (OR 1.003; 95% CI
0.977–1.029; p = 0.842) (Table 2). Attention maps were
created considering the DCNN with the highest performance.
Heat map activations are primarily localized to the cardiac
silhouette, left ventricular apex, pulmonary bases,
pulmonary parenchyma, costophrenic sinuses, pulmonary hila,
thoracic aorta, supra-aortic vessels, and clavicle region
(Fig. 8, panels A–F).</p>
        <p>Figure 7: Receiver operating characteristic (ROC)
curves for the binary classification of the presence of
significant coronary artery disease (CAD): AUCs of 0.77 for
AI + angina type, 0.76 for AI + Diamond-Forrester, 0.73
for AI alone, and 0.70 for Diamond-Forrester alone.</p>
        <p>Figure 8: (A–F) Heat-maps of 6 patients affected by
severe CAD; areas suggestive of CAD presence are
highlighted in degrees of green tonality.</p>
      </sec>
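      <p>The screening-oriented operating point reported above (high sensitivity traded against specificity) can be illustrated with a ROC analysis. The data below are synthetic stand-ins for DCNN scores, not the study’s data, and the variable names are local to this sketch.</p>
      <preformat>
```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)

# Synthetic stand-in for DCNN scores on an internal validation set
# (diseased cases tend to score higher).
y_true = rng.integers(0, 2, size=1500)
scores = rng.normal(loc=0.8 * y_true, scale=1.0)

auc = roc_auc_score(y_true, scores)
fpr, tpr, thr = roc_curve(y_true, scores)

# Screening-oriented operating cut-point: the first threshold whose
# sensitivity (TPR) reaches 0.90, trading away specificity.
idx = int(np.argmax(tpr >= 0.90))
sensitivity = float(tpr[idx])
specificity = float(1.0 - fpr[idx])
threshold = float(thr[idx])
```
      </preformat>
      <p>In a real evaluation, confidence intervals for the AUC (e.g., by the DeLong method, as in the study) would accompany the point estimates.</p>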
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation><string-name><given-names>R. A. I.</given-names> <surname>Bethlehem</surname></string-name> et al. “<article-title>Brain charts for the human lifespan</article-title>”. In: <source>Nature</source> <volume>604</volume>.7906 (<year>2022</year>), pp. <fpage>525</fpage>–<lpage>533</lpage>.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation><string-name><given-names>B.</given-names> <surname>Fischl</surname></string-name>. “<article-title>FreeSurfer</article-title>”. In: <source>NeuroImage</source> <volume>62</volume>.2 (<year>2012</year>), pp. <fpage>791</fpage>–<lpage>800</lpage>.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>“<article-title>…of FreeSurfer and the CAT12 toolbox in patients…</article-title>”. In: <source>Journal of Neuroimaging</source> <volume>28</volume>.5 (<year>2018</year>), pp. <fpage>515</fpage>–<lpage>523</lpage>.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>In: <source>Frontiers in Neuroscience</source> <volume>14</volume> (<year>2020</year>), p. <fpage>598868</fpage>.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>OHBM 2023 Annual Meeting, accepted presentation. <year>2023</year>.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation><string-name><given-names>E.</given-names> <surname>Coppola</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Ferrari</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Savardi</surname></string-name>, and <string-name><given-names>A.</given-names> <surname>Signoroni</surname></string-name>. “<article-title>Explainable AI for COVID-19 prognosis…</article-title>”. In: <source>Proceedings of SPIE Medical Imaging 2023: Computer-Aided Diagnosis</source>. Vol. 12465. SPIE, <year>2023</year>.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>“<article-title>…multi-site data</article-title>”. <year>2022</year>. arXiv: 2211.02400.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation><string-name><given-names>A. G.</given-names> <surname>Roy</surname></string-name> et al. “<article-title>QuickNAT: …accurate segmentation of neuroanatomy</article-title>”. In: <source>NeuroImage</source>.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation><string-name><given-names>A.</given-names> <surname>Signoroni</surname></string-name> et al. “<article-title>BS-Net: Learning COVID-19 pneumonia severity on a large chest X-ray dataset</article-title>”. In: <source>Medical Image Analysis</source> <volume>71</volume> (<year>2021</year>), p. <fpage>102046</fpage>.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation><string-name><given-names>A.</given-names> <surname>Borghesi</surname></string-name> and <string-name><given-names>R.</given-names> <surname>Maroldi</surname></string-name>. “<article-title>COVID-19 outbreak…</article-title>”.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>