=Paper=
{{Paper
|id=Vol-3762/566
|storemode=property
|title=Towards AI-driven Next Generation Personalized Healthcare and Well-being
|pdfUrl=https://ceur-ws.org/Vol-3762/566.pdf
|volume=Vol-3762
|authors=Fatih Aksu,Alessandro Bria,Alice Natalina Caragliano,Camillo Maria Caruso,Wenting Chen,Ermanno Cordelli,Omar Coser,Arianna Francesconi,Leonardo Furia,Valerio Guarrasi,Giulio Iannello,Clemente Lauretti,Guido Manni,Giustino Marino,Domenico Paolo,Filippo Ruffini,Linlin Shen,Rosa Sicilia,Paolo Soda,Christian Tamantini,Matteo Tortora,Zhuoru Wu,Loredana Zollo
|dblpUrl=https://dblp.org/rec/conf/ital-ia/AksuBCCCCCFFGIL24
}}
==Towards AI-driven Next Generation Personalized Healthcare and Well-being==
Fatih Aksu1 , Alessandro Bria2 , Alice Natalina Caragliano3 , Camillo Maria Caruso3 ,
Wenting Chen4 , Ermanno Cordelli3,* , Omar Coser3,5 , Arianna Francesconi3 , Leonardo Furia3 ,
Valerio Guarrasi3 , Giulio Iannello3 , Clemente Lauretti5 , Guido Manni3,5 , Giustino Marino3 ,
Domenico Paolo3 , Filippo Ruffini3 , Linlin Shen6 , Rosa Sicilia3 , Paolo Soda3,7 ,
Christian Tamantini5 , Matteo Tortora3 , Zhuoru Wu6 and Loredana Zollo5
1 Department of Biomedical Sciences, Humanitas University, Milan, Italy
2 Department of Electrical and Information Engineering, University of Cassino and Southern Latium, Cassino, Italy
3 Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
4 City University of Hong Kong
5 Unit of Advanced Robotics and Human-Centered Technologies, Department of Engineering, University Campus Bio-Medico of Rome, Italy
6 Shenzhen University
7 Department of Diagnostics and Intervention, Radiation Physics, Biomedical Engineering, Umeå University, Sweden
Abstract
In the last few years, Artificial Intelligence (AI) has been emerging as a game changer in many areas of society; in particular, its integration in medicine heralds a transformative approach towards personalized healthcare and well-being, promising significant improvements in diagnostic precision, therapeutic outcomes, and patient care. Our research explores the cutting-edge realms of multimodal AI, resilient AI, and healthcare robotics, aiming to harness the synergy of diverse data modalities and advanced computational models to redefine healthcare paradigms. This multidisciplinary effort seeks to bridge technology and clinical practice, advancing AI-driven next-generation personalized healthcare and well-being.
Keywords
Artificial Intelligence, Multimodal Learning, Precision Medicine, Stress Detection, Resilient AI, Healthcare Robotics
Ital-IA 2024: 4th National Conference on Artificial Intelligence, organized by CINI, May 29-30, 2024, Naples, Italy
*Corresponding author: e.cordelli@unicampus.it (E. Cordelli), ORCID 0000-0001-6062-7575
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073.

==1. Introduction==

Artificial Intelligence (AI) has proven itself as an enabling factor for triggering great transformations of society [1, 2, 3, 4]. However, on the verge of the fifth industrial revolution, several challenges still surround the consolidation of AI in sectors such as medicine and people's well-being. Indeed, this paradigm shift towards AI-driven healthcare is not just a technological revolution; it represents a comprehensive reimagining of medical practices, enhancing the quality, efficiency, and accessibility of healthcare services. In this scenario our efforts are directed towards four research paths: (i) multimodal AI for precision medicine (Section 2); (ii) multimodal AI to foster well-being (Section 3); (iii) resilient AI (Section 4); (iv) AI in robotics for healthcare (Section 5). For each of these routes we provide a brief description of the developed solutions, highlighting solved problems and open challenges.

==2. Multimodal AI enables precision medicine==

The evolution of precision medicine marks a paradigm shift from the traditional "one-size-fits-all" approach in healthcare towards tailored therapeutic strategies that account for individual variability in genes, environment, and lifestyle. In this context, leveraging the variety of patient-generated data (e.g., images, clinical data, electronic health records) can provide a significant boost to unlocking a holistic view of the patient. Towards this end multimodal AI provides the ultimate tool [5, 6]: the integration is not merely additive, it is transformative, enabling the extraction of insights that would remain obscured under traditional, unimodal analysis. We are currently studying the potential of multimodal AI for precision medicine, facing different challenges in different application domains: in the oncological domain we face challenges regarding data fusion and representation, with two projects on Non-Small Cell Lung Cancer (NSCLC) (Sections 2.1 and 2.2); in augmenting the diagnosis and prognosis of Alzheimer's disease (Section 2.3) we tackle the problem of imbalance in multimodal datasets; and in COVID-19 prognosis (Section 2.4) we attempt to solve issues related to the scarcity of large, labelled datasets with compatible tasks for training deep learning models without leading to overfitting.
===2.1. AIDA===

AIDA stands for "explAinable multImodal Deep learning for personAlized oncology". This project faces the challenge of advancing Multimodal Deep Learning (MDL), studying how to learn shared representations between different modalities, investigating when to fuse the different modalities and how to embed in the training any process able to learn more powerful data representations. All this is directed towards facing the association between radiomic, pathomic and Electronic Health Records (EHRs) data in precision oncology, to predict patient outcomes in terms of progression-free survival, overall survival, relapse time and response rate in NSCLC, which represents 85% of all lung cancer cases. To pursue these objectives we started from learning unimodal representations of EHRs and medical imaging.

As a prior contribution, EHRs are vital resources for documenting patient clinical history and procedures, but are often challenging to process due to their unstructured nature. Natural Language Processing (NLP) tools, particularly Named Entity Recognition (NER) with Transformer-based models, have proven effective in extracting meaningful information from EHRs [7]. Transformers excel at capturing contextual relationships between words, and the still not thoroughly explored contextual embeddings they create can enhance the understanding of the content itself. We propose the Hierarchical Embedding Attention for overall survivaL (HEAL), a methodology that leverages multi-class NER-driven representations from EHRs by weighting them with attention mechanisms. The ability to emphasize clinically relevant information within unstructured data, operating both at word and sentence levels, makes HEAL more interpretable for medical applications. In a NSCLC Overall Survival (OS) prediction case study, HEAL achieved an average C_td-index of 0.639 with a low standard deviation of 0.014 over 5 runs, showing a statistically significant superiority with respect to manually extracted clinical features.

Our second contribution, even if still at its preliminary steps, builds on the fact that deep learning (DL) approaches have demonstrated significant value in automatically learning potentially relevant patterns from medical images, such as computed tomography (CT) [8]. Hence, in this study we explore a novel methodology for predicting OS in NSCLC patients using only CT images, aiming at a multitask architecture that encompasses prognostic factors like Progression-Free Survival (PFS) beyond predicting OS alone. The first steps in this direction include producing a soft-attention-weighted feature map for each input slice and highlighting the slices crucial for predicting the OS outcome.

===2.2. PICTURE===

PICTURE stands for "Pathological response AI-driven prediCTion after neoadjUvant theRapiEs in NSCLC". This project is based on the central hypothesis that heterogeneous medical data (i.e., radiological images, histology images, cytology and molecular data, and EHRs) are consistent with the pathological complete response (pCR), so their combination using artificial intelligence (AI) can provide accurate pCR prediction in NSCLC patients. Indeed, albeit surgery is the mainstay for treating locally advanced NSCLC, it is important to prevent post-surgery recurrence, and neoadjuvant therapy (NAT) has shown potential in enhancing overall survival rates and achieving a pathological complete response that, if correctly evaluated before the treatment, can even avoid unnecessary surgical resections.

PICTURE pursues three objectives: (i) pCR prediction through radiology imaging, histology, cytology, molecular data, EHRs, and their combination; (ii) leveraging multimodal deep learning to make the performance of the AI pCR prediction signature resilient and robust; (iii) improving trust and transparency using explainable AI models. PICTURE also has the exploratory aim of transferring trained models to predict pCR for patients undergoing chemoimmunotherapy, tailoring treatments to the individual needs of patients.

===2.3. Facing imbalance in Alzheimer's Disease diagnosis and prognosis===

Alzheimer's disease (AD) is a progressive neurodegenerative condition with decline in cognitive function, and because of the lack of a cure, its early detection is paramount. Despite recent progress in AI, challenges such as class imbalance, integration of multimodal data, and robust generalization remain pervasive. In response, we introduce a novel methodology that leverages the strengths of ensemble learning while incorporating advanced fusion techniques. For each of the 4 modalities of the tabular ADNI database, we train a series of classifiers on varied class distributions, followed by a late fusion strategy that integrates the different modalities to improve the results.

Our framework is evaluated on two diagnostic tasks (binary and ternary) and four binary prognostic tasks (at 12, 24, 36, and 48 months) and compared with 12 state-of-the-art imbalanced data algorithms, achieving 97.04% g-mean on the binary diagnostic task and 90.81% g-mean on the 48-month prognostic task.
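The per-modality ensemble with late fusion can be sketched as follows. This is a minimal, stdlib-only illustration: the nearest-centroid learners, the naive oversampling rule and the toy feature vectors are assumptions for the sketch, not the actual ADNI pipeline or its classifiers.

```python
import random
from statistics import mean

class CentroidClassifier:
    """Toy stand-in for one ensemble member: nearest-centroid over
    feature vectors, exposing class probabilities for late fusion."""
    def fit(self, X, y):
        self.centroids = {
            label: [mean(col) for col in zip(*[x for x, t in zip(X, y) if t == label])]
            for label in set(y)
        }
        return self

    def predict_proba(self, x):
        # Turn inverse distances to each centroid into normalized scores.
        inv = {c: 1.0 / (sum((a - b) ** 2 for a, b in zip(x, m)) ** 0.5 + 1e-9)
               for c, m in self.centroids.items()}
        z = sum(inv.values())
        return {c: v / z for c, v in inv.items()}

def oversample(X, y, n_extra, rng):
    """Naive minority oversampling so each ensemble member is trained
    on a different class distribution (the 'varied distributions' idea)."""
    X, y = list(X), list(y)
    minority = min(set(y), key=y.count)
    pool = [i for i, t in enumerate(y) if t == minority]
    for _ in range(n_extra):
        i = rng.choice(pool)
        X.append(X[i])
        y.append(y[i])
    return X, y

def late_fusion_predict(ensembles, sample):
    """Average probabilities within each modality's ensemble, then sum
    the per-modality averages (late fusion) and take the argmax class."""
    fused = {}
    for modality, members in ensembles.items():
        probs = [m.predict_proba(sample[modality]) for m in members]
        for c in probs[0]:
            fused[c] = fused.get(c, 0.0) + mean(p[c] for p in probs)
    return max(fused, key=fused.get)
```

With several modalities, `ensembles` would simply hold one list of trained members per modality key; the late fusion step stays unchanged.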
===2.4. Multi-Dataset Multi-Task Learning for COVID-19 Prognosis===

In the COVID-19 context [9], in order to fight the scarcity of large, labelled chest radiography (CXR) datasets, we introduce a novel multi-dataset multi-task (MDMT) training framework, integrating correlated datasets from disparate sources and assessing a severity score to classify prognostic severity groups [10, 11, 12], instead of relying on datasets with multiple and correlated labelling schemes. As illustrated in Figure 1, a deep CNN takes the images as input and branches into task-specific fully connected output networks, ending with a multi-task loss function that incorporates an indicator function to exploit multi-dataset integration.

Figure 1: Overview of the proposed Multi-Dataset Multi-Task model architecture, composed of a shared backbone f_s and two task-specific fully connected network heads, f_τ1 and f_τ2, for tasks τ1 and τ2, respectively, producing outputs O_τ1 and O_τ2.

Proceeding with a 5-fold cross-validation and a leave-one-center-out training, we evaluated the method across 18 different CNN backbones on the prognosis classification task and on fine-tuning from the BRIXIA dataset task to the AIforCOVID dataset task. The best average performance with statistical robustness achieved 68.6% accuracy, 66.6% F1-score and 68.5% g-mean for the 5-fold cross-validation, and 65.7% accuracy, 64.3% F1-score and 66.0% g-mean for the leave-one-center-out validation strategy. Future directions include new domains and the integration of XAI [6].

==3. Multimodal AI to foster well-being==

Stress, a response to physical and emotional demands, is crucial in determining individuals' well-being and, if unmanaged, can lead to conditions such as anxiety, depression and cardiovascular diseases. Also in this scenario multimodal AI offers a tool for a proactive approach to health management, providing real-time monitoring and interventions, thereby mitigating long-term health risks associated with chronic stress.

We are targeting stress detection from two perspectives: first, we are focusing on maximising the stress level prediction accuracy within the shortest possible time, exploiting multimodal physiological time series data and Deep Reinforcement Learning (DRL); second, we are further expanding the multimodal view, integrating information from video, audio and text with the physiological data. Robust and fast stress detection approaches can bring benefits in several contexts: from providing targeted and more personal assistance to patients, to ensuring safety for workers, for instance Air Traffic Controllers (ATC), who endure high levels of psychological pressure during their job, impacting operational safety.

Our first approach [13] employs a new DRL model to identify stress indicators. We obtained this by leveraging a dynamic time observation window that expands at each step of the learning process, asking the agent to choose either to continue observing or to classify based on the information gathered until that point, trying to minimize the amount of data required for decision-making. As depicted in Figure 2, we adopted the Soft Actor-Critic algorithm for its effectiveness in handling continuous action spaces. In a Leave-One-Subject-Out approach with data augmentation on the Non-EEG public dataset we outperformed existing solutions, showing the power of DRL for early stress detection.

Figure 2: Overview of the proposed method for early stress detection, consisting of two main blocks: the physiological environment and the DRL agent. The first involves the pre-processing of data and the description of the dynamic observation space. The second block incorporates a SAC-based DRL agent fed with data from the first block to control the system.

On top of this approach we are exploring the larger multimodal asset of the Ulm-Trier Social Stress Test dataset (ULM-TSST, MuSe 2022 challenge), containing 41 training, 14 validation and 14 test subjects, simulating a job interview scenario, with audio, video, text, and physiological data modalities, rated on arousal and valence stress parameters. The aim is to build a high-performance architecture that leverages non-invasive modalities for stress detection and can be employed in work environments. In both cases the scarcity of large datasets providing a quantitative measurement of stress is still the main challenge: we will try to face it by considering the construction of robust and specific acquisition protocols to test the effectiveness of the developed approaches in real-world scenarios.
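The continue-or-classify decision process with a growing observation window can be sketched as a minimal environment. The signal values, the time-penalty weight `alpha` and the confidence-scaled reward are illustrative assumptions for the sketch, not the exact formulation used with the Soft Actor-Critic agent.

```python
class GrowingWindowStressEnv:
    """Minimal sketch of a dynamic-observation-window decision process:
    at each step the agent either keeps observing (the window grows by
    one sample) or commits to a stress / no-stress classification.
    Rewarding correct predictions while penalizing elapsed time pushes
    the agent towards early, accurate decisions."""

    CONTINUE, PREDICT_NO_STRESS, PREDICT_STRESS = 0, 1, 2

    def __init__(self, signal, label, alpha=0.01):
        self.signal = signal   # one physiological time series (floats)
        self.label = label     # ground truth: 0 = no stress, 1 = stress
        self.alpha = alpha     # weight of the per-step time penalty
        self.t = 1             # current observation window length

    def observe(self):
        # The agent only ever sees the first t samples of the series.
        return self.signal[: self.t]

    def step(self, action, confidence=1.0):
        """Return (reward, done). Continuing costs -alpha*t; a final
        classification earns +/-confidence depending on correctness,
        still discounted by the time already spent observing."""
        if action == self.CONTINUE and self.t < len(self.signal):
            self.t += 1
            return -self.alpha * self.t, False
        predicted = 1 if action == self.PREDICT_STRESS else 0
        hit = confidence if predicted == self.label else -confidence
        return hit - self.alpha * self.t, True
```

A policy trained against such an environment learns when the accumulated evidence in `observe()` is worth committing to, which is the trade-off between earliness and accuracy described above.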
==4. Resilient AI==

Due to the high stakes involved in healthcare decisions, the sensitivity of medical data, and the complexity of medical environments, AI systems should be designed to maintain their intended performance and integrity in the face of adversities such as data corruption, missing data, privacy leakage, or unexpected changes in their operating environment. This is the goal of Resilient AI, an aspect that cannot be left out when the aim is to integrate AI for augmenting medical practice. To meet this goal we are currently investigating three main aspects that fall under the Resilient AI umbrella: developing systems robust to missing data, the challenge of limited-size datasets, and how to protect sensitive patient data.

With respect to the missing data challenge [14], although a variety of strategies exist for addressing this problem in health datasets, to overcome the obstacle of selecting the most suitable one and their dependency on the dataset's specifics, we developed a Transformer-based model [15] that applies masking to ignore the missing data, thus eliminating the need for imputation and deletion techniques and focusing directly on the available features through self-attention. Moreover, we introduced a novel feature-identifying form of positional encoding to facilitate the integration of tabular data into a Transformer framework. This method was validated on an overall survival classification task, employing clinical data from the CLARO [16] project and improving the prediction accuracy.

To address the problem of working with datasets of limited size, particularly frequent in the healthcare domain, Triplet networks, a subtype of Siamese networks, emerge as a promising solution, comprising three identical networks operating concurrently. Throughout the training of these three networks, two inputs belong to the same class whereas the third belongs to a distinct class, with the final objective of developing a feature space with two distinct clusters, one per class, by incorporating inter-class diversities alongside intra-class similarities, and of providing scenarios with limited data with more triplets compared to instances (Figure 3). In our study [17], using a private dataset of 86 CT scans, triplet networks surpass plain deep networks in accurately predicting the histological subtypes of NSCLC patients. Currently, we are broadening the scope of our research by including PET images alongside CT scans and adopting a multimodal strategy for the same classification. By integrating these complementary data we anticipate achieving a significant improvement and overcoming the challenges posed by limited data scenarios.

Figure 3: Overall framework of the proposed method working with triplet networks.

Last but not least, the challenge related to patient privacy led us to explore Federated Learning (FL). FL presents an innovative solution to the challenge of protecting sensitive patient data in artificial intelligence applications in healthcare, enabling the training of a shared global model with a central server while ensuring data privacy within local institutions. On this basis we introduce a new token-based FL paradigm, revolutionizing the traditional approach with sequential or random passing of a token between clients during each epoch. This method allows only the token owner to send its weights to the server, which redistributes them directly to all models. By eliminating local training epochs and allowing immediate transmission, this paradigm shift streamlines the process by circulating a single model among clients, and it also mitigates the need for an initial warm-up period, potentially paving the way for a decentralized system that reduces dependence on a central server and minimizes the number of parameters transmitted in each iteration. Results on the tabular part of the AIforCOVID dataset [18], composed of 6 hospitals, show that the performance of the FL model does not deviate from that of its equivalent trained on all datasets aggregated into a single pool. The next steps will focus on integrating other modalities into the FL pipeline, such as the CXR scans of the AIforCOVID dataset itself.

==5. AI for healthcare robotics==

The integration of robotics in healthcare settings exemplifies another dimension of AI's impact, automating routine tasks, assisting in surgeries with precision beyond human capability, and providing rehabilitation support to patients. This not only enhances service delivery but also alleviates the workload on healthcare professionals, allowing them to focus more on patient-centered care. In this scenario we pursue two aims: first, enhancing robotic surgery with real-time high-precision localization; second, boosting lower-limb robotic rehabilitation by optimizing the structural exoskeleton sensor configuration.
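Going back to the token-based federated scheme of Section 4, one round can be sketched as follows. The toy weight vectors, the `local_update` rule and the random token hand-off are assumptions standing in for real model training, not the actual implementation.

```python
import random

def local_update(weights):
    # Illustrative stand-in for the token holder's local training step.
    return [w - 0.1 * w for w in weights]

def token_federated_round(clients, token_holder, rng):
    """One round of the token-based FL idea: only the client holding
    the token uploads its locally updated weights; the server
    immediately redistributes them to every client, and the token
    then moves on to another client."""
    server_weights = local_update(clients[token_holder])  # single upload
    for name in clients:                                  # broadcast
        clients[name] = list(server_weights)
    next_holder = rng.choice([n for n in clients if n != token_holder])
    return server_weights, next_holder
```

Note that each round involves a single client-to-server transmission, versus one per client in classical federated averaging, which is where the reduction in transmitted parameters comes from.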
For the first objective we focus on the laparoscopy use case, as one of the preferred surgical methods. Despite recent advancements in image acquisition, it is still limited to relying on 2D image views: misinterpreting anatomical structures due to this limit is a common source of errors. In contrast, 3D imaging increases the accuracy of instrument manipulation, leads to better outcomes in surgery, and shortens the learning curve for trainees. Even though several lines of research in surgical 3D imaging have been explored, like camera-based tracking and mapping, Mosaicking, Structure from Motion, and Shape from Template, they often rely on simplifications that can limit their effectiveness. On this ground Simultaneous Localization and Mapping (SLAM) has shown promising results, as it aims to create a map of the environment while localizing the sensor position within it. Therefore we developed a robust deep learning SLAM pipeline that operates in real-time across diverse surgical settings by providing an immersive, interactive 3D environment (Figure 4), allowing for more precise and personalized interventions, with the future possibility of being integrated with augmented reality displays.

Figure 4: The MVSLAM pipeline integrates depth estimation, pose estimation, and 3D reconstruction modules to generate a continuously updated 3D map of the surgical environment from monocular endoscopic video frames.

For the second objective, we focus on the challenges in the field of lower limb robotics. It aims at supporting people with lower limb disabilities by enhancing movement and mobility and providing targeted exercise. Technologies such as exoskeletons, prosthetics, and rehabilitation robots are particularly helpful for those with neurological issues, offering improved rehabilitation, independence, and tailored care. Effective use requires precise control settings to adapt walking patterns to different terrains. Challenges in this field involve the extensive need for sensors for terrain detection and the complexity of processing sensor data. Simplifying sensor requirements to accurately determine terrain and slope is critical for user-friendly, efficient operation and safety. The aim of our work is to recognize the terrain on which an exoskeleton is walking and its inclination. Among several state-of-the-art approaches, we achieved promising results using LSTM architectures with IMU data (0.94 accuracy in leave-one-out cross-validation) and CNN-LSTM architectures with EMG data (0.75 accuracy). The fusion of IMU and EMG data did not bring any significant improvement, as exploratory tests indicated that the best 20 contributing features belong to the IMU. Next, by varying the number of sensors, and therefore features, we noticed that the best results are achieved by selecting the most relevant features, from one to three, according to SHAP (on a 3-subject validation set), leading to 0.85, 0.89 and 0.93 accuracy respectively. Lastly, we found that LSTM and CNN-LSTM are valid architectures for predicting slope inclination (MAE of 1.95°) and stair height (MAE of 15.65 mm), without significant differences in employing 3 or 4 sensors.

==Acknowledgments==

Fatih Aksu, Alice Natalina Caragliano, Camillo Maria Caruso, Omar Coser, Arianna Francesconi, Leonardo Furia, Guido Manni, Giustino Marino, Domenico Paolo and Filippo Ruffini are Ph.D. students enrolled in the National Ph.D. in Artificial Intelligence, course on Health and life sciences, organized by Università Campus Bio-Medico di Roma. We acknowledge financial support from: i) PNRR MUR project PE0000013-FAIR; ii) PRIN 2022 MUR 20228MZFAA-AIDA (CUP C53D23003620008); iii) PRIN PNRR 2022 MUR P2022P3CXJ-PICTURE (CUP C53D23009280001); iv) FCS MISE (CUP B89J23000580005); v) MAECI (grant n. CN23GR09); vi) PNRR MUR project PNC0000007 Fit4MedRob. This work was also partially supported by the following companies: Eustema S.p.A. and ENAV S.p.A.

==References==

[1] V. Guarrasi, L. Tronchin, C. M. Caruso, A. Rofena, G. Manni, F. Aksu, D. Paolo, G. Iannello, R. Sicilia, E. Cordelli, et al., Building an ai-enabled metaverse for intelligent healthcare: opportunities and challenges, in: Ital-IA 2023, Italia Intelligenza Artificiale Thematic Workshops, co-located with the 3rd CINI National Lab AIIS Conference on Artificial Intelligence (Ital-IA 2023), Pisa, Italy, May 29-30, 2023, CEUR-WS, 2023, pp. 134–139.
[2] E. Cordelli, V. Guarrasi, G. Iannello, F. Ruffini, R. Sicilia, P. Soda, L. Tronchin, Making ai trustworthy in multimodal and healthcare scenarios, Proceedings of the Ital-IA (2023).
[3] J. P. Bharadiya, Artificial intelligence and the future of web 3.0: Opportunities and challenges ahead, American Journal of Computer Science and Technology 6 (2023) 91–96.
[4] V. Pereira, E. Hadjielias, M. Christofi, D. Vrontis, A systematic literature review on the impact of artificial intelligence on workplace outcomes: A multi-process perspective, Human Resource Management Review 33 (2023) 100857.
[5] V. Guarrasi, P. Soda, Multi-objective optimization determines when, which and how to fuse deep networks: An application to predict covid-19 outcomes, Computers in Biology and Medicine 154 (2023) 106625.
[6] V. Guarrasi, L. Tronchin, D. Albano, E. Faiella, D. Fazzini, D. Santucci, P. Soda, Multimodal explainability via latent shift applied to covid-19 stratification, arXiv preprint arXiv:2212.14084 (2022).
[7] D. Paolo, et al., Named entity recognition in italian lung cancer clinical reports using transformers, in: 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, 2023, pp. 4101–4107.
[8] M. Tortora, et al., Radiopathomics: multimodal learning in non-small cell lung cancer for adaptive radiotherapy, IEEE Access (2023).
[9] G. Fiscon, F. Salvadore, V. Guarrasi, A. R. Garbuglia, P. Paci, Assessing the impact of data-driven limitations on tracing and forecasting the outbreak dynamics of covid-19, Computers in Biology and Medicine 135 (2021) 104657.
[10] V. Guarrasi, N. C. D'Amico, R. Sicilia, E. Cordelli, P. Soda, A multi-expert system to detect covid-19 cases in x-ray images, in: 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, 2021, pp. 395–400.
[11] V. Guarrasi, N. C. D'Amico, R. Sicilia, E. Cordelli, P. Soda, Pareto optimization of deep networks for covid-19 diagnosis from chest x-rays, Pattern Recognition 121 (2022) 108242.
[12] V. Guarrasi, P. Soda, Optimized fusion of cnns to diagnose pulmonary diseases on chest x-rays, in: International Conference on Image Analysis and Processing, Springer, 2022, pp. 197–209.
[13] L. Furia, et al., Exploring early stress detection from multimodal time series with deep reinforcement learning, in: 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, 2023, pp. 1917–1920.
[14] A. Rofena, V. Guarrasi, M. Sarli, C. L. Piccolo, M. Sammarra, B. B. Zobel, P. Soda, A deep learning approach for virtual contrast enhancement in contrast enhanced spectral mammography, arXiv preprint arXiv:2308.00471 (2023).
[15] C. M. Caruso, V. Guarrasi, S. Ramella, P. Soda, A deep learning approach for overall survival analysis with missing values, arXiv preprint arXiv:2307.11465 (2023).
[16] CLARO - CoLlAborative multi-sources Radiopathomics approach for personalized Oncology in non-small cell lung cancer, http://www.cosbi-lab.it/claro/, 2020. Accessed: 2023-03-20.
[17] F. Aksu, et al., Early experiences on using triplet networks for histological subtype classification in non-small cell lung cancer, in: 2023 IEEE 36th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, 2023, pp. 832–837.
[18] P. Soda, N. C. D'Amico, J. Tessadori, G. Valbusa, V. Guarrasi, C. Bortolotto, M. U. Akbar, R. Sicilia, E. Cordelli, D. Fazzini, et al., Aiforcovid: Predicting the clinical outcomes in patients with covid-19 applying ai to chest-x-rays. An italian multicentre study, Medical Image Analysis 74 (2021) 102216.