Deep learning-based tumor resectability prediction model in patients with Ovarian Cancer: a preliminary evaluation

Francesca Fati1,2,*,†, Marina Rosanu1, Luigi De Vitis1, Gabriella Schivardi1, Giovanni Damiano Aletti1, Francesco Multinu1, Roberto Veraldi3, Paolo Zaffino3, Carlo Cosentino3, Maria Francesca Spadea3,4 and Elena De Momi1,2

1 Department of Gynecology, European Institute of Oncology (IEO), via Giuseppe Ripamonti 435, Milan, 20142, Italy
2 Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milan, 20133, Italy
3 Department of Experimental and Clinical Medicine, Università degli Studi Magna Graecia di Catanzaro, viale Europa, Catanzaro, 88100, Italy
4 Institute of Biomedical Engineering, Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany


Abstract
Ovarian cancer (OC) is the most lethal gynecologic malignancy worldwide, characterized by aggressive behavior, a high relapse rate, and rapid progression. The cornerstone of OC treatment is cytoreductive surgery, targeting the removal of all detectable tumor lesions wherever feasible. In instances of widespread disease or significant perioperative morbidity risk, patients may initially receive neoadjuvant chemotherapy aimed at reducing the tumor's volume prior to surgical intervention. The pivotal decision between surgery and chemotherapy poses a significant therapeutic challenge in OC management. Our contribution is to develop an artificial intelligence-based model to support this critical decision by predicting Tumor Resectability (TR) from preoperative Computed Tomography (CT) images at the time of diagnosis.
Our study aims to develop a 3D Convolutional Neural Network capable of predicting TR in a cohort of 650 patients with advanced-stage epithelial OC who underwent surgery at the European Institute of Oncology (IEO, Milan, Italy). The model processes preoperative CT scans of the Thorax, Abdomen, and Pelvis to deliver a binary prediction: TR=0 indicates a completely resected tumor, while TR=1 indicates the presence of residual tumor after cytoreductive surgery. We designed and trained our model from the ground up, achieving a preliminary accuracy of 65%.
As far as we are aware, this is the first attempt to leverage deep learning for assessing TR in OC patients based on preoperative CT scans. Our model represents a non-invasive, preoperative tool with the potential to facilitate clinical decision making in the era of individualized and precision medicine.
The work is part of the project Under-XAI: understanding ovarian cancer initiation and progression through explainable AI. Project code: PNRR-MAD-2022-12376574.

Keywords
Ovarian Cancer (OC), Tumor Resectability (TR) prediction, Artificial Intelligence (AI), Precision Medicine



Ital-IA 2024: 4th National Conference on Artificial Intelligence, organized by CINI, May 29-30, 2024, Naples, Italy
* Corresponding author.
† These authors contributed equally.
✉ francesca.fati@ieo.it (F. Fati)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1. Introduction

Ovarian Cancer (OC) is the most lethal gynecologic malignancy worldwide, ranking as the fifth deadliest cancer among women and accounting for approximately 13,000 deaths in 2023 in the United States [1].
According to guidelines, patients with suspected OC first undergo pelvic ultrasound, Computed Tomography of the Thorax, Abdomen and Pelvis (CT TAP), and CA125 measurement for staging purposes. Depending on the CT TAP results and clinical assessment, clinicians evaluate tumor resectability. Patients likely to achieve complete tumor resection undergo primary debulking surgery followed by adjuvant chemotherapy; otherwise, they receive neoadjuvant chemotherapy, followed by interval debulking surgery and adjuvant chemotherapy.
Although most patients initially respond positively to this standard of care, an estimated 70% of patients will experience a relapse. Surgical intervention aims at achieving complete tumor resection; however, it often proves either aggressive, leading to severe postoperative complications, or ineffective, resulting in incomplete tumor removal and an associated twofold increase in the risk of death, with the latter scenario occurring in approximately 40% of cases [2].
The challenge in clinical practice is accurately predicting the success of cytoreductive surgery, which is critical due to the severe consequences of misjudgment, such as unnecessary invasive procedures causing significant perioperative complications and emotional distress. The complexity of predicting surgical outcomes is heightened by the varied and distinct presentations of OC (four clinical cases are shown in Figure 1), making it difficult to assess tumor resectability from diagnostic imaging. Advancements in this area are crucial to minimize unnecessary surgeries and tailor treatments to patient-specific needs.
Nowadays radiomics, a computational tool for extracting high-dimensional features from medical images, becomes




part of personalized oncology treatments, driven by advancements in Machine Learning (ML) [3]. However, ML requires appropriate selection among the numerous radiomic features extracted from images [4], [5].
Deep Learning (DL) has shown promising results in automatically and directly extracting valuable features from medical imaging [6], boosting the progress of computer vision in the medical field [7], [8], and demonstrating superior performance in comparison to hand-crafted image features [9].
In this paper, a 3D CNN was designed to perform binary TR classification of patients with OC. Specifically, what emerged from the literature was the absence of robust radiological indexes to select patients for total surgical resection. Hence, our primary contribution is the implementation of a non-invasive, preoperative DL model to assess whether an upfront patient could be a suitable candidate for debulking surgery, when achieving a total resection appears feasible, or should instead be recommended to undergo neoadjuvant chemotherapy before proceeding to interval surgery, when a complete resection seems unlikely. Therefore, the proposed model might potentially assist radiologists and gynecologists in assessing TR and guiding therapeutic strategies for patients with OC.

Figure 1: In these CT scans, 4 different patients with OC are depicted, each presenting unique diagnostic challenges. The first patient's scan identifies a discrete retroperitoneal lymph node measuring 8.5 mm. The second patient has a conspicuous omental cake, considerably larger at 29.2 mm. For the third patient, a 13.3 mm nodule is present. Lastly, the fourth patient's scan shows peritoneal thickening of 10.5 mm, raising concerns about potential peritoneal carcinomatosis. Each case demonstrates the diverse presentations of OC and the inherent challenges in predicting TR preoperatively.

2. Related Work

Artificial Intelligence (AI) has been demonstrated to enhance the effectiveness of tumor detection, classification, and treatment monitoring in cancer imaging. The integration of radiomics and DL has enabled the extraction of image features and information that might be imperceptible to subjective human evaluation, yielding promising medical applications [10].
In the context of OC, several noteworthy studies have been conducted.
In the domain of radiomics and ML, Lu et al. [11] proposed an approach to predict two-year overall survival in 364 epithelial OC patients. In this study, 657 quantitative descriptors were extracted from preoperative CT images, upon which the ML algorithm Radiomic Prognostic Vector was developed. The latter accurately identified the 5% of patients with a median overall survival of less than 2 years, demonstrating significant improvement over established prognostic methods. Crispin-Ortuzar et al. [12] addressed the challenge of predicting neoadjuvant chemotherapy (NACT) response in 72 patients affected by high-grade serous OC (HGSOC), presenting an ensemble ML model that, integrating baseline clinical, blood-based, and radiomic biomarkers from primary and metastatic lesions, predicted changes in total disease volume. Validation on internal and external cohorts showed that the model significantly improved prediction accuracy compared to the clinical model, highlighting the potential of radiomics in enhancing treatment response predictions.
On the DL front, Jan et al. [10] developed an AI ensemble model combining radiomics, DL, and clinical features from CT images to distinguish between benign and malignant OC. With 149 patients and 185 tumors, the model achieved 82% accuracy, 89% specificity, and 68% sensitivity. Compared to junior radiologists, the model exhibited higher accuracy and specificity while maintaining comparable sensitivity. Wang et al. [7] proposed a DL method to predict 3-year recurrence in 245 high-grade serous OC patients from preoperative CT images. The DL network, trained on 8917 CT images, extracts a 16-dimensional DL feature used to predict the outcome probability. The model achieved AUC values of 0.772 and 0.825 for high and low recurrence risk, exhibiting stronger prognostic value compared to clinical characteristics. Zheng et al. [13] proposed a ViT-based DL model for predicting overall survival in 734 high-grade serous OC patients using preoperative CT images; the dataset was split into training (n = 550) and validation (n = 184) cohorts. The model demonstrated robust performance, with AUC = 0.822 in the training cohort and AUC = 0.823 in the validation cohort. Lei et al. [14] developed a DL model for predicting platinum sensitivity in 93 patients with
epithelial OC using contrast-enhanced magnetic resonance imaging (MRI). A pre-trained CNN was used, and 1,024 features were automatically extracted from the MRI sequences to predict platinum sensitivity. The model achieved an Area Under the Curve (AUC) of 0.97 and 0.98 in the training and validation cohorts, respectively.
Among the 20 research papers examining OC in [15], 11 primarily aimed at classifying between benign, malignant, and/or borderline tumors. Two of these studies focused on resistance to platinum-based chemotherapy, with one extending its analysis to differentiate between high and low risks of disease survival and platinum treatment resistance [16], [17]. Additional studies targeted various classification objectives, such as differentiating HGSC from non-HGSC [18], classifying epithelial OC into type I or II [19], and identifying OC as recurrent or nonrecurrent [20]. The majority of these studies were single-center initiatives, with sample sizes ranging from a minimum of 6 patients to a maximum of 758 patients. However, to the best of our knowledge, this is the first attempt to predict TR with a DL-based model in OC.

3. Methodology

3.0.1. Dataset
TAP CT images from scanners of different manufacturers (GE Medical Systems, Siemens, Philips, Toshiba, Hitachi) of 650 patients with OC treated between 2016 and 2022 were retrospectively collected at the European Institute of Oncology (IEO) in Milan, Italy. In this study, the contrast-enhanced portal-venous phase TAP CT acquired at the moment of diagnosis was considered. The CTs in our dataset were meticulously manually annotated by 5 expert gynecologists for the purpose of classification. This involved a thorough examination of electronic medical records, resulting in the assignment of TR = 0 and TR = 1 labels: TR = 0 denotes complete tumor resection with no residual tumor, while TR = 1 denotes incomplete tumor resection with residual tumor. In our dataset, clinicians annotated 446 cases with TR = 0 and 204 cases with TR = 1. During the development of the model, all data were fully anonymized to ensure the utmost privacy and data protection.
The inclusion and exclusion criteria for the study are listed in Table 1.

Table 1
Inclusion and Exclusion Criteria

    Inclusion Criteria               Exclusion Criteria
    Epithelial OC                    CT slice thickness > 5 mm
    Advanced stage (III-IV)          No consent to research
    CT acquired before treatment     No CT or data available
    Age ≥ 18 years

3.0.2. Image pre-processing
We performed image preprocessing in Python 3.11. The following steps were applied:
    1. Step #1: Segmentation and Region of Interest (ROI) selection. Identifying the ROI most affected by OC in CT images is a fundamental first step, and therefore requires segmentation in the preprocessing phase. To address the limitations associated with manual segmentation, including potential bias, time-intensive procedures, and the scarcity of annotated data, we implemented automatic segmentation using TotalSegmentator [21], a DL segmentation model which automatically and robustly segments all major anatomical structures in body CT images. Each organ is associated with a label, which allowed us to set the lower and upper bounds of the ROI as the ischiopubic rami of the pelvis and the left and right hemidiaphragm cupolas, respectively. Afterwards, each image was cropped along the z-axis according to the selected interval. This ROI was chosen because, compared to other tumors, OC metastasis occurs most frequently in the omentum or peritoneum: almost 70% of patients with OC present peritoneal cavity metastasis at the time of diagnosis.
    2. Step #2: Additional standard preprocessing. Pixel intensities were normalized between 0 and 1, and images were resized from the original dimension of 512 x 512 x n, with n varying among patients, to 128 x 128 x 128, where 128 was the average n.
The aforementioned preprocessing steps are illustrated in Figure 2.

Figure 2: Preprocessing steps of TAP CT of patients with OC.

3.0.3. Model architecture
For the classification task of predicting the binary clinical outcome TR from 3D TAP CT images, we designed a 3D CNN model. The architecture comprises two fundamental components: a CNN-based Features Extractor (FE) and a feed-forward fully connected classifier. The FE is composed of a sequence of 7 convolutional blocks, each consisting of the following layers: a convolutional layer which increases the number of input channels, followed by a Rectified Linear Unit (ReLU) activation layer, a 3D batch normalization layer, another convolutional layer which preserves the number of input feature maps, and a max-pooling layer which halves the spatial dimensions of the input. The FE takes as input preprocessed 3D TAP CT images with spatial dimensions of 128 x 128 x 128 and a single channel, and returns a final vector of 512 extracted features. This feature vector is the input of the classifier, designed as a sequential model with linear layers interleaved with ReLU activation functions and Dropout layers, each having a dropout probability of 0.3. The overall architecture is shown in Figure 3.
The model combines the 3D CNN's ability to extract useful features from input 3D TAP CT images with the classifier's discriminative power to predict the correct class. We designed and trained our model from scratch for the binary target task of TR classification in patients with OC.

4. Experiment description

We split our dataset into a training and a validation set of 457 and 153 patients, respectively, and we evaluated our results on an external cohort of 40 patients.
We configured a batch size 𝐵 = 8, employing Binary Cross Entropy (BCE) as the loss function, formulated as:

    BCE(ŷ, 𝑦) = −(1/𝐵) ∑_{𝑖=1}^{𝐵} [𝑦_𝑖 · log(ŷ_𝑖) + (1 − 𝑦_𝑖) · log(1 − ŷ_𝑖)]

where ŷ is the model output and 𝑦 is the target variable.
The learning rate was set to 0.0001, multiplied by 0.1 every 30 epochs. Optimization was performed using the Adam algorithm, and the maximum number of training epochs was set to 200. The entire training procedure was executed on a single NVIDIA A100 GPU with 40 GB of memory.
In this study, we employed 5-fold cross-validation to assess the performance of different models on different dataset splits, based on the accuracy on the validation sets. We then retrained the best model using the best hyperparameters found by the cross-validation. After retraining, we evaluated its performance on a separate test set to confirm its effectiveness and generalization capabilities.

5. Results and Discussion

In Figure 4, we report the confusion matrix. We obtained an accuracy of 0.65 on an external testing cohort of 40 patients, comprising 20 cases of class TR = 0 and 20 cases of class TR = 1. The model correctly predicted the positive cases (class TR = 0) with an accuracy of 0.75 and the negative cases (class TR = 1) with an accuracy of 0.55. The overall accuracy of 0.65 suggests moderate general correctness, but indicates that the model still had difficulty in correctly discriminating the classes.

6. Conclusions

In this paper, we delved into the power of DL models for the classification of TR in patients with OC, utilizing 3D TAP CT scans. TR is a pivotal diagnostic factor influencing clinical treatment decisions, and its accurate prediction at diagnosis would greatly improve the management of OC patients. Leveraging the capabilities of DL in the medical domain, we extend its use to address this challenge in OC. Our methodology employs a 3D CNN model for binary TR classification, aiming to aid clinical decisions in OC care.
Previous studies have already introduced DL in the context of OC but, to the best of our knowledge, this is the first attempt to harness the potential of DL for specifically predicting TR. Indeed, one of the noteworthy challenges in this attempt is the absence of radiological indexes to inform total surgical tumor resection decisions. Our main contribution is to address this gap, introducing a DL model as a non-invasive preoperative tool to facilitate clinical decision making.
In conclusion, it is important to recognize the limitations of our study, notably the potential for enhanced generalization and performance by expanding the patient cohort and considering alternative neural network architectures, such as Vision Transformer based models. We should broaden the application of our DL approach to predict other key diagnostic factors in OC, such as platinum sensitivity, overall survival, and surgical complications. Finally, the integration of explainability techniques will be essential for interpreting the model's decisions, fostering trust, and promoting wider clinical use.
Figure 3: The model architecture consists of two main components: a CNN-based Features Extractor (FE) and a feed-forward fully connected classifier. The Features Extractor includes 7 convolutional blocks, each composed of a convolutional layer that increases the number of input channels, followed by ReLU activation, 3D batch normalization, another convolutional layer which does not change the number of feature maps, and max-pooling which halves the spatial dimensions. The FE outputs a final 512-feature vector. The classifier is a sequential model of linear layers with ReLU activations and Dropout layers (dropout probability: 0.3), which processes the feature vector and returns the final probability of belonging either to class TR=0 or to class TR=1.
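The caption above implies a simple shape calculation: seven max-pool halvings reduce the 128 x 128 x 128 volume to 1 x 1 x 1, so the flattened FE output length equals the final channel count. The exact channel progression is not stated in the paper; the doubling schedule below is an assumption chosen to yield the stated 512 features.

```python
# Sketch of the feature-extractor shape arithmetic described in Figure 3.
# The channel schedule (8, 16, ..., 512) is an ASSUMPTION: the paper only
# says each block increases channels and that the final vector has 512 entries.

def fe_output_shape(spatial=128, channels=(8, 16, 32, 64, 128, 256, 512)):
    """Propagate (channels, D, H, W) through 7 conv blocks, each ending in a
    max-pool that halves every spatial dimension."""
    c, s = 1, spatial          # input: 1 channel, 128x128x128 volume
    for out_c in channels:
        c = out_c              # first conv in the block raises channel count
        s //= 2                # max-pool halves each spatial dimension
    return c, (s, s, s)        # flattened length = c * s**3

c, dims = fe_output_shape()
print(c, dims)  # -> 512 (1, 1, 1), i.e. a 512-feature vector after flattening
```

With 7 halvings (128 → 64 → 32 → 16 → 8 → 4 → 2 → 1) the spatial extent collapses to a single voxel, which is why no global pooling layer is needed before the classifier.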



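The per-class and overall accuracies reported in Figure 4 are mutually consistent; a short check (the correct-prediction counts 15 and 11 are derived here, not stated in the paper):

```python
# Consistency check for the reported confusion-matrix figures:
# 40-patient external cohort, 20 patients per class,
# per-class accuracy 0.75 for TR=0 and 0.55 for TR=1.
n_per_class = 20
correct_tr0 = round(0.75 * n_per_class)   # 15 correctly classified TR=0 cases
correct_tr1 = round(0.55 * n_per_class)   # 11 correctly classified TR=1 cases
overall_accuracy = (correct_tr0 + correct_tr1) / (2 * n_per_class)
print(overall_accuracy)  # -> 0.65, matching the reported overall accuracy
```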
Figure 4: Confusion matrix evaluating our model's performance on an external testing cohort of 40 patients. The model correctly predicted the positive cases (class TR = 0) with an accuracy of 0.75 and the negative cases (class TR = 1) with an accuracy of 0.55. The model shows an overall accuracy of 0.65.

7. Declarations

7.0.1. Conflict of interest
The authors declare that they have no conflict of interest.

7.0.2. Ethical approval
This work is part of the PNRR-MAD-2022-12376574 project Under-XAI: understanding ovarian cancer initiation and progression through explainable AI, and was exempted from ethical committee approval by the National Ministry of Health. Furthermore, the European Institute of Oncology has implemented a broad consent which allows all the institute's patients to be included in the study, except those who explicitly refused to sign the informed consent.

7.0.3. Informed consent
Informed consent was obtained from all individual participants included in the study.

References
 [1] R. Siegel, D. Naishadham, A. Jemal, Cancer statistics, 2013, CA: A Cancer Journal for Clinicians 63 (2013) 11–30.
 [2] F. Heitz, P. Harter, P. F. Alesina, M. K. Walz, D. Lorenz, H. Groeben, S. Heikaus, A. Fisseler-Eckhoff, S. Schneider, B. Ataseven, et al., Pattern of and reason for postoperative residual disease in patients with advanced ovarian cancer following upfront radical debulking surgery, Gynecologic Oncology 141 (2016) 264–270.
 [3] B. J. Erickson, P. Korfiatis, Z. Akkus, T. L. Kline, Machine learning for medical imaging, RadioGraphics 37 (2017) 505–515.
 [4] R. J. Gillies, P. E. Kinahan, H. Hricak, Radiomics: images are more than pictures, they are data, Radiology 278 (2016) 563–577.
 [5] M. R. Tomaszewski, R. J. Gillies, The biological meaning of radiomic features, Radiology 298 (2021) 505–516.
 [6] B. J. Erickson, P. Korfiatis, T. L. Kline, Z. Akkus, K. Philbrick, A. D. Weston, Deep learning in radiology: does one size fit all?, Journal of the American College of Radiology 15 (2018) 521–526.
 [7] S. Wang, Z. Liu, Y. Rong, B. Zhou, Y. Bai, W. Wei, M. Wang, Y. Guo, J. Tian, Deep learning provides a new computed tomography-based prognostic biomarker for recurrence prediction in high-grade serous ovarian cancer, Radiotherapy and Oncology 132 (2019) 171–177.
 [8] D. S. Kermany, M. Goldbaum, W. Cai, C. C. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan, et al., Identifying medical diagnoses and treatable diseases by image-based deep learning, Cell 172 (2018) 1122–1131.
 [9] M. S. Choi, B. S. Choi, S. Y. Chung, N. Kim, J. Chun, Y. B. Kim, J. S. Chang, J. S. Kim, Clinical evaluation of atlas- and deep learning-based automatic segmentation of multiple organs and clinical target volumes for breast cancer, Radiotherapy and Oncology 153 (2020) 139–145.
     for predicting overall survival in patients with high-grade serous ovarian cancer, Frontiers in Oncology 12 (2022) 986089.
[14] R. Lei, Y. Yu, Q. Li, Q. Yao, J. Wang, M. Gao, Z. Wu, W. Ren, Y. Tan, B. Zhang, et al., Deep learning magnetic resonance imaging predicts platinum sensitivity in patients with epithelial ovarian cancer, Frontiers in Oncology 12 (2022) 895177.
[15] P. Shrestha, B. Poudyal, S. Yadollahi, D. E. Wright, A. V. Gregory, J. D. Warner, P. Korfiatis, I. C. Green, S. L. Rassier, A. Mariani, et al., A systematic review on the use of artificial intelligence in gynecologic imaging – background, state of the art, and future directions, Gynecologic Oncology 166 (2022) 596–605.
[16] H. Veeraraghavan, H. A. Vargas, A. Jimenez-Sanchez, M. Micco, E. Mema, Y. Lakhman, M. Crispin-Ortuzar, E. P. Huang, D. A. Levine, R. N. Grisham, et al., Integrated multi-tumor radio-genomic marker of outcomes in patients with high serous ovarian carcinoma, Cancers 12 (2020) 3403.
[17] X.-p. Yu, L. Wang, H.-y. Yu, Y.-w. Zou, C. Wang, J.-w. Jiao, H. Hong, S. Zhang, MDCT-based radiomics features for the differentiation of serous borderline ovarian tumors and serous malignant ovarian tumors, Cancer Management and Research (2021) 329–336.
[18] H. An, Y. Wang, E. M. Wong, S. Lyu, L. Han, J. A. Perucho, P. Cao, E. Y. Lee, CT texture analysis in histological classification of epithelial ovarian carcinoma, European Radiology 31 (2021) 5050–5058.
[19] L. Qian, J. Ren, A. Liu, Y. Gao, F. Hao, L. Zhao, H. Wu, G. Niu, MR imaging of epithelial ovarian cancer:
[10] Y.-T. Jan, P.-S. Tsai, W.-H. Huang, L.-Y. Chou, S.-C.           a combined model to predict histologic subtypes,
     Huang, J.-Z. Wang, P.-H. Lu, D.-C. Lin, C.-S. Yen,              European Radiology 30 (2020) 5815–5825.
     J.-P. Teng, et al., Machine learning combined with         [20] Y. Liu, Y. Zhang, R. Cheng, S. Liu, F. Qu, X. Yin,
     radiomics and deep learning features extracted from             Q. Wang, B. Xiao, Z. Ye, Radiomics analysis of
     ct images: a novel ai model to distinguish benign               apparent diffusion coefficient in cervical cancer: a
     from malignant ovarian tumors, Insights into Imag-              preliminary study on histological grade evaluation,
     ing 14 (2023) 68.                                               Journal of Magnetic Resonance Imaging 49 (2019)
[11] H. Lu, M. Arshad, A. Thornton, G. Avesani, P. Cun-              280–290.
     nea, E. Curry, F. Kanavati, J. Liang, K. Nixon,            [21] J. Wasserthal, H.-C. Breit, M. T. Meyer, M. Pradella,
     S. T. Williams, et al., A mathematical-descriptor               D. Hinck, A. W. Sauter, T. Heye, D. T. Boll, J. Cyr-
     of tumor-mesoscopic-structure from computed-                    iac, S. Yang, et al., Totalsegmentator: Robust seg-
     tomography images annotates prognostic-and                      mentation of 104 anatomic structures in ct images,
     molecular-phenotypes of epithelial ovarian cancer,              Radiology: Artificial Intelligence 5 (2023).
     Nature communications 10 (2019) 764.
[12] M. Crispin-Ortuzar, R. Woitek, M. A. Reinius,
     E. Moore, L. Beer, V. Bura, L. Rundo, C. McCague,
     S. Ursprung, L. Escudero Sanchez, et al., Integrated
     radiogenomics models predict response to neoad-
     juvant chemotherapy in high grade serous ovarian
     cancer, Nature communications 14 (2023) 6756.
[13] Y. Zheng, F. Wang, W. Zhang, Y. Li, B. Yang, X. Yang,
     T. Dong, Preoperative ct-based deep learning model