<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Automatic Evaluation of Cancer Reduction During Radiotherapy</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Valerio Bellandi</string-name>
          <email>valerio.bellandi@unimi.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Siccardi</string-name>
          <email>stefano.siccardi@unimi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Consorzio Interuniversitario Nazionale per l'Informatica</institution>
          ,
          <addr-line>Via Ariosto, 25, Roma</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Università Degli Studi di Milano, Department of Computer Science</institution>
          ,
          <addr-line>Via Celoria 18, Milano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>9</fpage>
      <lpage>11</lpage>
      <abstract>
        <p>In this study, we present an automated approach for monitoring tumor volume reduction during radiotherapy, aiming to optimize radiation dosing based on patient-specific responses. Conventional radiotherapy plans are static, often missing early and subtle volume changes detectable through imaging. Our proposed system compares planning CT scans with lower-resolution CBCT images acquired before each session, automatically delineating pathological structures and computing volume changes. If a variation higher than a predefined threshold is detected, an alert is issued for potential treatment adaptation. We developed and tested various image preprocessing and contour refinement methods, and introduced a supervised learning pipeline to (i) predict the presence of pathological structures and (ii) select the most appropriate contouring algorithm per case. Using synthetic data, we achieved promising classification performance and volume trend alignment with manual annotations. Future work will focus on real-patient data validation, inter-patient generalization, and algorithm fine-tuning to enhance adaptive radiotherapy decision-making.</p>
      </abstract>
      <kwd-group>
        <kwd>Structure Segmentation</kwd>
        <kwd>CBCT Analysis</kwd>
        <kwd>Medical Image Processing</kwd>
        <kwd>Machine Learning in Healthcare</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Radiotherapy is a cornerstone of cancer treatment, often administered in repeated sessions over several
weeks. During this time, the pathological anatomy of a patient—particularly the volume and shape
of tumors—can change markedly. These changes may be due to therapeutic effectiveness, weight
loss, organ motion, or other biological factors. However, traditional radiotherapy plans are defined at
the outset based on a simulation CT scan, and they generally remain fixed throughout the treatment
unless clinicians intervene manually. This static approach may lead to suboptimal outcomes, including
unnecessary exposure of healthy tissue to radiation or an insufficient dose to the tumor if important
anatomical changes go undetected.</p>
      <p>In current clinical practice, adaptation of the therapy plan is often based on visual inspection of
pre-session images such as Cone Beam CT (CBCT) scans. While these provide useful anatomical
information, visual inspection is inherently limited by human perceptual thresholds, time constraints,
and inter-observer variability. Consequently, only large and easily observable changes are typically
acted upon, while smaller but clinically relevant variations may be missed.</p>
      <p>The aim of our research is to introduce an automated, data-driven system for monitoring tumor
volume changes during radiotherapy. Our system compares pre-session CBCT images with the original
planning CT scans, automatically detects and contours pathological structures, computes volume
differences, and triggers alerts when changes exceed a configurable threshold. This alert mechanism
enables clinical staff to reevaluate the treatment plan in a timely manner, potentially restarting the
planning process to ensure optimal dosing.</p>
      <p>To achieve this, we implement a multi-step image analysis pipeline that includes image preprocessing,
structure segmentation within a defined Region of Interest (ROI), volume computation, and decision
logic based on thresholded volume variation. Beyond basic automation, we also integrate machine
learning techniques to enhance the robustness of our system: we train classifiers to predict the presence
of the target structure in each image slice and to select the best segmentation method for that specific
context.</p>
      <p>Our initial experiments using synthetic data demonstrate that automated monitoring can reliably
track tumor regression trends and approximate manual annotations. These results suggest that such
a system can serve as a valuable aid in offline adaptive radiotherapy, reducing the need for manual
interventions while preserving patient safety and improving therapeutic precision.</p>
      <p>In the following sections, we provide an overview of related work, describe the pipeline and
learning-based enhancements, report experimental findings, and outline our plans for validating the approach
on real patient data.</p>
    </sec>
    <sec id="sec-2">
      <title>2. State of the Art</title>
      <p>
        Research in image processing in medicine has a long history. For instance, a review of techniques for
diagnostic imaging can be found in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which covers topics such as the importance of image quality, analogue
and digital imaging systems, image processing and analysis, and 3D images. The important topic of
image denoising, whose objective is to extract information about the scene being imaged, has been discussed e.g.
in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Artificial Intelligence has been applied to medical image processing for image
segmentation, to find specific structures or regions of interest; for identifying abnormalities; for enhancing
the quality of medical images; for tailoring treatment plans based on individual patient characteristics
and response to therapy; for predicting disease progression and treatment response; for detecting artifacts in
images; and for continuous monitoring of disease progression and treatment response over time, enabling
timely adjustments to the treatment plan (see e.g. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]). Many of these techniques have been used
specifically in radiation therapy.
      </p>
      <p>
        Several studies are devoted to tumor detection and segmentation, for instance see [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
especially for liver cancer and metastases and [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] for a survey. These methods can be used at several
stages of the patient’s history, from cancer detection to treatment planning and evaluation of the response
to the therapy. The latter point introduces our focus: adaptive radiotherapy, which consists of adjusting
treatment plans according to changes such as organ deformation, weight loss, tumor shrinkage, and even
biological changes, in order to reduce toxicity. It can be conducted in two ways: online (adjustments
made during treatment sessions) and offline (adjustments made between treatment sessions), see e.g.
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] for an up-to-date review. The present study is in the field of offline adaptive radiotherapy.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. The basic pipeline</title>
      <p>In modern radiotherapy, patient treatment is carried out over multiple sessions, requiring careful
planning, monitoring, and adaptation to anatomical changes. To support this process, a data processing
pipeline is employed, which ensures that therapeutic plans remain accurate throughout the treatment
cycle. The pipeline for a typical course of radiotherapy sessions is as follows:
1. During the planning session, a Computerized Tomography (CT) scan is performed. The contours
of all relevant structures are manually delineated, and the therapeutic plan is defined.
2. The resulting images and identified structures are stored in the alert system; structure volumes
are computed and saved.
3. Before each therapy session, a lower-resolution CT scan is performed.
4. These images are also stored in the alert system. Pathological structure contours are automatically
detected, and the resulting volumes are compared with those obtained during the planning session.
5. If volume changes exceed a predefined threshold, an alert is issued, allowing the operator to
review the situation and, if necessary, restart the process from step 1.</p>
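      <p>As an illustration, the alert logic of steps 2 to 5 reduces to a volume comparison against a configurable threshold. The sketch below is our own; the function names and the 20% example threshold are illustrative assumptions, not part of any clinical protocol.</p>

```python
# Illustrative sketch of the alert logic in steps 2-5.
# The 20% default threshold is an assumed example, not a clinical standard.

def relative_change(planning_volume: float, session_volume: float) -> float:
    """Relative volume change with respect to the planning-scan volume."""
    return (session_volume - planning_volume) / planning_volume


def check_session(planning_volume: float, session_volume: float,
                  threshold: float = 0.20) -> bool:
    """Issue an alert (True) when the absolute relative change exceeds the threshold."""
    return abs(relative_change(planning_volume, session_volume)) > threshold


# Example: planning volume 100 cm^3, session volume 72 cm^3 (28% shrinkage).
print(check_session(100.0, 72.0))  # True: the operator should review the plan
```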
      <p>Steps 1 and 3 are already part of routine clinical practice. The CT scan in step 1 is called the simulation
scan, while the CT scans in step 3 are called CBCT scans. CT devices save both images and structure
contours to files in standard formats, most commonly DICOM. These files can be processed using
well-known open-source and proprietary tools. For example, we used 3D Slicer [9] and its SlicerRT
plugin [10], which is dedicated to Radiation Therapy, to prepare Fig. 1.</p>
      <p>The devices also store, in the same format, information that allows CBCT scans to be aligned with
the simulation scan, a process known as registration. This step is essential to ensure that structures
from different sessions can be accurately compared.</p>
      <p>Step 4 consists of identifying image contours based on grayscale differences. This task must be
performed carefully, since multiple structures can appear in the same image. To address this, we define
a Region of Interest (ROI), bounded by the structure contours identified during the planning session.</p>
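      <p>A minimal sketch of how such a ROI can be derived, assuming the planning contours have been rasterized to a binary mask (the helper name and the optional margin parameter are our own):</p>

```python
import numpy as np

# Sketch: derive a rectangular ROI from a planning-contour binary mask.
# The mask representation and helper name are assumptions for illustration.

def roi_from_mask(mask: np.ndarray, margin: int = 0) -> tuple:
    """Bounding box (r0, r1, c0, c1) of the nonzero mask, padded by `margin`."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    r0 = max(r0 - margin, 0)
    c0 = max(c0 - margin, 0)
    r1 = min(r1 + margin, mask.shape[0] - 1)
    c1 = min(c1 + margin, mask.shape[1] - 1)
    return int(r0), int(r1), int(c0), int(c1)


mask = np.zeros((10, 10), dtype=bool)
mask[3:6, 4:8] = True            # structure occupies rows 3-5, columns 4-7
print(roi_from_mask(mask))       # (3, 5, 4, 7)
```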
      <p>Several methods were tested to refine the contours, namely:</p>
      <sec id="sec-3-1">
        <p>1. isotropically enlarging the ROI by a few millimetres
2. smoothing the ROI
3. sharpening the ROI contrast
4. inverting the colours, to find dark structures such as the lungs</p>
        <p>At present, these methods have been tested on synthetic images; real patient data will be used in
future work. Figure 1 (left panel) shows the contours of a pathological lung structure, with the outer
red region representing the planning session contours and the inner yellow region representing the
contours automatically detected during session 2 in a 2D CT slice. The corresponding 3D rendering of
the structure is shown in the right panel.</p>
        <p>In step 5, volumes are computed in the standard way: multiplying the structure area of each slice by
the slice thickness for inner slices, and by half the slice thickness for the outer slices.</p>
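        <p>The slice-based volume computation of step 5 can be sketched as follows, assuming the per-slice areas are already available (the function name and the single-slice fallback are illustrative):</p>

```python
# Volume from per-slice structure areas: full slice thickness for inner
# slices, half thickness for the two outer slices.

def structure_volume(slice_areas, thickness):
    if not slice_areas:
        return 0.0
    if len(slice_areas) == 1:
        # Degenerate single-slice case: assumed full thickness (our choice).
        return slice_areas[0] * thickness
    inner = sum(slice_areas[1:-1])
    outer = slice_areas[0] + slice_areas[-1]
    return inner * thickness + outer * thickness / 2.0


# Areas in mm^2 on four slices, slice thickness 3 mm:
print(structure_volume([100.0, 120.0, 110.0, 90.0], 3.0))  # 975.0 mm^3
```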
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Refining the Structures</title>
      <p>The described pipeline yielded promising results, as shown in Fig. 2, left panel. In this case, the images
were generated to simulate a patient showing a large volume reduction of the pathological structure,
so that the operators would decide to make two extra planning sessions to reduce doses at the 3rd
and 17th sessions. The volumes found during the planning sessions (red line) are compared to those
computed using automatically found contours for all the CBCT scans (blue line). It can be seen that
the trends agree; however, we noted that some points needed to be refined.</p>
      <p>First of all, the pathological structure may also shrink vertically; in such cases there are slices at
some height with contours at planning time (simulation CT scan) where nothing should be found at
therapy time. However, as we define the ROI boundary using the contours of the simulation scan, the
programs often erroneously find some contours there.</p>
      <p>Another point is that different algorithms yield different contours, each with a varying goodness
of approximation depending on the images; we would like to have a way to choose the best-suited
algorithm for each case.</p>
      <p>For these reasons, we decided to perform two learning procedures: the first to create a
model that checks whether a structure should be searched for in a given image, the second to choose the
best-suited algorithm to find its contours. We opted for supervised learning, with correct structures
manually drawn in the CBCT images.</p>
      <p>Fig. 2, right panel, shows the volume trends for:</p>
      <sec id="sec-4-1">
        <p>1. the simulation scan contours (yellow line)
2. the contours found with each of the algorithms described above (base search for contours,
enlarging the ROI by 6 mm in each direction, smoothing the image, and sharpening it; we did not
invert the colours, as the structure was not dark)
3. the manual contours drawn in the CBCT for learning (green line)
4. the contours drawn using the algorithms predicted by the learning model (blue line)</p>
      </sec>
      <sec id="sec-4-2">
        <p>The advantage of the learning-based approach over the other methods is evident.</p>
        <p>We now describe the learning procedure.</p>
        <p>We ran all the algorithms on the CBCT images at vertical positions where a simulation contour
could be found. This contour was used to define the ROI where the new contour should be found. If no
manual contour existed for the CBCT image, we marked the image as "no contour", otherwise as "contour".
This binary information was the target to predict for the first model.</p>
        <p>For images with a manual contour, we considered the region M delimited by it and, for each
computed contour, the delimited region C. Then we computed:
r1 = a(M ∩ C) / a(M);   r2 = 1 − a(C − M) / a(C)
(1)
where a(·) denotes the area, so that r1 = 1 if the computed contour covers the whole manual one, and
r2 = 1 if the computed contour does not cross the manual one. We used r = r1 · r2 as the algorithm
ranking to be predicted by the second model.</p>
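        <p>Equation (1) can be evaluated directly on binary masks; the sketch below uses pixel counts as areas (the names and mask representation are our assumptions):</p>

```python
import numpy as np

# Sketch of the ranking in Eq. (1) on binary masks: M is the region of the
# manual contour, C the region of a computed contour; a(.) is the pixel count.

def contour_rank(M: np.ndarray, C: np.ndarray) -> float:
    a = np.count_nonzero
    r1 = a(M & C) / a(M)          # 1 when C covers all of M
    r2 = 1.0 - a(C & ~M) / a(C)   # 1 when C does not leave M
    return r1 * r2


M = np.zeros((8, 8), dtype=bool)
M[2:6, 2:6] = True                # 16-pixel manual region
C = np.zeros((8, 8), dtype=bool)
C[2:6, 2:4] = True                # 8-pixel computed region, inside M
print(contour_rank(M, C))         # r1 = 8/16, r2 = 1 -> 0.5
```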
        <p>As features, we used a set of characteristics of the input image:
1. statistical metrics: mean, standard deviation (std), signal-to-noise ratio (mean/std), contrast
((max − min)/(max + min))
2. structural metrics: edge density (a measure of sharpness) and local variance (a texture measure)
3. noise metrics: Laplacian noise estimation and background noise estimation</p>
      </sec>
      <sec id="sec-4-3">
        <p>The above metrics were computed for a rectangular region surrounding the ROI.</p>
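        <p>A plain NumPy sketch of such features follows; the exact estimators used in the study may differ, and the helper below is illustrative only:</p>

```python
import numpy as np

def image_features(region: np.ndarray) -> dict:
    """Illustrative NumPy stand-ins for the slice features (not necessarily
    the exact estimators used in the study)."""
    region = region.astype(float)
    mean, std = region.mean(), region.std()
    # Structural: edge density as the fraction of pixels with a large gradient.
    gy, gx = np.gradient(region)
    grad = np.hypot(gx, gy)
    edge_density = float((grad > grad.mean()).mean())
    # Structural: local variance (texture) from 2x2 block means.
    h, w = region.shape
    blocks = region[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2)
    local_variance = float(blocks.mean(axis=(1, 3)).var())
    # Noise: standard deviation of a discrete 4-neighbour Laplacian response.
    lap = (-4.0 * region[1:-1, 1:-1] + region[:-2, 1:-1] + region[2:, 1:-1]
           + region[1:-1, :-2] + region[1:-1, 2:])
    return {
        "mean": float(mean),
        "std": float(std),
        "snr": float(mean / std) if std > 0 else 0.0,
        # Small epsilon guards against division by zero on all-black regions.
        "contrast": float((region.max() - region.min())
                          / (region.max() + region.min() + 1e-12)),
        "edge_density": edge_density,
        "local_variance": local_variance,
        "laplacian_noise": float(lap.std()),
    }


features = image_features(np.arange(16, dtype=float).reshape(4, 4))
print(features["mean"], features["snr"])
```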
        <p>For this exploratory analysis, we used 80% of the data of our simulated patient to train a random
forest classifier for each task, and the remaining 20% for testing. We realized that the contrast feature had
very low importance and dropped it.</p>
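        <p>The training setup can be sketched with scikit-learn as follows; the synthetic data only illustrates the shapes involved, not our actual features or labels:</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Sketch of one of the two supervised tasks on an 80/20 split.
# The random data stands in for the per-slice image features; real rows
# would come from the metrics described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                     # six image features per slice
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # synthetic "contour present" labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))          # held-out accuracy on the 20% test split
print(clf.feature_importances_)       # per-feature importances (cf. Fig. 3)
```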
        <p>Fig. 3 shows the estimated importance of the features for both models; model performance indicators
can be seen in Tables 1 and 2.</p>
        <p>The computed models were then applied to the whole set of images to find an optimized structure.
The result was used to compute the volumes shown in Fig. 2, predicted line.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Future Work</title>
      <p>This study presents a promising approach to enhancing radiotherapy through automated monitoring of
tumor volume reduction using image analysis and machine learning techniques. By comparing planning
CT scans with pre-session CBCT images, our system automatically detects and segments pathological
structures, estimates volumetric changes, and generates alerts when deviations above the established
threshold are observed. This enables clinicians to consider adaptive replanning more promptly and
with greater precision.</p>
      <p>Our preliminary results, based on synthetic data, show that the proposed pipeline can closely
approximate manual contours and accurately track volume trends over multiple sessions. Furthermore,
the introduction of supervised learning models markedly improves the system’s ability to detect
structure presence and select the most suitable segmentation method on a case-by-case basis.</p>
      <p>Although the current findings are exploratory, they lay the groundwork for future development of a
robust decision-support tool in offline adaptive radiotherapy. Moving forward, we plan to validate our
system on real patient datasets, expand the training to include multiple patients for better generalization,
and refine the image processing algorithms through parameter tuning and expert-guided ranking.</p>
      <p>Ultimately, our goal is to support clinicians in delivering more personalized, responsive, and effective
radiotherapy, reducing unnecessary radiation exposure and improving patient outcomes. Although the
present work is exploratory, it shows promising results. We plan to extend it in several directions:</p>
      <sec id="sec-5-1">
        <p>1. we need to replicate the analysis for a suitable number of real patients
2. instead of using subsets of images of the same patient for training and testing, the algorithm
should be trained on some patients and tested on others
3. we will extend the ranking based on areas (1) with a manual ranking by domain experts
4. we will extend the training to fine-tune the algorithms</p>
        <p>Regarding the last point, we note that, for example, for smoothing we just used a bilateral filter with fixed
parameters, and for sharpening a 2D filter with a simple fixed kernel. For smoothing one might also consider
blurring, Gaussian blurring, etc.; moreover, one might tune the parameters of each algorithm. The
same applies to sharpening.</p>
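        <p>As an example of such tunable parameters, a fixed sharpening kernel and a Gaussian smoothing kernel can be written as below. In the pipeline we used a bilateral filter for smoothing; the Gaussian kernel here stands in as one of the alternatives just mentioned, and the naive convolution helper is illustrative only. Kernel size, sigma, and the kernel weights are exactly the parameters one might tune.</p>

```python
import numpy as np

def convolve2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'same'-size convolution with zero padding (illustration only)."""
    kh, kw = kernel.shape
    padded = np.pad(img.astype(float), ((kh // 2,), (kw // 2,)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out


def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Normalized 2D Gaussian kernel; `size` and `sigma` are the tuning knobs."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()


# A common fixed sharpening kernel (identity plus a Laplacian term).
sharpen = np.array([[0.0, -1.0, 0.0],
                    [-1.0, 5.0, -1.0],
                    [0.0, -1.0, 0.0]])

img = np.ones((6, 6))
smoothed = convolve2d(img, gaussian_kernel(5, 1.0))
sharpened = convolve2d(img, sharpen)
```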
      <p>In parallel, we plan to implement an online system that could be used by physicians of several
hospitals to routinely get automated alerts when large changes are found in the patients’ images.
The system will be equipped with tools to upload images with or without manual contours, request
contour detection and volume computation, review images and contours, obtain volume plots and reports,
and assign a rank to contours in order to retrain the models that choose the best algorithms. When a
new CT scan is uploaded, the system will automatically draw contours, recompute the volume, and
send an alert if the change exceeds the defined thresholds. The physicians will visually review the images and
decide whether the therapeutic plan must be revised. The system is sketched in Fig. 4. Gray boxes represent
operations that are already performed in normal practice. The dotted box encloses time-consuming
manual checks that might not be routinely performed; the system’s goal is to help the therapists check
in depth only the cases where a large volume change is likely.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work is partially supported by i) the Università degli Studi di Milano within the program “Piano di
sostegno alla ricerca”, ii) the MUSA – Multilayered Urban Sustainability Action – project, funded by
the European Union – NextGenerationEU, under the National Recovery and Resilience Plan (NRRP)
Mission 4 Component 2 Investment Line 1.5: Strengthening of research structures and creation of R&amp;D
“innovation ecosystems”, set up of “territorial leaders in R&amp;D”, and iii) the project SERICS (PE00000014)
under the MUR NRRP funded by the EU - NextGenerationEU.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <sec id="sec-7-1">
        <p>The author(s) have not employed any Generative AI tools.</p>
        <p>[9] A. Fedorov, R. Beichel, et al., 3D Slicer as an image computing platform for the Quantitative Imaging
Network, Magnetic Resonance Imaging 30 (2012).
[10] C. Pinter, A. Lasso, A. Wang, D. Jaffray, G. Fichtinger, SlicerRT: Radiation therapy research toolkit
for 3D Slicer, Medical Physics 39 (2012) 6332–6338. URL: https://aapm.onlinelibrary.wiley.com/doi/
10.1118/1.4754659. doi:10.1118/1.4754659.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Doi</surname>
          </string-name>
          ,
          <article-title>Diagnostic imaging over the last 50 years: research and development in medical imaging science and technology</article-title>
          ,
          <source>Physics in Medicine &amp; Biology</source>
          <volume>51</volume>
          (
          <year>2006</year>
          )
          <fpage>R5</fpage>
          . URL: https://dx.doi.org/10.1088/0031-9155/51/13/R02. doi:10.1088/0031-9155/51/13/R02.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Goel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Yadav</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. M.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Medical image processing: A review, in: 2016 Second International Innovative Applications of Computational Intelligence on Power, Energy and Controls with their Impact on Humanity (CIPECH</article-title>
          ),
          <year>2016</year>
          , pp.
          <fpage>57</fpage>
          -
          <lpage>62</lpage>
          . doi:10.1109/CIPECH.2016.7918737.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Obuchowicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Strzelecki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piórkowski</surname>
          </string-name>
          ,
          <article-title>Clinical applications of artificial intelligence in medical imaging and image processing-a review</article-title>
          ,
          <source>Cancers</source>
          <volume>16</volume>
          (
          <year>2024</year>
          ). URL: https://www.mdpi.com/journal/cancers. doi:10.3390/cancers16101870.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Chlebus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schenk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Moltz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>van Ginneken</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. K.</given-names>
            <surname>Hahn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Meine</surname>
          </string-name>
          ,
          <article-title>Automatic liver tumor segmentation in ct with fully convolutional neural networks and object-based postprocessing</article-title>
          ,
          <source>Scientific Reports</source>
          <volume>8</volume>
          (
          <year>2018</year>
          )
          <fpage>15497</fpage>
          . doi:10.1038/s41598-018-33860-7.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Wesdorp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Zeeuw</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C. J.</given-names>
            <surname>Postma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Roor</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. H. T. M. van Waesberghe</surname>
            , J. E. van den Bergh,
            <given-names>I. M.</given-names>
          </string-name>
          <string-name>
            <surname>Nota</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Moos</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Kemna</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Vadakkumpadan</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Ambrozic</surname>
            ,
            <given-names>S. van Dieren</given-names>
          </string-name>
          ,
          <string-name>
            <surname>M. J. van Amerongen</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Chapelle</surname>
            ,
            <given-names>M. R. W.</given-names>
          </string-name>
          <string-name>
            <surname>Engelbrecht</surname>
            ,
            <given-names>M. F.</given-names>
          </string-name>
          <string-name>
            <surname>Gerhards</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Grunhagen</surname>
          </string-name>
          ,
          <string-name>
            <surname>T. M. van Gulik</surname>
            ,
            <given-names>J. J.</given-names>
          </string-name>
          <string-name>
            <surname>Hermans</surname>
          </string-name>
          , K. P. de Jong,
          <string-name>
            <surname>J. M. Klaase</surname>
            ,
            <given-names>M. S. L.</given-names>
          </string-name>
          <string-name>
            <surname>Liem</surname>
            ,
            <given-names>K. P. van Lienden</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>I. Q.</given-names>
            <surname>Molenaar</surname>
          </string-name>
          , G. Kazemier,
          <article-title>Deep learning models for automatic tumor segmentation and total tumor volume assessment in patients with colorectal liver metastases</article-title>
          ,
          <source>European Radiology Experimental</source>
          <volume>7</volume>
          (
          <year>2023</year>
          )
          <fpage>75</fpage>
          . doi:10.1186/s41747-023-00383-4.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Balaguer-Montero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Marcos</given-names>
            <surname>Morales</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ligero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zatse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Leiva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Atlagich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Staikoglou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Viaplana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Monreal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mateo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hernando</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García-Álvarez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Salvà</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Capdevila</surname>
          </string-name>
          , E. Elez,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dienstmann</surname>
          </string-name>
          , E. Garralda,
          <string-name>
            <given-names>R.</given-names>
            <surname>Perez-Lopez</surname>
          </string-name>
          ,
          <article-title>A ct-based deep learning-driven tool for automatic liver tumor detection and delineation in patients with cancer</article-title>
          ,
          <source>Cell Reports Medicine</source>
          <volume>6</volume>
          (
          <year>2025</year>
          )
          <fpage>102032</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S2666379125001053. doi:10.1016/j.xcrm.2025.102032.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M. Z.</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Naqvi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Haider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Deep learning for automatic tumor lesions delineation and prognostic assessment in multi-modality pet/ct: A prospective survey</article-title>
          ,
          <source>Engineering Applications of Artificial Intelligence</source>
          <volume>123</volume>
          (
          <year>2023</year>
          )
          <fpage>106276</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0952197623004608. doi:10.1016/j.engappai.2023.106276.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>O. M.</given-names>
            <surname>Dona Lemus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cummings</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <article-title>Adaptive radiotherapy: Next-generation radiotherapy</article-title>
          ,
          <source>Cancers</source>
          <volume>16</volume>
          (
          <year>2024</year>
          ). URL: https://www.mdpi.com/2072-6694/16/6/1206. doi:10.3390/cancers16061206.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>