<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>November</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Multi-Stage Segmentation and Cascade Classification Methods for Improving Cardiac MRI Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vitalii Slobodzian</string-name>
          <email>vitalii.slobodzian@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavlo Radiuk</string-name>
          <email>radiukp@khmnu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksander Barmak</string-name>
          <email>barmako@khmnu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Iurii Krak</string-name>
          <email>iurii.krak@knu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Glushkov Cybernetics Institute</institution>
          ,
          <addr-line>40, Glushkov Ave., Kyiv, 03187</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Khmelnytskyi National University</institution>
          ,
          <addr-line>11, Institutes str., Khmelnytskyi, 29016</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>64/13, Volodymyrska str., Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>2</volume>
      <fpage>0</fpage>
      <lpage>21</lpage>
      <abstract>
<p>The segmentation and classification of cardiac magnetic resonance imaging are critical for diagnosing heart conditions, yet current approaches face challenges in accuracy and generalizability. In this study, we aim to further advance the segmentation and classification of cardiac magnetic resonance images by introducing a novel deep learning-based approach. Using a multi-stage process with U-Net and ResNet models for segmentation, followed by Gaussian smoothing, the method improved segmentation accuracy, achieving a Dice coefficient of 0.974 for the left ventricle and 0.947 for the right ventricle. For classification, a cascade of deep learning classifiers was employed to distinguish heart conditions, including hypertrophic cardiomyopathy, myocardial infarction, and dilated cardiomyopathy, achieving an average accuracy of 97.2%. The proposed approach outperformed existing models, enhancing segmentation accuracy and classification precision. These advancements show promise for clinical applications, though further validation and interpretation across diverse imaging protocols are necessary.</p>
      </abstract>
      <kwd-group>
        <kwd>cardiac MRI</kwd>
        <kwd>heart pathology</kwd>
        <kwd>deep learning</kwd>
        <kwd>segmentation</kwd>
        <kwd>Gaussian smoothing</kwd>
        <kwd>classification</kwd>
        <kwd>cascade</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Cardiovascular disease (CVD) remains the primary cause of global mortality, accounting for
approximately 17.9 million deaths annually [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Its substantial impact highlights an urgent demand
for effective diagnostic tools to detect and manage heart-related pathologies early. Cardiac magnetic
resonance imaging (MRI) has established itself as the gold standard in cardiac diagnostics, offering
non-invasive, high-resolution images of heart structures and functions. These capabilities make MRI
indispensable for identifying conditions such as myocardial infarction, cardiomyopathies, and
structural abnormalities [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ].
      </p>
      <p>
        Despite its strengths, cardiac MRI faces considerable challenges. The heart's intricate anatomy
and its continuous motion due to respiration and heartbeat introduce artifacts that compromise
image clarity. Additional factors, such as the presence of metal implants or equipment-induced
distortions, further complicate accurate image interpretation [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. These issues often require
labor-intensive image preprocessing and corrections, thereby increasing the cost and time required for
analysis.
      </p>
      <p>
        Artificial intelligence (AI) has emerged as a transformative technology in medical imaging,
demonstrating its ability to automate complex tasks and identify subtle abnormalities that may elude
human observers. Deep learning (DL), in particular, has shown remarkable potential for tasks such
as image segmentation and classification, offering high accuracy and consistency [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However, the
integration of AI into medical workflows faces several obstacles, including the need for extensive
annotated datasets, concerns about data privacy, and challenges in adapting AI models to diverse
clinical environments [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>The primary issue in cardiac MRI processing is the difficulty in achieving accurate segmentation
and classification of MRI scans due to motion artifacts, complex heart anatomy, and existing model
limitations. Existing solutions often struggle with issues like image artifacts, poor segmentation in
complex cases, and the inability to accurately classify various heart conditions due to segmentation
errors. Thus, this study aims to address these challenges by introducing an innovative approach to
cardiac MRI analysis. Specifically, the objective is to design novel methods that deliver highly
accurate segmentation and classification performance, ultimately advancing clinical
decision-making.</p>
      <p>The structure of the paper is as follows: Section 2 reviews the state-of-the-art techniques in
cardiac MRI segmentation and classification, highlighting advancements and limitations. In Section
3, the manuscript introduces a multi-stage segmentation process using U-Net and ResNet models,
followed by a cascade classification system. Section 4 presents improved segmentation accuracy
through mask localization and postprocessing, alongside high classification precision. Finally,
Section 5 summarizes the study's findings, emphasizing its contributions to enhancing cardiac MRI
analysis and discussing potential limitations and future research directions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
      <p>
        DL has completely transformed medical image analysis by uncovering complex patterns in data that
traditional methods struggle to identify. Models like U-Net [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and ResNet [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] have been instrumental
in achieving accurate image segmentation, even when trained on limited datasets. U-Net's
encoder-decoder architecture, for instance, efficiently captures both global and local image features. However,
these models often demand significant computational resources and rely on substantial training data
to achieve optimal performance [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        Recent trends emphasize building trust in AI systems by introducing human-in-the-loop [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and
human-centric approaches [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. While these hybrid techniques improve interpretability and
reliability, they increase the complexity of deployment. Additionally, combining deep learning with
traditional methods, such as active contour modeling, enhances segmentation precision but adds to
computational overhead [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        In the field of cardiac MRI, multimodal approaches that integrate data from various imaging
modalities, such as CT and MRI, have shown promise [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. While these methods improve
segmentation outcomes, their reliance on datasets from different imaging sources creates significant
integration challenges [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. For instance, Hu et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] developed a deeply supervised network paired
with a 3D Active Shape Model that reduces manual initialization efforts. Despite its effectiveness,
the method's high computational demands and lack of validation across imaging protocols limit its
broader applicability. da Silva et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] introduced a cascade approach utilizing DL models for
automatic segmentation of cardiac structures in short-axis cine-MRIs, achieving enhanced
segmentation accuracy; however, it may face limitations such as increased computational complexity
and reduced generalizability due to reliance on high-quality training data.
      </p>
      <p>
        In addition, recent enhancements to U-Net, such as attention mechanisms [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] and residual
connections [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], have further boosted their performance in cardiac MRI segmentation. These
improvements allow the model to better focus on relevant regions and handle variations in heart
anatomy. However, challenges remain in terms of computational efficiency and robustness to
imaging artifacts.
      </p>
      <p>
        Segmentation and classification are often treated as isolated tasks, but recent works aim to
combine these processes. Sander et al. [20] addressed segmentation errors with a corrective
framework that requires manual intervention, increasing workflow complexity. Ammar et al. [
        <xref ref-type="bibr" rid="ref20">21</xref>
        ]
designed a combined segmentation-classification pipeline for diagnosing heart diseases, but its
reliance on high-quality segmentation introduces additional training burdens. Similarly, Zheng et al.
[
        <xref ref-type="bibr" rid="ref21">22</xref>
        ] utilized semi-supervised learning for explainable classification but encountered issues with
motion artifacts. Zhang et al. [
        <xref ref-type="bibr" rid="ref22">23</xref>
        ] leveraged dilated convolutions for multi-scale segmentation,
though their method struggled with overfitting and resource-intensive training.
      </p>
      <p>Existing approaches to cardiac MRI face several unresolved issues, including dependency on
highquality data, poor generalizability across diverse clinical environments, and the high computational
cost of model training and deployment. These limitations hinder the practical application of DL in
cardiac MRI analysis.</p>
      <p>The goal is to enhance the accuracy of heart structure segmentation and improve the
classification of conditions such as hypertrophic cardiomyopathy, myocardial infarction, and dilated
cardiomyopathy. The main contributions of this research are as follows:
• A multi-stage segmentation method combining U-Net and ResNet DL models for
localizing and segmenting heart structures, followed by postprocessing with Gaussian
smoothing to refine contours and reduce artifacts.
• An MRI classification method based on the DL cascade model for distinguishing between
heart conditions by leveraging segmented MRI data.
• Significant improvement in segmentation accuracy, achieving a Dice coefficient of up to
0.974 for left ventricle (LV) and 0.947 for right ventricle (RV) segmentation.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methods and materials</title>
      <p>In this study, we introduce a novel approach to the segmentation and classification of MRI scans,
involving a multi-stage process, as illustrated in Figure 1.</p>
      <p>
        The proposed approach is divided into two key stages. In the first stage, relevant heart parts are
segmented to extract critical anatomical features. In the second stage, a cascade of DL models [
        <xref ref-type="bibr" rid="ref23">24</xref>
        ] is
employed to classify the MRI scans, ultimately producing the predicted classes. The following
subsections detail each stage of the process, along with the materials and techniques used.
      </p>
      <p>The first stage of the process is presented as a novel method of MRI segmentation, while the
second stage is formalized as a new method of MRI classification. Below, we describe all stages of
the proposed approach in detail.</p>
      <sec id="sec-3-1">
        <title>3.1. Method of MRI segmentation</title>
        <p>The proposed method for heart segmentation on MRIs involves three key steps: localization, mask
generation, and post-processing to refine contours. First, existing masks are split into binary masks
for the myocardium, LV, and RV with a DL model used to identify the region for each fragment.
Then, DL helps refine the contours, and finally, the masks are combined into a single mask and
resized to their original dimensions for improved accuracy.</p>
        <p>These steps together provide an integrated approach (Figure 2), which increases the accuracy of
heart segmentation on MRI scans.</p>
        <p>Below is a detailed description of each step of the method.</p>
        <p>The input data for the process consists of MRI scans of the heart, where masks
representing different heart structures are provided. These masks depict the LV, RV, and
myocardium as distinct areas for analysis.</p>
        <p>Step 1. The localization part consists of decomposing the existing masks into separate binary
masks for different heart structures: myocardium, LV, and RV (Figure 3).</p>
        <p>This process allows each heart structure to be processed separately, improving segmentation
accuracy. Each binary mask focuses on a specific heart structure, where relevant pixels are marked
as 1, and all others are 0. This separation helps DL models target individual structures, reducing
interference from other parts of the image and simplifying the segmentation task, which boosts
accuracy and reduces computational complexity.</p>
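        <p>As an illustrative sketch of this decomposition (the integer label values 1–3 and their assignment to structures are our assumption, not stated in the paper), a multi-class mask can be split into per-structure binary masks as follows:</p>
```python
import numpy as np

# Assumed label convention (illustrative only): 1 = RV, 2 = myocardium, 3 = LV.
STRUCTURES = {1: "RV", 2: "myocardium", 3: "LV"}

def decompose_mask(mask):
    """Split a multi-class mask into one binary mask per heart structure:
    pixels belonging to the target structure become 1, all others 0."""
    return {name: (mask == label).astype(np.uint8)
            for label, name in STRUCTURES.items()}

multi = np.array([[0, 1, 1],
                  [2, 2, 3],
                  [3, 3, 0]])
binaries = decompose_mask(multi)
print(binaries["RV"].sum(), binaries["myocardium"].sum(), binaries["LV"].sum())  # 2 2 3
```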
        <p>For each mask, a separate DL model is trained to detect the location of a specific heart fragment,
working like an object detector to identify boundaries within the MRI scan. For example, the model
trained for the LV focuses only on locating that specific structure.</p>
        <p>
          The models are trained using the Fastai library [
          <xref ref-type="bibr" rid="ref24">25</xref>
          ] and pre-trained networks built on U-Net [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]
and ResNet [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] architectures, with the ResNet-34 version (34 layers) being used in this study. Images
are resized for uniformity before training, and the model is trained for 10 epochs, followed by
fine-tuning and an additional 10 epochs. This method improves accuracy by adjusting parameters, and
the resulting masks help center and localize the heart structures by adjusting the image's aspect ratio
and adding a 15% frame for better focus. The localization result for the LV is shown in Figure 4.
        </p>
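        <p>A minimal sketch of the localization crop with a 15% frame follows; the exact padding and border-clipping rules are our assumptions:</p>
```python
import numpy as np

def crop_with_margin(image, mask, margin=0.15):
    """Crop `image` to the bounding box of the nonzero region of `mask`,
    expanded by `margin` on every side and clipped to the image borders."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    pad_y = int(round((y1 - y0) * margin))
    pad_x = int(round((x1 - x0) * margin))
    y0, y1 = max(0, y0 - pad_y), min(image.shape[0], y1 + pad_y)
    x0, x1 = max(0, x0 - pad_x), min(image.shape[1], x1 + pad_x)
    return image[y0:y1, x0:x1]

img = np.arange(100 * 100).reshape(100, 100)
m = np.zeros((100, 100)); m[40:60, 30:70] = 1   # toy localization mask
crop = crop_with_margin(img, m)
print(crop.shape)  # (26, 52)
```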
        <p>As an outcome, the first phase yields localized images with marked regions of interest: the
myocardium, LV, and RV.</p>
        <p>Step 2. For cardiac mask generation, three models were trained separately, one for each heart
structure. These models take the localized images from step 1 as input and perform detailed region
delineation of each heart structure.</p>
        <p>Training here follows the same approaches and technologies as in step 1. Image localization helps
to operate with less data, boosting accuracy in determining heart structure contours. This
localization helps avoid noise and unrelated structures, allowing the DL model to capture finer
details, which is essential for this step's accuracy. Figure 5 shows the original input image, samples of
input localized images, and output masks from step 2.</p>
        <p>Therefore, the output of step 2 is segmented images containing masks of separately defined areas:
the myocardium, LV and RV.</p>
        <p>Step 3. Postprocessing focuses on refining and improving the quality of the generated masks. Since
the models are trained on uniformly resized images, they must be scaled back to their original
dimensions for proper comparison with the ground truth masks. However, simple resizing can cause
detail loss and artifacts, which affects the final evaluation. To address this, smoothing methods that
create smooth pixel transitions for a more natural appearance when resizing are used. In our case,
Gaussian smoothing offered an acceptable balance between performance and efficiency. It is
formalized by the following formula:
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)),   (1)</p>
        <p>where G(x, y) is the Gaussian filter value at point (x, y), σ stands for the standard deviation, which
specifies the intensity of smoothing, and (x, y) are the pixel coordinates.</p>
        <p>Linear regression is utilized to automatically identify the optimal value of σ in formula (1) for
each image size. Finally, the output of the proposed method consists of segmented images with
improved masks at their original size, enabling a more accurate comparison with the expert masks.</p>
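        <p>Step 3 can be sketched with standard tooling; the scipy calls, the σ value, and the 0.5 re-threshold below are illustrative assumptions (the paper fits σ per image size via linear regression):</p>
```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def upscale_mask_smooth(mask, target_shape, sigma=2.0):
    """Resize a binary mask to target_shape, then smooth it with a Gaussian
    filter (formula (1)) and re-threshold, softening resize artifacts."""
    factors = (target_shape[0] / mask.shape[0], target_shape[1] / mask.shape[1])
    resized = zoom(mask.astype(float), factors, order=1)  # bilinear upscaling
    smoothed = gaussian_filter(resized, sigma=sigma)
    return (smoothed >= 0.5).astype(np.uint8)

mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:44, 20:44] = 1                      # toy square standing in for a ventricle
full = upscale_mask_smooth(mask, (256, 256))
print(full.shape)  # (256, 256)
```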
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Method of MRI classification</title>
        <p>The proposed classification method detects abnormalities in LV and RV or confirms a normal state
by analyzing MRI scans across different cardiac cycle stages. Structured in multiple levels to
minimize class confusion and improve generalization, it incorporates critical anatomical features
such as tissue density, ventricular volume, and dynamic myocardium thickness.</p>
        <p>By leveraging segmentation results from the prior steps of the MRI segmentation method and
combining MRI scans and segmentation masks from both diastolic and systolic phases, the model
captures both geometric and texture details essential for accurate diagnosis. Each heart segment is
represented in separate RGB channels, aiding the DL model in analyzing structural and tissue
heterogeneity, with images interpolated to a consistent size to reduce noise and irrelevant details
before classification.</p>
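        <p>The channel layout described above can be sketched as follows; the specific ordering of LV, myocardium, and RV across the RGB channels is our assumption:</p>
```python
import numpy as np

def combine_to_rgb(image, lv_mask, myo_mask, rv_mask):
    """Place each segmented heart structure in its own RGB channel so the
    classifier sees structure-specific intensity patterns separately."""
    img = image.astype(np.float32)
    return np.stack([img * lv_mask, img * myo_mask, img * rv_mask], axis=-1)

img = np.full((64, 64), 0.8, dtype=np.float32)
lv = np.zeros((64, 64)); lv[20:30, 20:30] = 1
myo = np.zeros((64, 64)); myo[10:20, 10:20] = 1
rv = np.zeros((64, 64)); rv[40:50, 40:50] = 1
rgb = combine_to_rgb(img, lv, myo, rv)
print(rgb.shape)  # (64, 64, 3)
```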
        <p>Figure 6 shows the set of images that are typically fed into the DL model.</p>
        <sec id="sec-3-2-2">
          <p>
            To address the common issue of class imbalance in medical datasets, the proposed method uses a
cascading classification model, following the scheme in [
            <xref ref-type="bibr" rid="ref23">24</xref>
            ]. This approach helps improve
generalization in small datasets by training binary classifiers that focus on two specific classes at a
time, enhancing classification accuracy.
          </p>
          <p>The cascade consists of four classifiers:
1. The first classifier separates LV pathologies from RV pathologies and normal conditions,
allowing the model to focus on general LV features.
2. The second classifier distinguishes between RV abnormalities and normal conditions, further
refining the model s accuracy.
3. The third classifier differentiates hypertrophic cardiomyopathy from other LV pathologies.
4. The fourth classifier separates myocardial infarction-related pathologies from dilated
cardiomyopathy, which are often hard to tell apart, enabling the model to better distinguish
between them.</p>
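          <p>The routing logic of the four binary classifiers can be sketched as below; the classifier callables are stand-ins for the trained CNNs, and the toy flag-based stubs exist only for illustration:</p>
```python
def cascade_predict(x, is_alv, is_arv, is_hcm, is_dcm):
    """Route a sample through the cascade: classifier 1 (is_alv) splits LV
    pathologies from the rest, classifier 2 (is_arv) separates RV abnormality
    from the normal state, and classifiers 3 (is_hcm) and 4 (is_dcm) resolve
    the remaining LV pathologies."""
    if is_alv(x):
        if is_hcm(x):
            return "HCM"
        return "DCM" if is_dcm(x) else "MINF"
    return "ARV" if is_arv(x) else "NOR"

# Toy stand-ins: each "classifier" just inspects a dict of flags.
label = cascade_predict({"lv": True, "hcm": False, "dcm": True},
                        is_alv=lambda s: s["lv"],
                        is_arv=lambda s: s.get("arv", False),
                        is_hcm=lambda s: s["hcm"],
                        is_dcm=lambda s: s["dcm"])
print(label)  # DCM
```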
          <p>Figure 7 illustrates the application of all four classifiers for pathology identification.</p>
          <p>(Figure 7 flowchart: an input of size 42×64×64×3 enters the cascade; "Is ALV?" routes LV pathologies to "Is HCM?" and then "Is DCM?", yielding HCM, DCM, or MINF, while the other branch asks "Is ARV?" to yield ARV or NOR.)</p>
          <p>
            The proposed classifiers utilize the CNN model [
            <xref ref-type="bibr" rid="ref25">26</xref>
            ] adapted for the task of binary classification.
The architecture is schematically represented in Figure 8.
          </p>
          <p>The model architecture has 50 layers and includes essential components like an initial
convolutional layer for extracting basic features and normalization and activation layers to stabilize
learning. The first layer, Conv1, uses large filters to capture basic features like edges and textures,
followed by Conv2 through Conv5, which apply various filters to learn more complex and abstract
details at each stage.</p>
          <p>After these convolutional operations, global average pooling gathers all learned features into a
single vector, which is then passed to the final layer responsible for binary classification. This
multilayered processing allows the model to accurately analyze both simple and complex patterns in the
input data, making it highly suitable for classification tasks.</p>
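          <p>The global-average-pooling step described above can be sketched as a simple spatial mean; the feature-map size (7×7×128) is illustrative:</p>
```python
import numpy as np

def global_average_pool(features):
    """Average an (H, W, C) feature map over its spatial dimensions,
    producing one C-dimensional descriptor for the final binary classifier."""
    return features.mean(axis=(0, 1))

feature_map = np.random.default_rng(0).normal(size=(7, 7, 128)).astype(np.float32)
vector = global_average_pool(feature_map)
print(vector.shape)  # (128,)
```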
          <p>The overall method scheme is depicted in Figure 9. The method involves the following key steps.
The input data consists of modified images from the dataset, including MRI scans for each patient
during both the diastolic and systolic phases.</p>
          <p>Step 1: MRI scans are prepared by cropping to focus on the necessary heart segments, then resizing
them to a uniform dimension. The segmentation masks and images are combined, with each heart
segment placed in a separate channel.</p>
          <p>Step 2: The cascade of four classifiers is trained, with each classifier trained individually and
the data split into training and validation sets. Early stopping is used during training to prevent
overfitting by halting the process if the validation loss does not improve.</p>
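          <p>A generic early-stopping loop of the kind described can be sketched as follows; the patience value is illustrative, since the paper does not specify one:</p>
```python
def train_with_early_stopping(train_step, get_val_loss, max_epochs=100, patience=5):
    """Stop training when validation loss has not improved for `patience`
    consecutive epochs; return the best loss and the epochs actually run."""
    best, wait, epochs_run = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_step(epoch)
        loss = get_val_loss(epoch)
        epochs_run += 1
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best, epochs_run

# Synthetic loss curve: improves twice, then plateaus.
losses = [1.0, 0.8, 0.9, 0.95, 0.93, 0.91, 0.92]
best, ran = train_with_early_stopping(lambda e: None, lambda e: losses[e],
                                      max_epochs=len(losses), patience=3)
print(best, ran)  # 0.8 5
```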
          <p>The output of the method is a trained cascade of classifiers that can identify the following
pathologies:
1. Abnormal right ventricle (ARV).
2. Hypertrophic cardiomyopathy (HCM).
3. Previous myocardial infarction (MINF).
4. Dilated cardiomyopathy (DCM).
5. Normal state (NOR).</p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Dataset</title>
        <p>
          The Automated Cardiac Diagnostic Challenge (ACDC) dataset [
          <xref ref-type="bibr" rid="ref26">27</xref>
          ] was used for both segmentation
and classification tasks in this study. The dataset includes 150 patients split into five groups: healthy,
myocardial infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, and right ventricular
anomaly. Each patient s data includes physical parameters, images, and expert-annotated heart
structure masks. While previous work [
          <xref ref-type="bibr" rid="ref27">28</xref>
          ] filtered the dataset for improved results, this study uses
the original dataset. The pre-formed training and testing sets were used to ensure comparability with
other studies.
        </p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Evaluation criteria</title>
        <p>Experiments were conducted to evaluate each stage of the method, with models trained using
consistent epochs, architecture, and data. The results were averaged over 10 training and testing
cycles to ensure objectivity. Segmentation quality was measured using the Dice coefficient, which
compares the overlap between predicted and expert masks. The Dice coefficient formula is as follows:
D = 2 × |A ∩ B| / (|A| + |B|),   (2)</p>
        <p>where A is the set of pixels of the predicted segmentation, B is the set of pixels of the true segmentation,
|A| and |B| denote the number of elements in sets A and B, and |A ∩ B| is the number of overlapping elements
of A and B; a value of 0 in formula (2) indicates no overlap, and 1 indicates perfect alignment between the masks.</p>
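        <p>Formula (2) can be implemented directly for binary numpy masks; this is a sketch, and the convention that two empty masks score 1.0 is our choice:</p>
```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks, as in formula (2)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # two empty masks align perfectly by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 2*2/(3+3) = 0.667
```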
        <p>For classification accuracy, the average is calculated by considering each classifier s accuracy at
every step and taking the arithmetic mean of all class accuracies to get the overall model accuracy.
This approach ensures a fair comparison with other methods. The following formalizations are used
for these calculations:
A_NOR,ARV = A_c1 × A_c2,   (3)
A_HCM = A_c1 × A_c3,   (4)</p>
        <p>A_MINF,DCM = A_c1 × A_c3 × A_c4,   (5)
A = (A_c1 + A_c2 + A_c3 + A_c4) / 4,   (6)</p>
        <p>where A_c1–A_c4 denote the accuracy of each of the four classifiers, A_NOR,ARV, A_HCM, and A_MINF,DCM
denote the classification accuracy of each class group, and A is the overall accuracy of the method.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results and discussion</title>
      <sec id="sec-4-1">
        <title>4.1. Results for method of segmentation</title>
        <p>The experimental results obtained to determine the accuracy of the localization, decomposition,
and postprocessing stages are shown in Table 1.</p>
        <p>Moreover, the results obtained are compared with other methods (Table 2).</p>
        <p>Segmentation of original images. In the first stage of the experiments, a model was trained to
segment full MRI scans without any prior localization or decomposition. The model was trained to
detect the contours of the myocardium, as well as the LV and RV, across the entire image. The results
of this experiment are shown in Figure 10.</p>
        <p>Localization and segmentation of original images. The second stage of the experiments involved
localization and segmentation of the original MRI scans. First, models were used to determine the
heart area location (with myocardium, RV, and LV). After that, the localized area was passed to the
input of the DL model for detailed segmentation. An example of the result of the described
experiment is shown in Figure 11.</p>
        <p>Localization and segmentation of decomposed images. The fourth stage of the experiments involved
localization and segmentation of the decomposed images. First, for each of the binary masks
(myocardium, LV, and RV), localization models were used to define the regions of these structures.
The localized regions were then passed to DL models for detailed segmentation. This approach
allowed us to assess the impact of preliminary localization and decomposition on segmentation
accuracy. An example of the result of the described experiment is shown in Figure 13.</p>
        <p>Localization and segmentation of decomposed images with postprocessing (proposed approach). At
the fifth and final stage of the experiments, the decomposed images were localized and segmented,
followed by postprocessing. After completing localization and segmentation for each of the binary
masks, the results were processed using postprocessing to smooth transitions and reduce artifacts.
The masks were returned to their original size using blurring techniques to ensure a correct
comparison with the expert masks. The results are shown in Figure 14.</p>
        <p>Therefore, the experiments have demonstrated enhanced accuracy of the proposed method, which
includes localization, decomposition, and postprocessing of images. This approach provides high
accuracy of segmentation of heart structures in MRI scans, which is critical for further clinical
analysis and diagnosis.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Results for method of classification</title>
        <p>The proposed classification method was evaluated using several metrics, including precision, recall,
F1-score, and overall accuracy. For each of the four classification steps, metrics (2) (6) were used to
assess the detection and separation of various heart pathologies. Figure 15 presents the confusion
matrix for each classification step, demonstrating the rate of correct, false positive, and false negative
classifications (one panel per classifier).</p>
        <p>The first step showed a high accuracy of 0.96 in separating LV pathologies from other cases, while
the second step achieved a perfect accuracy of 1.0 for distinguishing between the normal state and
RV abnormalities. The third step also achieved a perfect accuracy of 1.0 in classifying hypertrophic
cardiomyopathy from other LV pathologies. Finally, the fourth step, which differentiates between
previous myocardial infarction and dilated cardiomyopathy, showed an accuracy of 0.90.</p>
        <p>Figure 16 presents the Receiver Operating Characteristic (ROC) curves for each of the four
classification steps, illustrating the relationship between the true positive rate (sensitivity) and the
false positive rate (1 − specificity).</p>
        <p>The results obtained indicate that the proposed multi-stage segmentation and cascade
classification approach delivers competitive performance in cardiac MRI analysis. The AUC values
for the classification steps are consistently high, with Classifiers 1, 2, and 3 achieving near-perfect
results. Classifier 4, while slightly lower with an AUC of 0.91, still demonstrates adequate
performance, though there may be room for further refinement. Overall, the cascade handles
various heart conditions with minimal misclassification.</p>
        <p>A comparison of the overall accuracy of this method with the results from other authors' work is
presented in Table 4.</p>
        <p>
          Comparative analysis (Table 4) shows that our method achieves an overall classification accuracy
of 0.972, positioning it closely with other state-of-the-art techniques. Although slightly lower than
the highest reported accuracy of 0.998 by Mahendra et al. [
          <xref ref-type="bibr" rid="ref22">23</xref>
          ], our approach maintains a strong
balance between accuracy and practical applicability, achieving improvements over several other
benchmarks, including Zheng et al. [
          <xref ref-type="bibr" rid="ref21">22</xref>
          ] and Ammar et al. [
          <xref ref-type="bibr" rid="ref20">21</xref>
          ]. These results suggest that the
proposed methods are robust and reliable, making them suitable for clinical applications.
        </p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Limitations of the proposed methods</title>
        <p>While the proposed methods for myocardium segmentation in LV and RV show promise, there are
some inherent limitations that need to be addressed. First, the model's performance can degrade
significantly when processing low-quality MRI images. This is particularly noticeable when parts of
the myocardium or ventricles are not fully visible, leading the model to either generate incorrect
segmentations or miss the regions altogether. The model relies on detecting differences between the
target structures and surrounding tissues, so poor visualization can severely affect its accuracy</p>
        <p>Another challenge arises when the brightness levels in the images are either too low or too high.
In such cases, the model may struggle to identify the boundaries of the heart structures correctly,
resulting in poorly defined segmentations. Furthermore, the model's training data may lack sufficient
examples of certain pathological conditions, such as cardiomyopathy or spongy myocardium. This
scarcity of cases can reduce the model's ability to generalize to these complex conditions, affecting
its reliability in clinical settings.</p>
        <p>Therefore, while the approach is robust under ideal conditions, its accuracy depends largely on
the quality of the input data. Special care is needed when working with low-quality images or
uncommon pathologies, as these can lead to decreased accuracy and make the model less reliable in
critical diagnostic scenarios.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>This study presented a novel approach to cardiac MRI segmentation and classification that significantly
improves accuracy through a multi-stage process combining U-Net and ResNet models to enhance the
segmentation of heart structures. Gaussian smoothing is applied to refine the contours and minimize
artifacts. The classification process leverages a cascade of DL classifiers to distinguish between heart
conditions such as hypertrophic cardiomyopathy, myocardial infarction, and dilated
cardiomyopathy.</p>
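      <p>The contour-refinement step can be illustrated with a minimal sketch: blur the binary segmentation mask with a separable Gaussian and re-threshold it, which suppresses isolated noise pixels and jagged edges. This is a NumPy approximation under assumed parameters (sigma, threshold), not the authors' exact implementation.</p>

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def smooth_mask(mask, sigma=1.0, threshold=0.5):
    """Gaussian-blur a binary mask (separable filter) and re-threshold."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    padded = np.pad(mask.astype(float), radius, mode="edge")
    # Convolve rows, then columns -- equivalent to a 2-D Gaussian blur.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, blurred)
    return (blurred >= threshold).astype(np.uint8)

# An isolated speck is removed while the main structure survives.
mask = np.zeros((9, 9), dtype=np.uint8)
mask[3:6, 3:6] = 1   # a solid 3x3 "structure"
mask[7, 7] = 1       # isolated segmentation noise
smoothed = smooth_mask(mask, sigma=1.0)
```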
      <p>The performance of the methods was evaluated using the Dice coefficient for segmentation
accuracy and several classification metrics. The proposed approach demonstrated significant
improvements in segmentation accuracy, achieving a Dice coefficient of 0.974 for the LV and 0.947
for the RV. Classification of heart conditions also yielded strong results: an accuracy of 96%
for LV pathologies, 100% for hypertrophic cardiomyopathy, and 90% for differentiating myocardial
infarction from dilated cardiomyopathy. Despite these promising results, the method has limitations,
particularly when processing low-quality images or dealing with complex pathologies, where
segmentation accuracy may decrease.</p>
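      <p>For reference, the Dice coefficient reported above can be computed from a pair of binary masks as follows; this is a generic NumPy sketch with illustrative array names, not the authors' evaluation code.</p>

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice = 2 * |intersection| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Toy example: 4 predicted pixels, 6 ground-truth pixels, 4 overlapping.
pred = np.zeros((4, 4), dtype=np.uint8)
pred[:2, :2] = 1
target = np.zeros((4, 4), dtype=np.uint8)
target[:2, :3] = 1
score = dice_coefficient(pred, target)  # 2*4 / (4 + 6) = 0.8
```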
      <p>Future work will focus on developing new techniques for interpreting the results, aiming to make
the method more applicable and reliable in clinical settings.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
        <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] World Health Organization, Assessing national capacity for the prevention and control of noncommunicable diseases: report of the 2021 global survey, Geneva: World Health Organization, 2023. Licence: CC BY-NC-SA 3.0 IGO.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] Q. Counseller, Y. Aboelkassem, Recent technologies in cardiac imaging, Front. Med. Technol. 4 (2023) 984492. doi: 10.3389/fmedt.2022.984492.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] A. Seraphim, K. D. Knott, J. Augusto, A. N. Bhuva, C. Manisty, J. C. Moon, Quantitative cardiac MRI, J. Magn. Reson. Imaging 51.3 (2020) 693–711. doi: 10.1002/jmri.26789.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] C. M. Kramer, J. Barkhausen, C. Bucciarelli-Ducci, S. D. Flamm, R. J. Kim, E. Nagel, Standardized cardiovascular magnetic resonance imaging (CMR) protocols: 2020 update, J. Cardiovasc. Magn. Reson. 22.1 (2020) 17. doi: 10.1186/s12968-020-00607-1.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] A. Boutet, T. Rashid, I. Hancu, G. J. B. Elias, R. M. Gramer, J. Germann, M. Dimarzio, B. Li, V. Paramanandam, S. Prasad, et al., Functional MRI safety and artifacts during deep brain stimulation: Experience in 102 patients, Radiology 293.1 (2019) 174–183. doi: 10.1148/radiol.2019190546.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] P. Radiuk, O. Barmak, E. Manziuk, I. Krak, Explainable deep learning: A visual analytics approach with transition matrices, Mathematics 12.7 (2024) 1024. doi: 10.3390/math12071024.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] S. Hussain, I. Mubeen, N. Ullah, S. S. U. D. Shah, B. A. Khan, M. Zahoor, R. Ullah, F. A. Khan, M. A. Sultan, Modern diagnostic imaging technique applications and risk factors in the medical field: A review, BioMed Res. Int. 2022.1 (2022) 5164970. doi: 10.1155/2022/5164970.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: Lecture Notes in Computer Science, vol. 9351, Springer International Publishing, Cham, 2015, pp. 234–241. doi: 10.1007/978-3-319-24574-4_28.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, New York, NY, USA, 2016, pp. 770–778. doi: 10.1109/cvpr.2016.90.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] J. El-Taraboulsi, C. P. Cabrera, C. Roney, N. Aung, Deep neural network architectures for cardiac image segmentation, Artif. Intell. Life Sci. 4 (2023) 100083. doi: 10.1016/j.ailsci.2023.100083.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] P. Radiuk, O. Kovalchuk, V. Slobodzian, E. Manziuk, O. Barmak, I. Krak, Human-in-the-loop approach based on MRI and ECG for healthcare diagnosis, in: Proceedings of the 5th International Conference on Informatics &amp; Data-Driven Medicine, CEUR-WS.org, Aachen, 2022, pp. 9–20.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] B. Lambert, F. Forbes, S. Doyle, H. Dehaene, M. Dojat, Trustworthy clinical AI solutions: A unified review of uncertainty quantification in deep learning models for medical image analysis, Artif. Intell. Med. 150 (2024) 102830. doi: 10.1016/j.artmed.2024.102830.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13] R. Azad, E. K. Aghdam, A. Rauland, Y. Jia, A. H. Avval, A. Bozorgpour, S. Karimijafarbigloo, J. P. Cohen, E. Adeli, D. Merhof, Medical image segmentation review: The success of U-Net, IEEE Trans. Pattern Anal. Mach. Intell. 46.12 (2024) 10076–10095. doi: 10.1109/tpami.2024.3435571.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] M. Jafari, A. Shoeibi, M. Khodatars, N. Ghassemi, P. Moridian, R. Alizadehsani, A. Khosravi, S. H. Ling, N. Delfan, Y.-D. Zhang, et al., Automated diagnosis of cardiovascular diseases from cardiac magnetic resonance imaging using deep learning models: A review, Comput. Biol. Med. 160 (2023) 106998. doi: 10.1016/j.compbiomed.2023.106998.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15] S. Pandey, K.-F. Chen, E. B. Dam, Comprehensive multimodal segmentation in medical imaging: Combining YOLOv8 with SAM and HQ-SAM models, in: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), IEEE, New York, NY, USA, 2023, pp. 2584–2590. doi: 10.1109/iccvw60793.2023.00273.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] H. Hu, N. Pan, A. Frangi, Fully automatic initialization and segmentation of left and right ventricles for large-scale cardiac MRI using a deeply supervised network and 3D-ASM, SSRN Electron. J. 240 (2023) 107679. doi: 10.2139/ssrn.4341036.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17] I. F. S. da Silva, A. C. Silva, A. C. de Paiva, M. Gattass, A cascade approach for automatic segmentation of cardiac structures in short-axis cine-MR images using deep neural networks, Expert Syst. With Appl. 197 (2022) 116704. doi: 10.1016/j.eswa.2022.116704.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, et al., Attention U-Net: Learning where to look for the pancreas, Preprint, arXiv, 2018. doi: 10.48550/arXiv.1804.03999.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19] D. Jha, P. H. Smedsrud, M. A. Riegler, D. Johansen, T. D. Lange, P. Halvorsen, H. D. Johansen, ResUNet++: An advanced architecture for medical image segmentation, in: 2019 IEEE International Symposium on Multimedia (ISM), IEEE, New York, NY, USA, 2019, pp. 225–230. doi: 10.1109/ism46123.2019.00049.
          [20] … failures in cardiac MRI, Sci. Rep. 10.1 (2020) 21769. doi: 10.1038/s41598-020-77733-4.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [21] A. Ammar, O. Bouattane, M. Youssfi, Automatic cardiac cine MRI segmentation and heart disease classification, Comput. Med. Imaging Graph. 88 (2021) 101864. doi: 10.1016/j.compmedimag.2021.101864.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [22] Q. Zheng, H. Delingette, N. Ayache, Explainable cardiac pathology classification on cine MRI with motion characterization by semi-supervised learning of apparent flow, Med. Image Anal. 56 (2019) 80–95. doi: 10.1016/j.media.2019.06.001.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [23] H. Zhang, W. Zhang, W. Shen, N. Li, Y. Chen, S. Li, B. Chen, S. Guo, Y. Wang, Automatic segmentation of the cardiac MR images based on nested fully convolutional dense network with dilated convolution, Biomed. Signal Process. Control 68 (2021) 102684. doi: 10.1016/j.bspc.2021.102684.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [24] R. Tkachenko, I. Izonin, I. Dronyuk, M. Logoyda, P. Tkachenko, Recovery of missing sensor data with GRNN-based cascade scheme, Int. J. Sens. Wirel. Commun. Control 11.5 (2021) 531–541. doi: 10.2174/2210327910999200813151904.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>J.</given-names>
            <surname>Howard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gugger</surname>
          </string-name>
          ,
          <article-title>Fastai: A layered API for deep learning</article-title>
          ,
          <source>Information</source>
          <volume>11</volume>
          .2
          (
          <year>2020</year>
          )
          <fpage>108</fpage>
          . doi: 10.3390/info11020108.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>P.</given-names>
            <surname>Radiuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Barmak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Krak</surname>
          </string-name>
          ,
          <article-title>An approach to early diagnosis of pneumonia on individual radiographs based on the CNN information technology</article-title>
          ,
          <source>Open Bioinform. J.</source>
          <volume>14</volume>
          .1
          (
          <year>2021</year>
          )
          <fpage>93</fpage>
          <lpage>107</lpage>
          . doi: 10.2174/1875036202114010093.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>O.</given-names>
            <surname>Bernard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lalande</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cervenansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.-A.</given-names>
            <surname>Heng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Cetin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lekadir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Camara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Gonzalez Ballester</surname>
          </string-name>
          , et al.,
          <article-title>Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved?</article-title>
          ,
          <source>IEEE Trans. Med. Imaging</source>
          <volume>37</volume>
          .11 (
          <year>2018</year>
          )
          <fpage>2514</fpage>
          <lpage>2525</lpage>
          . doi: 10.1109/tmi.2018.2837502.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>V.</given-names>
            <surname>Slobodzian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Radiuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zingailo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Barmak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Krak</surname>
          </string-name>
          ,
          <article-title>Myocardium segmentation using two-step deep learning with smoothed masks by Gaussian blur</article-title>
          ,
          <source>in: Proceedings of the 6th International Conference on Informatics &amp; Data-Driven Medicine, CEUR-WS.org, Aachen</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>77</fpage>
          <lpage>91</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>