<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Computed Tomography Images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Talshyn Sarsembayeva</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Adai Shomanov</string-name>
          <email>Adai.shomanov@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Magzhan Sarsembayev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Madina Mansurova</string-name>
          <email>mansurova.madina@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ainur Zhumasheva</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aigerim Zhunussova</string-name>
          <email>aigerim.zhunusova12@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gassyrbek Rakhimzhanov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Al-Farabi Kazakh National University</institution>
          ,
          <addr-line>Al-Farabi Ave. 71, Almaty, 050038</addr-line>
          ,
          <country country="KZ">Kazakhstan</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The purpose of the research described in this article is to use machine learning algorithms to automatically detect symptoms of chronic obstructive pulmonary disease (COPD). Machine learning methods and a neural network were applied to COPD detection. The issues addressed in this work are: prevention of symptoms of chronic obstructive pulmonary disease; achieving quick and accurate results through machine learning and neural networks; and use of effective machine learning techniques on computed tomography images. In order to identify structural data for lung function testing and their important role in the diagnosis and treatment of COPD, machine learning classifiers were trained and tested. Data trained with the U-Net architecture, which easily and quickly identifies symptoms of the disease from tomographic images, proved the most accurate of the classifiers studied.</p>
      </abstract>
      <kwd-group>
        <kwd>segmentation</kwd>
        <kwd>neural network</kwd>
        <kwd>COPD</kwd>
        <kwd>computed tomography images</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        As more studies are devoted to findings related to its spread, it can be argued that since
the pandemic of the novel coronavirus disease, scientists have been concentrating on lung ailments more and more
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Pulmonary morbidity is an almost constant concern: malignant nodules in the lungs frequently
progress into lung cancer, a major threat to human health that can even be fatal. A small percentage of
pulmonary nodules, or their false features, may occasionally go undetected by doctors while they
are performing an examination. As a result, as artificial intelligence develops, intelligent algorithms
are employed to support and direct the doctor's attention toward making a precise diagnosis. For the
purpose of extracting a portion of the lung parenchyma, the authors of the article [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] presented a
multithreshold approach. From the perspective of application, a model for segmenting the lung parenchyma
was constructed in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. To date, few publications have described deep neural
networks constructed for lung parenchyma segmentation [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. It is a simple and common procedure to
segment the parenchyma using morphological models [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. U-Net was used to optimize the extraction
of lung parenchyma in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The authors of [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] created a CNN network to segment the lung parenchyma.
Unsupervised clustering models are used to recognize lung nodules and nodule groups [8], active contour
models are presented to segment lung nodules, and fuzzy clustering is constructed in [9,10].
The authors of [11] employed a deep-learned prior based graph cut to extract pulmonary
nodules, [12] created a Mask R-CNN model, and [13] presents an interactive deep-network method for extracting
lung nodules. An adaptive morphology-aided two-pathway convolutional neural network was
constructed for the segmentation of pulmonary nodules in [14].
      </p>
      <p>Proceedings of the 7th International Conference on Digital Technologies in Education, Science and
Industry (DTESI 2022), October 20–21, 2022. Copyright for this paper by its authors.</p>
      <p>Computer-aided diagnostic
techniques have been shown to be a helpful tool for doctors to use when making an accurate diagnosis.
However, it is still challenging to examine faint pulmonary nodules and the complicated and varied
characteristics of pulmonary nodules [15]. Specifically: (1) the computer-generated model does not
correspond to the method of diagnosis used by doctors; (2) the case in which the lungs contain nodules.</p>
      <p>The aim is to easily and quickly identify symptoms of chronic obstructive pulmonary disease based
on computed tomography images. Chronic obstructive pulmonary disease (COPD) is diagnosed based on
smoking history, consumption of other harmful substances, the presence of respiratory symptoms, and chronic
airflow limitation confirmed by spirometry after bronchodilator administration. It is one of the leading causes of
morbidity and mortality worldwide. It is characterized by airway obstruction, shortness of breath, and
decreased exercise tolerance.</p>
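      <p>As a minimal sketch (our own, not from the study) of the post-bronchodilator spirometry criterion just described, the GOLD-style thresholds can be expressed as a small function; the function name and return strings are illustrative:</p>
      <preformat>
```python
def copd_airflow_limitation(fev1_over_fvc, fev1_percent_predicted):
    # Illustrative sketch of the post-bronchodilator spirometry criterion:
    # a fixed FEV1/FVC cut-off confirms chronic airflow limitation, and
    # FEV1 as a percentage of the predicted value grades its severity.
    # Thresholds follow the GOLD convention; names are our own.
    if fev1_over_fvc >= 0.70:
        return "no airflow limitation"
    # Airflow limitation confirmed; grade by FEV1 % of predicted value.
    if fev1_percent_predicted >= 80:
        return "GOLD 1 (mild)"
    if fev1_percent_predicted >= 50:
        return "GOLD 2 (moderate)"
    if fev1_percent_predicted >= 30:
        return "GOLD 3 (severe)"
    return "GOLD 4 (very severe)"

print(copd_airflow_limitation(0.62, 65))  # prints "GOLD 2 (moderate)"
```
      </preformat>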
      <p>In primary health care, the initial stage in the clinical diagnosis of COPD is the assessment of
respiratory complaints: cough, sputum production, and dyspnea. Given the irreversibility of airflow
limitation in COPD, these symptoms have a chronic course, that is, they persist for more than 12 weeks
over the course of a year or more [16]. The importance of studying the prevalence of respiratory symptoms is
emphasized in a number of international studies, which show that in the adult population they can occur
in 41–48% of cases [17, 18]. It should be taken into account that most of these studies were aimed at
studying the prevalence of respiratory symptoms among smokers, people working or living in
conditions of dust pollution, and the elderly [19, 20]. However, as our studies have shown, in real
practice among patients seeking medical care from family doctors, a high prevalence of chronic
respiratory complaints was revealed, amounting to 58.9% [19]. However, the detection of symptoms
alone is not enough to diagnose COPD due to their low prognostic value [22]. Solving this problem
can be facilitated by the use of internationally standardized questionnaires, which are recommended for
use in primary health care [23, 24]. Developing a diagnostic tool based on questionnaires can help in
identifying a group of patients with a high risk of developing COPD, which is an urgent task for
outpatient practice.</p>
      <p>Sudden spontaneous weight loss and a low body mass index put COPD patients at increased risk of
death. Early and accurate detection of changes in body composition, followed by timely and appropriate
treatment such as improved nutrition and pulmonary rehabilitation, is therefore essential. According to the results of the
EPISCAN II study in Spain, its prevalence among the population over 40 years old is 11.8% (14.6% in
men and 9.4% in women) [25].</p>
      <p>Symptoms of COPD can be diagnosed in several ways. Computed tomography imaging is currently
the standard measure of body composition; however, it is relatively
expensive, may be available only in limited settings, is time-consuming, and involves exposure to
ionizing radiation [26]. Given the ease of obtaining measurements from computed tomography images,
body composition studies in COPD patients are performed using these images.</p>
      <p>X-ray examination of patients with COPD can be conditionally divided into two stages. The first of
which is aimed at the primary assessment of organs of the thoracic cavity and usually involves the use
of conventional X-ray examination: radiography or fluorography. Almost all patients with COPD undergo
one of these studies at the stage of primary diagnosis or during an exacerbation of the disease. The second stage
is an in-depth study of the morphology and function of the lung tissue and is aimed primarily at
identifying emphysema and bronchiectasis, determination of the type and prevalence of pathological
changes. The main technology in these cases is X-ray computed tomography (CT). Other imaging
modalities, such as ultrasound, radionuclide imaging, and magnetic resonance imaging, are of limited
value in the evaluation of COPD.</p>
      <p>Neural networks [27] show recognition accuracy better than or comparable to humans in many
recognition tasks, including road sign recognition, face recognition, and number recognition. Modern
materials research using X-ray microtomography and 3D image analysis has long been limited in
accuracy for dense fibrous materials. However, recent machine learning methods, and especially deep
learning, are helping to overcome this challenge [28].</p>
      <p>To obtain morphometric measurements of fiber bundles and to accurately estimate their density, a
first segmentation of sufficient quality must be achieved. Among other applications, the
proposed method thus allows the design of more realistic models of MDF material. Impedance pneumography is
recommended as an outpatient method of monitoring respiratory diseases. However, its ambulatory
nature makes recordings more susceptible to noise sources. Identifying and removing such noisy
segments is critical because they can greatly impact the performance of data-driven decision support
tools. In general, machine learning algorithms are used to separate noisy bioimpedance signals from
clean ones when detecting chronic obstructive pulmonary disease symptoms. There are different
approaches to compare: a heuristic algorithm, a feature-based classification model (SVM), and a
convolutional neural network (CNN).</p>
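      <p>These approaches share one pipeline: extract a feature from each segment, then flag noisy segments either by a hand-set threshold or by a rule learned from labelled examples. The sketch below is our own construction on simulated signals; a simple nearest-centroid rule stands in for the SVM and CNN classifiers:</p>
      <preformat>
```python
import numpy as np

rng = np.random.default_rng(0)

def make_segment(noisy):
    # Synthetic bioimpedance-like segment: a slow breathing oscillation plus
    # strong motion-artifact noise when `noisy` is True. Illustrative data
    # only, not real recordings.
    t = np.linspace(0, 10, 500)
    scale = 1.0 if noisy else 0.05
    return np.sin(2 * np.pi * 0.3 * t) + rng.normal(0, scale, t.size)

def feature(seg):
    # One simple feature: mean power of the first difference,
    # which is much larger for artifact-corrupted segments.
    return np.mean(np.diff(seg) ** 2)

# Heuristic approach: a fixed, hand-chosen threshold on the feature.
def heuristic_is_noisy(seg, thresh=0.05):
    return bool(feature(seg) > thresh)

# Minimal data-driven stand-in for the SVM/CNN classifiers: a
# nearest-centroid rule learned from labelled training segments.
clean_centroid = np.mean([feature(make_segment(False)) for _ in range(20)])
noisy_centroid = np.mean([feature(make_segment(True)) for _ in range(20)])

def learned_is_noisy(seg):
    f = feature(seg)
    return bool(abs(f - clean_centroid) > abs(f - noisy_centroid))
```
      </preformat>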
      <p>The use of different datasets to assess newly proposed models is a significant difficulty in the
field of lung/lesion segmentation (and image segmentation in general). Additionally, there are no
baseline reference models that can be used as a standard against which to compare the effectiveness of
proposed models. Published benchmarks do exist for delineating lung problems visible
on CT scans: for example, [29] benchmarks samples of 20 lung segmentation models on
patients with COVID-19, while [30] compares trial algorithms developed by several research teams
and offers additional comparative studies on problems such as lung nodule segmentation. It is worth noting
that deep learning approaches to image segmentation have also been successfully
applied to non-medical images, such as coral reef images [31], where the authors test four models, and
very high resolution images of cities [32], where the authors test 12 different models.</p>
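      <p>Benchmarks of this kind typically score models with an overlap metric. The following is our own minimal sketch of the Dice coefficient, the figure of merit commonly reported in segmentation studies:</p>
      <preformat>
```python
import numpy as np

def dice_coefficient(pred, truth):
    # Dice overlap between predicted and ground-truth binary masks:
    # twice the intersection divided by the sum of the mask sizes.
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * intersection / total

print(dice_coefficient([[1, 1], [0, 0]], [[1, 0], [0, 0]]))  # about 0.667
```
      </preformat>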
      <p>As an example, consider a data set of 47 patients with chronic obstructive pulmonary disease and
limited breathing, whose breathing was recorded during the experiment using a bioimpedance device and
a spirometer. It can be observed that the accuracy of both machine learning approaches (SVM: 87.77 ±
2.64% and CNN: 87.20 ± 2.78%) is significantly higher compared to the heuristic approach (84.69 ±
2.32%). Moreover, no significant differences were observed between the two machine learning
approaches [33, 34].</p>
      <p>A corresponding value of 92.51 ± 1.74% was obtained using the neural network model. These results
suggest that a data-driven approach may be useful for the task of detecting artifacts in respiratory chest
bio-impedance signals.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Results of using machine learning algorithms to identify COPD symptoms</title>
      <p>U-Net was originally invented and first used for biomedical image segmentation. Its architecture can
be broadly considered as an encoder network connected to a decoder network. Unlike classification,
where the end result of a deep network is the only thing that matters, semantic segmentation requires
not only discrimination at the pixel level, but also a mechanism for projecting the discriminative features
learned at different stages of the encoder into pixel space.</p>
      <p>The encoder is the first half of the architecture. It is typically a pre-trained classification
network such as VGG or ResNet, which applies convolutional blocks followed by max-pooling
downsampling to encode the input image into feature representations at several different levels.</p>
      <p>The decoder is the second half of the architecture. Its goal is to semantically project the
discriminative features learned by the encoder (low resolution) onto pixel space (high resolution) to
obtain a dense classification. The decoder consists of upsampling and concatenation, followed by
regular convolution operations.</p>
      <p>U-Net is an architecture for semantic segmentation. It consists of a contracting path and an
expansive path. The contracting path follows the typical architecture of a convolutional network. It
consists of the repeated application of two 3x3 convolutions (unpadded convolutions), each followed by
a rectified linear unit (ReLU), and a 2x2 max pooling operation with stride 2 for downsampling. At each
downsampling step, the number of feature channels is doubled. Each step in the expansive path consists
of upsampling of the feature map, followed by a 2x2 up-convolution that halves the number of feature
channels, concatenation with the correspondingly cropped feature map from the contracting path, and
two 3x3 convolutions, each followed by a ReLU. The cropping is necessary because border pixels are
lost in every convolution. In the final layer, a 1x1 convolution is used to map each 64-component
feature vector to the required number of classes. In total, the network has 23 convolutional layers.</p>
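      <p>The size arithmetic of this unpadded design can be checked with a short trace (our own sketch, assuming the original four-level U-Net; with four levels the layer count is also consistent: 18 3x3 convolutions, four 2x2 up-convolutions, and the final 1x1 convolution give 23):</p>
      <preformat>
```python
def unet_output_size(n, depth=4):
    # Trace the spatial size of a tile through the original unpadded U-Net:
    # each pair of 3x3 convolutions loses 4 pixels, 2x2 max pooling halves
    # the size, and each 2x2 up-convolution doubles it. Skip connections
    # crop the stored encoder maps down to the decoder size.
    sizes = []
    for _ in range(depth):        # contracting path
        n = n - 4                 # two unpadded 3x3 convolutions
        sizes.append(n)           # size remembered for the skip connection
        if n % 2 != 0:
            return None           # 2x2 max pooling needs an even size
        n = n // 2                # max pool, stride 2
    n = n - 4                     # two 3x3 convolutions at the bottleneck
    for _ in range(depth):        # expansive path
        n = n * 2                 # 2x2 up-convolution
        n = n - 4                 # two 3x3 convolutions after concatenation
    return n                      # the final 1x1 convolution keeps this size

print(unet_output_size(572))  # prints 388, the canonical U-Net tile size
```
      </preformat>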
      <p>First announced in 2015, the U-Net architecture revolutionized deep learning for segmentation. The
architecture won the Cell Tracking Challenge at the 2015 International Symposium on Biomedical
Imaging (ISBI) by a wide margin in multiple categories. Applications include segmentation of neuronal
structures in electron microscopy stacks and in transmitted light microscopy images.</p>
      <p>With the U-Net architecture, segmentation of 512x512 images can be computed on modern
GPUs in very little time. Owing to the great success of this architecture, many versions and
modifications have appeared, including LadderNet, Recurrent and Residual Convolutional U-Net
(R2-UNet), and U-Net variants with residual blocks or densely connected blocks.</p>
      <p>Although U-Net was an important breakthrough in deep learning, it is equally important to
understand the earlier methods used to solve similar tasks. One of the main examples is
the sliding-window method, which won the EM segmentation challenge at ISBI 2012 by a large
margin. The sliding-window method was able to generate a large number of sample patches beyond
the original training data set.</p>
      <p>As Figure 1 illustrates, the sliding-window architecture predicts the class label of each pixel
separately by providing a local region (patch) around that pixel as input. Another achievement of this
architecture was that it can easily be applied to any training data set for related tasks.</p>
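      <p>The per-pixel patch extraction behind this scheme can be sketched as follows (illustrative code of our own, not from the cited work):</p>
      <preformat>
```python
import numpy as np

def sliding_window_patches(image, patch=5):
    # Each pixel is classified from the local window centred on it, so an
    # H x W image yields H*W heavily overlapping patches: this is the
    # redundancy that the sliding-window approach pays for.
    r = patch // 2
    padded = np.pad(image, r, mode="reflect")  # handle border pixels
    h, w = image.shape
    patches = []
    for i in range(h):
        for j in range(w):
            patches.append(padded[i:i + patch, j:j + patch])
    return np.stack(patches)

print(sliding_window_patches(np.zeros((8, 8))).shape)  # prints (64, 5, 5)
```
      </preformat>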
      <p>However, the sliding-window approach suffers from two major drawbacks that U-Net
addresses. Since each pixel is treated individually, the resulting patches overlap heavily, so a great
deal of redundant computation is performed. Another limitation is that the overall training procedure
is very slow and requires considerable time and resources. For these reasons, the viability of the
network is questionable.</p>
      <p>U-Net is an elegant architecture that solves most of these problems. It builds on the concept of
fully convolutional networks. The goal of U-Net is to capture both context and precise
localization, which this type of architecture achieves successfully. The main idea of
the implementation is to supplement successive convolutional layers with
upsampling operators in order to achieve high-resolution results on the input images.</p>
      <p>Data visualization (Figure 2). Having collected and preprocessed the data, the next step
is to briefly review the dataset. The dataset is analyzed by displaying each image together with its
corresponding segmented output. This segmented output mask is often referred to as the ground
truth annotation. Using the Pillow library together with IPython's display function, a randomly
selected image is shown.</p>
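      <p>Assuming NumPy arrays for the slice and its mask, this review step might look like the following sketch (function name and data are our own; in a notebook the returned image would be passed to IPython's display):</p>
      <preformat>
```python
import numpy as np
from PIL import Image

def image_mask_pair(image_array, mask_array):
    # Place a grayscale CT slice and its ground-truth mask side by side in
    # a single image, which IPython.display.display() can then render.
    img = Image.fromarray(image_array.astype(np.uint8))
    msk = Image.fromarray(mask_array.astype(np.uint8) * 255)
    pair = Image.new("L", (img.width + msk.width, img.height))
    pair.paste(img, (0, 0))
    pair.paste(msk, (img.width, 0))
    return pair

ct_slice = np.random.default_rng(1).random((64, 64)) * 255  # fake CT slice
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:40, 20:40] = 1                                      # fake annotation
print(image_mask_pair(ct_slice, mask).size)  # prints (128, 64)
```
      </preformat>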
      <p>The most effective method was selected by comparing the accuracy obtained with several classifiers:
Gaussian Naive Bayes: 0.79;
K-NN classifier: 0.94;
Decision Tree classifier: 0.912;
U-Net: 0.98.</p>
      <p>Data trained using the U-Net architecture showed high accuracy.</p>
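      <p>The three classical classifiers can be compared with scikit-learn; the sketch below uses synthetic features of our own, so its accuracies illustrate the procedure rather than reproducing the reported results:</p>
      <preformat>
```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class feature matrix standing in for descriptors extracted
# from the CT images; the class means are well separated, so every
# classifier performs well on this illustrative data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 5)), rng.normal(2.0, 1.0, (200, 5))])
y = np.array([0] * 200 + [1] * 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = [
    ("Gaussian Naive Bayes", GaussianNB()),
    ("K-NN classifier", KNeighborsClassifier()),
    ("Decision Tree classifier", DecisionTreeClassifier(random_state=0)),
]
for name, clf in classifiers:
    accuracy = clf.fit(X_train, y_train).score(X_test, y_test)
    print(name, round(accuracy, 3))
```
      </preformat>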
      <p>In the segmentation output, green areas and red areas indicate the presence of symptoms of chronic obstructive
pulmonary disease. Green areas indicate the initial stages of disease symptoms, that is, the onset, and
red areas indicate the pronounced presence of the disease.</p>
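      <p>A colour overlay of this kind can be produced from a predicted probability map; the following sketch (thresholds and names are our own) tints moderate-probability pixels green and high-probability pixels red:</p>
      <preformat>
```python
import numpy as np

def color_overlay(ct_slice, prob_map, lo=0.3, hi=0.7):
    # Pixels with a moderate predicted probability are tinted green
    # (early-stage symptoms) and pixels with a high probability are
    # tinted red (pronounced disease). lo/hi are illustrative choices.
    rgb = np.stack([ct_slice] * 3, axis=-1).astype(np.float64)
    green = np.logical_and(prob_map >= lo, hi > prob_map)
    red = prob_map >= hi
    rgb[green] = [0.0, 255.0, 0.0]
    rgb[red] = [255.0, 0.0, 0.0]
    return rgb.astype(np.uint8)

probs = np.array([[0.1, 0.5], [0.8, 0.0]])
overlay = color_overlay(np.zeros((2, 2)), probs)
print(overlay[0, 1].tolist(), overlay[1, 0].tolist())  # prints [0, 255, 0] [255, 0, 0]
```
      </preformat>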
    </sec>
    <sec id="sec-3">
      <title>3. Conclusion</title>
      <p>A U-Net model of segmentation for lung function assessment was designed to aid in the clinical
application of machine learning classifiers, taking into account structural data for pulmonary function
testing and their significant importance in the diagnosis and management of COPD. The best
approaches for identifying the symptoms of chronic obstructive pulmonary disease were considered
and compiled, and results were obtained using machine learning classifiers. It
has been demonstrated that data trained using the U-Net architecture is more accurate than data trained
using other techniques.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Acknowledgements</title>
      <p>This work was funded by the Committee of Science of the Republic of Kazakhstan, grant AP09260767,
"Development of intellectual information-analytical system for assessing the health of students in
Kazakhstan" (2021-2023).</p>
    </sec>
    <sec id="sec-5">
      <title>5. References</title>
      <p>[8] S. Sivakumar and C. Chandrasekar, Lung nodule segmentation through unsupervised clustering
models, Procedia engineering 38 (2012) 3064–3073.
[9] Y. Qiang, X. Zhang, G. Ji, and J. Zhao, Automated lung nodule segmentation using an active
contour model based on PET/CT images, Journal of Computational and Theoretical Nanoscience
12.8 (2015) 1972–1976.
[10] E. Nithila and S. Kumar, Segmentation of lung nodule in CT data using active contour model and</p>
      <p>Fuzzy C-mean clustering, Alexandria Engineering Journal 55.3 (2016) 2583–2588.
[11] S. Mukherjee, X. Huang, and R. Bhagalia, Lung nodule segmentation using deep learned prior
based graph cut, in: Proceedings of the 2017 IEEE 14th international symposium on biomedical
imaging, ISBI 2017, Melbourne, VIC, Australia, 2017, pp. 1205–1208.
[12] M. Liu, J. Dong, X. Dong, H. Yu, and L. Qi, Segmentation of lung nodule in CT images based on
mask R-CNN, in: Proceedings of the 2018 9th International Conference on Awareness Science and
Technology, iCAST, Fukuoka, Japan, 2018, pp. 1–6.
[13] G. Aresta, C. Jacobs, T. Araújo et al., iW-Net: an automatic and minimalistic interactive lung
nodule segmentation deep network, Scientific Reports 9.1 (2019) 1–9.
[14] A. Halder, S. Chatterjee, and D. Dey, Adaptive morphology aided 2-pathway convolutional neural
network for lung nodule classification, Biomedical Signal Processing and Control 72 (2022)
103347.
[15] N. Zhang, J. Lin, B. Hui, B. Qiao, W. Yang, R. Shang, X. Wang, J. Lei, Lung Nodule Segmentation
and Recognition Algorithm Based on Multiposition U-Net, Computational and Mathematical
Methods in Medicine (2022). doi: 10.1155/2022/5112867.
[16] The Global Strategy for the Diagnosis, Management and Prevention of Chronic obstructive
pulmonary disease. Global Initiative for Chronic Obstructive Lung Disease (GOLD), 2018 URL:
http://www.goldcopd.org/ (Access Date 27.04.2018).
[17] V. Sobradillo, M. Miravitlles, C. A. Jimenez, et al., Epidemiological study of chronic obstructive
pulmonary disease in Spain (IBERPOC): prevalence of chronic respiratory symptoms and airflow
limitation, Arch Bronconeumol 35 (1999) 159-166. doi: 10.1183/09031936.01.17509820.
[18] B. Lundback, L. Nystrom, L. Rosenhall et al., Obstructive lung disease in northern Sweden:
respiratory symptoms assessed in a postal survey, Eur Respir J. 4 (1991) 257-66.
[19] S. S. Ferreira, L. Rocha, J. Bento, et al., Respiratory symptoms related to occupational exposure
to dust, Eur Respir J 50 (2017) 423. doi: 10.1183/1393003.congress-2017.PA423.
[20] S. Hallit, J. De Blic, C. Marguet et al., Respiratory and allergic symptoms in early life: The ELFE
cohort, Eur Respir J 50 (2017) 4146. doi: 10.1183/1393003.congress-2017.PA4146.
[21] K. V. Ovakimyan, The prevalence of chronic respiratory symptoms at the stage of primary care,</p>
      <p>Russian family doctor 3 (2015) 29–33. doi: 10.17116/profmed201619324-27.
[22] E. Andreeva, A. Lebedev, I. Moiseeva, et al., The Prevalence of Chronic Obstructive Pulmonary
Disease by the Global Lung Initiative Equations in North-Western Russia, Respiration 91.1 (2016)
43-55. doi: 10.1159/000442887.
[23] M L. Levy, M. Fletcher, D. B. Price, et al., International Primary Care Respiratory Group (IPCRG)
Guidelines: diagnosis of respiratory diseases in primary care, Prim Care Respir J. 15/1 (2006)
2034. doi: 10.1016/j.pcrj.2005.10.004.
[24] WHO: Chronic obstructive pulmonary disease (COPD). URL:
http://www.who.int/mediacentre/factsheets/fs315/en/ (Access date: 04/27/2018).
[25] P. N. Cruz Rivera, R. L. Goldstein, M. Polak, A. A. Lazzari, M. L. Moy and E. S. Wan,
Performance of bioelectrical impedance analysis compared to dual X-ray absorptiometry (DXA)
in Veterans with COPD, Scientific Reports (2022) 4-8.
[26] Y. Sun, Y. Chen, X. Wang, and X. Tang, Deep learning face representation by joint
identificationverification, NIPS (2014) 8-9.
[27] J. Moeyersons, J. Morales, N. Seeuws, C. Van Hoof, E. Hermeling, W. Groenendaal, R. Willems,
S. Van Huffel and C. Varon, Artefact Detection in Impedance Pneumography Signals: A Machine
Learning Approach, Sensors 21.8 (2021) 2613.
[28] D. Blanco-Almazán, W. Groenendaal, F. Catthoor, R. Jané, Wearable Bioimpedance Measurement
for Respiratory Monitoring During Inspiratory Loading, IEEE Access 7 (2019) 89487 - 89496.
[29] X. He, S. Wang, S. Shi, X. Chu, J. Tang, X. Liu, C. Yan, J. Zhang, G. Ding, Benchmarking deep
learning models and automated model design for covid19 detection with chest ct scans, medRxiv
(2020).
[30] J. Kalpathy-Cramer, B. Zhao, D. Goldgof, Y. Gu, X. Wang, H. Yang, Y. Tan, R. Gillies, S. Napel,
A comparison of lung nodule segmentation algorithms: methods and results from a
multiinstitutional study, Journal of digital imaging 29.4 (2016) 476–487.
[31] A. King, S. M. Bhandarkar, B. M. Hopkinson, A comparison of deep learning methods for
semantic segmentation of coral reef survey images, in: Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition Workshops, 2018, pp. 1394–1402.
[32] Q. Liu, A.-B. Salberg, R. Jenssen, A comparison of deep learning architectures for semantic
mapping of very high resolution images, in: Proceedings of the IEEE International Geoscience and
Remote Sensing Symposium, IGARSS, IEEE, 2018, pp. 6943–6946.
[33] D. Z. Akhmed-Zaki, T. S. Mukhambetzhanov, Z. M. Nurmakhanova, Z. M. Abdiakhmetova, Using
Wavelet Transform and Machine Learning to Predict Heart Fibrillation Disease on ECG, in:
Proceedings of the 2021 IEEE International Conference on Smart Information Systems and
Technologies, SIST, 2021, 9465990.
[34] D. Blanco-Almazán, W. Groenendaal, F. Catthoor, R. Jané, Chest Movement and Respiratory
Volume both Contribute to Thoracic Bioimpedance during Loaded Breathing, Sci. Rep. 9 (2019)
20232.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Yang</surname>
          </string-name>
          , S. Liu, J. Liu et al.,
          <source>COVID-19: immunopathogenesis and immunotherapeutics</source>
          ,
          <source>Signal Transduction and Targeted Therapy</source>
          <volume>5</volume>
          .1 (
          <issue>2020</issue>
          )
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Karthikeyan</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Valliammai</surname>
          </string-name>
          ,
          <article-title>Lungs segmentation using multi-level thresholding in CT images</article-title>
          ,
          <source>International Journal of Electrical and Computer Engineering</source>
          <volume>1</volume>
          (
          <year>2012</year>
          )
          <fpage>1509</fpage>
          -
          <lpage>1513</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mansoor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Bagci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Xu</surname>
          </string-name>
          et al.,
          <article-title>A generic approach to pathological lung segmentation</article-title>
          ,
          <source>IEEE Transactions on Medical Imaging</source>
          <volume>33</volume>
          .12 (
          <year>2014</year>
          )
          <fpage>2293</fpage>
          -
          <lpage>2310</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Skourt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hassani</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Majda</surname>
          </string-name>
          ,
          <article-title>Lung CT image segmentation using deep neural networks</article-title>
          ,
          <source>Procedia Computer Science</source>
          <volume>127</volume>
          (
          <year>2018</year>
          )
          <fpage>109</fpage>
          -
          <lpage>113</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>X.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Qiang</surname>
          </string-name>
          et al.,
          <article-title>An automated segmentation method for lung parenchyma image sequences based on fractal geometry and convex hull algorithm</article-title>
          ,
          <source>Applied Sciences 8.5</source>
          (
          <year>2018</year>
          )
          <fpage>832</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Lv</surname>
          </string-name>
          and
          <string-name>
            <given-names>X.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Lung parenchyma segmentation based on improved unet network</article-title>
          ,
          <source>Journal of Physics: Conference Series IOP Publishing 1605.1</source>
          (
          <year>2020</year>
          )
          <fpage>012026</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Maity</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mehta</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Prakasam</surname>
          </string-name>
          ,
          <article-title>Automatic lung parenchyma segmentation using a deep convolutional neural network from chest X-rays</article-title>
          ,
          <source>Biomedical Signal Processing and Control</source>
          <volume>73</volume>
          (
          <year>2022</year>
          )
          <fpage>103398</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>