<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>European Respiratory Journal</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Various intelligent approaches for classification using CT-scan images: A Systematic Literature Review</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ritika Mahajan</string-name>
          <email>ritikamahajan2010@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amritpal Singh</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aman Singh</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CT scan</institution>
          ,
          <addr-line>Classification Techniques, COVID-19, RT-PCR</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lovely Professional University</institution>
          ,
          <addr-line>Phagwara, Punjab</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Universidad Europeadel Atlántico</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>56</volume>
      <issue>2</issue>
      <fpage>2606</fpage>
      <lpage>2614</lpage>
      <abstract>
        <p>COVID-19 has caused a devastating effect in every aspect across the world. The pandemic brought life to a standstill. Frontline workers are working day and night to treat patients and save lives. As critical is the timely and quick detection of this communicable disease, it necessitates the need for a diagnostic system that is automatic and as accurate as possible. The number of false negatives and hysteresis must be as low as possible. CT scans of the lungs can help in quicker detection of the presence of the virus as opposed to RT-PCR test. The purpose of this article is to present a survey of current scientific work on CT scan classification techniques, outlining and structuring what is currently available. We conduct a systematic literature review in which we compile and categorize the latest papers from top conferences to present a synopsis of CT scan images data classification techniques and their issues. This review identifies the present state of CT image classification research and decides where further research is needed. A review paper discusses different classification methods for CT scan images, including a comparative study of major classification techniques.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>COVID-19 is a contagious disease caused by SARS-CoV-2, a coronavirus variant that produces
severe acute respiratory syndrome. Coronaviruses belong to a family of RNA viruses. Phylogenetic
analysis suggests that the virus probably originated in bats, transferred to other animals, and then
passed to humans in the Huanan wet market in Wuhan City. Six other such viruses have been
identified in the past, all of which are suspected to have emanated from animals.</p>
      <p>
        Coronavirus infects the nose, upper throat and/or sinuses [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It was first identified in Wuhan,
China, in December 2019 and, before it could be contained, other countries of the world also
identified cases. The virus affects the respiratory tract and causes an infection there. Not all
coronavirus infections are deadly: a person who contracts COVID-19 can have an infection ranging
from asymptomatic to severe. The symptoms resemble those of seasonal viral infections, such as cold,
cough, fever, sore throat and body ache, but may additionally include breathlessness, chills, loss of
smell and/or taste, nausea and diarrhoea.
      </p>
      <p>2022 Copyright for this paper by its authors.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review</title>
      <p>In this paper, a systematic literature review is conducted to discover what intelligent classification
techniques exist in the research and to identify areas where further work is required. The objective of
this review is to classify and analyse, both quantitatively and qualitatively, papers relevant to CT scan
classification techniques, guided by the question: what recent classification techniques exist for CT
scan image classification?
The subsequent section lists some gaps found in the existing research:
• Some research studies used very limited data, whether due to inaccessibility of data, time
constraints or similar factors. Studies conducted on limited data may not provide very reliable
inferences.
• Datasets also lacked demographic features. The spread of the coronavirus began in Wuhan in
China and slowly reached other countries of the world. Throughout the pandemic period, it has
been observed that the nature of the infection and its spread varied from region to region.</p>
      <p>The datasets used in different research papers were not homogeneous: they came from different
parts of the world and hence may also differ in the technological advancement of the medical imaging
tools involved. This lack of uniformity is bound to affect the results of different research. For a
method that detects COVID-19 from X-rays and CT scans to prove fruitful, it must be consistent and
available to all; the existing research, however, was specific to different countries and therefore
involved differing technologies.</p>
      <p>
        Jaiswal et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] used a pretrained DenseNet201-based deep transfer learning model for the
classification of CT scans. The results were compared with those of VGG16, InceptionResNetV2 and
ResNet152V2; their model achieved an accuracy score of 96.25%.
      </p>
      <p>
        Alom et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] deployed deep learning models based on transfer learning: the Inception Recurrent
Residual Convolutional Neural Network (IRRCNN) for COVID-19 detection, and the NABLA-N
model for image segmentation. Images in the dataset were resized to uniform dimensions, after which
a train-test split was performed. COVID-19 detection and image segmentation achieved accuracy
scores of 98.78% and 99.56%, respectively, with the Adam optimizer.
      </p>
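      <p>The preprocessing described above, resizing images to uniform dimensions followed by a train-test split, can be sketched as follows. This is an illustrative NumPy version, not the authors' exact pipeline; the 64×64 target size, nearest-neighbour resizing and 80/20 split are assumptions.</p>

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Resize a 2D image to (out_h, out_w) with nearest-neighbour sampling."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def train_test_split(images, labels, test_frac=0.2, seed=0):
    """Shuffle and split arrays into train and test portions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    n_test = int(len(images) * test_frac)
    test, train = idx[:n_test], idx[n_test:]
    return images[train], labels[train], images[test], labels[test]

# Toy dataset: ten random "scans" of mixed sizes, resized to 64x64.
scans = [np.random.rand(100 + 5 * i, 120) for i in range(10)]
X = np.stack([resize_nearest(s, 64, 64) for s in scans])
y = np.arange(10) % 2
X_tr, y_tr, X_te, y_te = train_test_split(X, y)
print(X_tr.shape, X_te.shape)  # (8, 64, 64) (2, 64, 64)
```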
      <p>
        Gozes et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] in their research proposed a system using image segmentation of CT scans of the lungs
that could classify each case as COVID-19 or normal. ResNet50 was used for training and testing on
multiple datasets to detect COVID-19 from CT scan images. The sensitivity score was 94%, the
specificity 98%, and the AUC score achieved was 0.9940.
      </p>
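      <p>Most results in this review are quoted as sensitivity, specificity and AUC; for reference, the first two follow directly from the binary confusion matrix. A minimal sketch, with illustrative labels only:</p>

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative rate)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) * (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) * (y_pred == 0))  # true negatives
    fn = np.sum((y_true == 1) * (y_pred == 0))  # false negatives
    fp = np.sum((y_true == 0) * (y_pred == 1))  # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Example: 1 = COVID-19 positive, 0 = normal.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.75 0.75
```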
      <p>
        Ozturk et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] proposed a Convolutional Neural Network (CNN)-based model for the detection
of COVID-19 cases from chest X-ray images. Two models were proposed: a binary classifier, which
predicted COVID-19 positive or negative, and a multi-class classifier, which predicted COVID-19
positive, negative or pneumonia. The binary classifier achieved an accuracy of 96% and the
multi-class classifier an accuracy of 98%.
      </p>
      <p>
        Keles A et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] trained and developed a ResNet architecture using a deep transfer learning
approach capable of automatically differentiating COVID-19 cases from normal chest X-rays. The
model achieved an accuracy score of 94.28%.
      </p>
      <p>
        Li X et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] proposed a portable device system to automatically detect COVID-19 from chest
X-rays. The proposed system was also able to follow up on each case. The classifier model was
developed using the DenseNet-121 architecture with deep transfer learning and achieved an accuracy
score of 88%.
      </p>
      <p>
        Hu et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] proposed a system based on an Artificial Intelligence model built on ShuffleNet V2.
Image augmentation was performed before the system was trained. Results showed that the system
was capable of fast training with high accuracy in transfer learning applications. The sensitivity,
specificity and area under the curve (AUC) scores achieved with this system were 90.52%, 91.58%
and 0.9689, respectively.
      </p>
      <p>
        Shah et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] proposed a model named CTnet-10 which was a binary classifier that showed an
accuracy of 82.1% against the classification accuracy of 94.5% of the pre-trained VGG-19 model.
      </p>
      <p>
        Dansana et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] proposed a multi-class classifier that classifies chest X-rays into
COVID-19 positive, COVID-19 negative and pneumonia. The proposed model built feature maps
from X-ray images, and classification was performed using VGG-16 with the vectors of these feature
maps. Training was done on the ImageNet dataset, after which the weights of the VGG-16 model
were saved and used for deep learning model training. The system achieved an accuracy of 94.5%.
      </p>
      <p>
        Jin et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] used two publicly available data sets, LIDC-IDRI [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and ILD-HUG [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] for the
training of the proposed system. Medical scanning images were also obtained from the Wuhan Union
Hospital, Western Campus of Wuhan Union Hospital, and Jianghan Mobile Cabin Hospital in Wuhan.
A 2D CNN was used for CT scan image segmentation. Then, the model was trained on these images.
The proposed model achieved the AUC score, sensitivity, and specificity of 0.9791, 94.06% and
95.47% respectively.
      </p>
      <p>
        Barstugan et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] used a machine learning algorithm, Support Vector Machine (SVM) with
feature extraction methods like GLSZM and DWT for training. The accuracy score achieved was
99.68%.
      </p>
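      <p>Feature extraction with the discrete wavelet transform (DWT), as mentioned above, can be illustrated with a single-level 2D Haar decomposition. This is a minimal stand-in, not the authors' exact GLSZM/DWT pipeline; in practice the resulting feature vectors would be fed to an SVM, e.g. scikit-learn's SVC.</p>

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: approximation and three detail subbands."""
    a = img[0::2, :] + img[1::2, :]       # pairwise row sums
    d = img[0::2, :] - img[1::2, :]       # pairwise row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0  # approximation (low-low)
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0  # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0  # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0  # diagonal detail
    return ll, lh, hl, hh

def dwt_features(img):
    """Mean and std of each subband, usable as a feature vector for an SVM."""
    return np.array([f(b) for b in haar_dwt2(img) for f in (np.mean, np.std)])

img = np.random.rand(64, 64)
feats = dwt_features(img)
print(feats.shape)  # (8,)
```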
      <p>
        Amyar et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] proposed a model architecture for image classification, reconstruction and
segmentation based on an encoder and convolutional layers. Three datasets were used to train the
model, and the best model achieved an AUC score of 0.93.
      </p>
      <p>
        Wang et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] trained a U-Net for image segmentation of the lungs. The proposed
model was then used on test CT scan volumes to produce lung masks. The CT scan volumes,
concatenated with their respective lung masks, were sent to DeCoVNet for training. An AUC-ROC
score of 0.959 was achieved with this network.
      </p>
      <p>
        Singh et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] proposed a MODE (multi-objective differential evolution)-based CNN to detect
COVID-19 in chest images. The proposed approach performed better than CNN, ANFIS and ANN
models across all evaluation metrics that were considered.
      </p>
      <p>
        Ahuja et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] classified COVID-19 images using data augmentation and pre-trained networks.
Stationary wavelets and random rotation, translation and shearing operations were used for data
augmentation, which was then applied to the dataset of CT scan images. For classification of the
images, ResNet18 outperformed ResNet50, ResNet101 and SqueezeNet, achieving an AUC score of
0.9965.
      </p>
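      <p>Rotation, translation and shearing augmentations of the kind described above can be sketched in NumPy. This is illustrative only: the stationary-wavelet step and the exact parameter ranges used by the authors are not reproduced, and the shift and shear ranges here are assumptions.</p>

```python
import numpy as np

def shear(img, factor=0.1):
    """Horizontal shear approximated by circularly shifting each row."""
    out = np.empty_like(img)
    for i, row in enumerate(img):
        out[i] = np.roll(row, int(factor * i))
    return out

def augment(img, rng):
    """Random 90-degree rotation, shear and circular translation."""
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    img = shear(img, factor=float(rng.uniform(-0.1, 0.1)))
    shift = rng.integers(-5, 6, size=2)
    return np.roll(img, shift, axis=(0, 1))

rng = np.random.default_rng(42)
scan = np.random.rand(64, 64)
batch = np.stack([augment(scan, rng) for _ in range(8)])
print(batch.shape)  # (8, 64, 64)
```

<p>Each operation only permutes pixels, so the augmented images keep exactly the original intensity distribution while presenting the network with spatially varied inputs.</p>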
      <p>Wang et al. [19] proposed a deep learning model that used DenseNet121-FPN for lung image
segmentation and a COVID-19 network for classification. Two test datasets were used: the first
resulted in an AUC-ROC of 0.87 and the second in an AUC-ROC of 0.88.</p>
      <p>Xu et al. [20] proposed a method that achieved an accuracy score of 86.7%. The proposed model
segmented CT images using ResNet18 after preprocessing and also classified them after relative
location information of the lung image patch was concatenated.</p>
      <p>Kang et al. [21] proposed a pipeline and multi-view representation learning model for CT
scan image classification. The proposed method performed better than the models considered for
comparison, i.e. Support Vector Machine (SVM), Logistic Regression (LR), Gaussian Naïve
Bayes (NB), KNN and neural networks. The technique achieved an accuracy of 95.5%, a sensitivity
score of 96.6% and a specificity of 93.2%.</p>
      <p>Ying et al. [22] proposed a network, DRE-Net, based on a pre-trained ResNet-50. The proposed
network was compared with pre-trained ResNet, DenseNet and VGG16 models. It performed better
than all other models, fetching an AUC-ROC score of 0.92 for image-level classification and 0.95 for
human-level classification.</p>
      <p>Rajagopal et al. [23] proposed a framework for the classification of X-ray images into
COVID-19 positive, pneumonia or COVID-19 negative. A Convolutional Neural Network (CNN) was
used for classification, with transfer learning via VGG Net; XGBoost and SVM were used for feature
extraction. The results showed that SVM with CNN performed the best.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Analysis</title>
      <p>In this section, we take a quantitative and qualitative look at the different articles and the various
types of classifiers they employ.</p>
      <sec id="sec-3-1">
        <title>Various intelligent classification techniques available are:</title>
        <p>1. VGG-16: This design was proposed by Simonyan and Zisserman. In the 2014 ILSVRC
competition, the VGG-16 architecture gave one of the best performances. The small kernel size of
this DCNN is its key feature: it employs 3×3 kernels repeated throughout the layers, with channel
depths growing to 256 and 512. Small kernels also have downsides: although each convolution is
compact, the stacked design still produces a large number of parameters to train. The architecture
also places pooling layers at appropriate points to discard unnecessary features and reduce the
model's complexity.
2. ResNet50: ResNet50 is a residual learning framework that enables easier training of very deep
networks. Instead of learning unreferenced functions, the layers are explicitly reformulated as
learning residual functions with reference to the layer inputs; the residual can be simply defined as
the difference between the features learned and the layer's input. This is accomplished by
establishing shortcut connections that link the nth layer directly to the (n+x)th layer. Such residual
networks are easier to optimise and can gain accuracy from additional depth.
3. Inception V3: The Inception architecture (GoogLeNet) took first place in 2014 with a top-5
accuracy of 93.3 percent; Inception V3 refines this design. The network splits a larger
two-dimensional convolution into two smaller one-dimensional convolutions, which not only cuts
down the number of parameters but also speeds up computation and reduces overfitting. Inception
V3's architecture highlights the relevance of memory management and the model's computing
capabilities.
4. DenseNet121: DenseNet121 is a more recent network architecture, recognised with the best
paper award at CVPR 2017. It reuses features in order to get better results with fewer parameters,
connecting all layers directly so that maximum information transmission between layers in the
network is guaranteed.
5. Convolutional Neural Networks (CNN): A convolutional neural network (CNN/ConvNet) is a
type of deep neural network used in deep learning to evaluate visual imagery. Where we usually
think of neural networks in terms of matrix multiplications, a ConvNet instead employs a technique
known as convolution: a mathematical operation on two functions that yields a third function
describing how the shape of one is changed by the other.</p>
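        <p>To make the convolution and residual ideas above concrete, here is a minimal NumPy sketch of a valid 2D convolution and an identity-shortcut residual block. This is an illustration of the concepts only, not any of the trained architectures; the toy kernel and weight shapes are assumptions.</p>

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D convolution (cross-correlation) of an image with a kernel."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def residual_block(x, w1, w2):
    """y = x + F(x): the block learns the residual F, not the full mapping."""
    f = np.maximum(x @ w1, 0)   # first layer with ReLU
    f = f @ w2                  # second layer
    return x + f                # identity shortcut connection

# A 3x3 vertical-edge kernel applied to a toy half-bright image.
img = np.zeros((6, 6)); img[:, 3:] = 1.0
k = np.array([[1.0, 0.0, -1.0]] * 3)
print(conv2d(img, k).shape)  # (4, 4)

x = np.random.rand(4)
w1 = np.random.rand(4, 4) * 0.1
w2 = np.random.rand(4, 4) * 0.1
print(residual_block(x, w1, w2).shape)  # (4,)
```

<p>Note that with zero weights the residual block reduces to the identity, which is exactly why deep stacks of such blocks remain easy to optimise.</p>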
      </sec>
      <sec id="sec-3-2">
        <title>Summary of selected techniques and reported results</title>
        <table-wrap id="tbl1">
          <caption>
            <p>Techniques proposed in selected reviewed papers and their reported results.</p>
          </caption>
          <table>
            <thead>
              <tr><th>Authors</th><th>Technique</th><th>Reported results</th></tr>
            </thead>
            <tbody>
              <tr>
                <td>Keles A. et al. [6]</td>
                <td>Convolutional Neural Network (CNN) and residual neural network (ResNet) architectures: COV19-CNNet and COV19-ResNet</td>
                <td>Accuracy of 97.61% for COV19-ResNet and 94.28% for COV19-CNNet</td>
              </tr>
              <tr>
                <td>Li X. et al. [7]</td>
                <td>A portable device system for automatic detection of COVID-19 patients based on their chest X-ray images</td>
                <td>Classification accuracy of 88%</td>
              </tr>
              <tr>
                <td>Hu et al. [8]</td>
                <td>A system based on an AI model on ShuffleNet V2, with augmentation of images</td>
                <td>Average sensitivity, specificity and area under the curve (AUC) scores of 90.52%, 91.58% and 0.9689, respectively</td>
              </tr>
              <tr>
                <td>Shah, Vruddhi &amp; Keniya, Rinkal &amp; Shridharani, Akanksha &amp; Punjabi, Manav &amp; Shah, Jainam &amp; Mehendale, Ninad [9]</td>
                <td>A binary classifier model called the CTnet-10 model</td>
                <td>An accuracy of 82.1%</td>
              </tr>
              <tr>
                <td>Dansana, Debabrata &amp; Kumar, Raghvendra &amp; Bhattacharjee, Aishik &amp; Hemanth, D. Jude &amp; Gupta, Deepak &amp; Khanna, Ashish &amp; Castillo, Oscar [10]</td>
                <td>Convolutional Neural Network (CNN) based model using VGG-19, Inception_V2 and decision tree</td>
                <td>91% accuracy presented by the proposed model, as opposed to 78% by Inception_V2 and 60% by decision tree</td>
              </tr>
              <tr>
                <td>Cheng, Jin &amp; Chen, Weixiang &amp; Cao, Yukun &amp; Xu, Zhanwei &amp; Tan, Zimeng &amp; Zhang, Xin &amp; Deng, Lei &amp; Zheng, Chuansheng &amp; Zhou [11]</td>
                <td>An artificial intelligence (AI) system with 2D Convolutional Neural Networks (CNN)</td>
                <td>AUC score of 0.9791, sensitivity of 94.06% and specificity of 95.47%</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>Although this study examines intelligent techniques for image classification, it does not cover
every paper in the field, since it would be impractical to review them all manually. This review aims
more to discuss recent papers and provide both a qualitative and quantitative overview, in order to
establish a snapshot of the current state of the art. By selecting papers from the top conferences and
manually evaluating their content, we only include papers related to image classification.</p>
      <p>Of the papers reviewed here, none addresses topics specific to image classification techniques
alone; rather, the papers combine existing topics in new ways. Accordingly, CT scan image
classification seems to be no different from other research areas, since the ideas seem to scale
well.</p>
      <p>Although most papers were not initially intended for image classification, they were included after
quality assessment. A potential problem with choosing papers only from top conferences is that while
the paper quality is strong, only papers with innovative ideas will be considered by conferences. We
conclude from this study that most image classification techniques are not really innovative but rather
new twists on existing ideas. The variety of papers proposing automatic COVID-19 detection methods
for CT scan image classification should therefore be expanded.</p>
      <p>In the future, a mechanism could be proposed that segments CT scans in real time with utmost
accuracy so that it can be used in medical science.</p>
    </sec>
    <sec id="sec-5">
      <title>5. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] WebMD. (
          <year>2013</year>
          ,
          <article-title>August 7). Coronavirus and COVID-19: What You Should Know</article-title>
          . WebMD. https://www.webmd.com/lung/coronavirus#4-8.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jaiswal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Gianchandani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kumar</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Kaur</surname>
          </string-name>
          , “
          <article-title>Classification of the COVID19 infected patients using DenseNet201 based deep transfer learning</article-title>
          ,
          <source>” Journal of Biomolecular Structure &amp; Dynamics</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] Alom, Md Zahangir; Rahman, M. M. Shaifur; Nasrin, Mst Shamima; Taha, Tarek M.; and Asari, Vijayan K. (
          <year>2020</year>
          ).
          <article-title>COVID_MTNet: COVID-19 Detection with Multi-Task Deep Learning Approaches</article-title>
          . ArXiv.org. https://arxiv.org/abs/2004.03747
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>O.</given-names>
            <surname>Gozes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Frid-Adar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Greenspan</surname>
          </string-name>
          et al.,
          <article-title>“Rapid AI development cycle for the coronavirus (COVID-19) pandemic: initial results for automated detection &amp; patient monitoring using deep learning CT image analysis</article-title>
          ,
          <source>” arXiv</source>
          ,
          <year>2020</year>
          , arXiv:2003.05037.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Ozturk</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Talo</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yildirim</surname>
            <given-names>EA</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baloglu</surname>
            <given-names>UB</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yildirim</surname>
            <given-names>O</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Acharya</surname>
            <given-names>UR</given-names>
          </string-name>
          . “
          <article-title>Automated detection of COVID-19 cases using deep neural networks with X-ray images”</article-title>
          .
          <source>Computers in biology and medicine</source>
          .
          <year>2020</year>
          ;
          <volume>121</volume>
          :
          <fpage>103792</fpage>
          . pmid:
          <volume>32568675</volume>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Keles</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Keles</surname>
            <given-names>MB</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Keles</surname>
            <given-names>A</given-names>
          </string-name>
          .
          <article-title>COV19-CNNet and COV19-ResNet: diagnostic inference engines for early detection of COVID-19</article-title>
          .
          <source>Cognitive Computation</source>
          .
          <year>2021</year>
          ; p.
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          . pmid: 33425046
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Li</surname>
            <given-names>X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhu D.</surname>
          </string-name>
          COVID-MobileXpert:
          <article-title>On-device COVID-19 patient triage and follow-up using chest X-rays</article-title>
          .
          <source>In: 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)</source>
          . IEEE;
          <year>2020</year>
          . p.
          <fpage>1063</fpage>
          -
          <lpage>1067</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruan</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiang</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Automated diagnosis of covid-19 using deep learning and data augmentation on chest ct</article-title>
          . medRxiv.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Shah</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Keniya</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shridharani</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Punjabi</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shah</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mehendale</surname>
            <given-names>N.</given-names>
          </string-name>
          <article-title>Diagnosis of COVID-19 using CT scan images and deep learning techniques</article-title>
          .
          <source>Emergency radiology</source>
          .
          <year>2021</year>
          ; p.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          . pmid:
          <volume>33523309</volume>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Dansana</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kumar</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bhattacharjee</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hemanth</surname>
            <given-names>DJ</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gupta</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khanna</surname>
            <given-names>A</given-names>
          </string-name>
          , et al.
          <article-title>Early diagnosis of COVID-19-affected patients based on X-ray and computed tomography images using deep learning algorithm</article-title>
          .
          <source>Soft Computing</source>
          .
          <year>2020</year>
          ; p.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          . pmid:
          <pub-id pub-id-type="pmid">32904395</pub-id>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cao</surname>
          </string-name>
          et al.,
          “
          <article-title>Development and evaluation of an AI system for COVID-19 diagnosis</article-title>
          ,”
          <source>medRxiv</source>
          ,
          <year>2020</year>
          , https://medRxiv.org/abs/2020.03.20.20039834.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S. G.</given-names>
            <surname>Armato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>McLennan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bidaut</surname>
          </string-name>
          et al., “
          <article-title>The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans</article-title>
          ,”
          <source>Medical Physics</source>
          , vol.
          <volume>38</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>915</fpage>
          -
          <lpage>931</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Depeursinge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vargas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Platon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Geissbuhler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Poletti</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          , “
          <article-title>Building a reference multimedia database for interstitial lung diseases</article-title>
          ,”
          <source>Computerized Medical Imaging and Graphics</source>
          , vol.
          <volume>36</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>227</fpage>
          -
          <lpage>238</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Barstugan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Ozkaya</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Ozturk</surname>
          </string-name>
          , “
          <article-title>Coronavirus (COVID-19) classification using CT images by machine learning methods</article-title>
          ,”
          <source>arXiv</source>
          ,
          <year>2020</year>
          , arXiv:2003.09424.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Amyar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Modzelewski</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruan</surname>
          </string-name>
          ,
          “
          <article-title>Multi-task deep learning based CT imaging analysis for COVID-19: classification and segmentation</article-title>
          ,”
          <source>medRxiv</source>
          ,
          <year>2020</year>
          , https://medRxiv.org/abs/2020.04.16.20064709.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Fu</surname>
          </string-name>
          et al.,
          “
          <article-title>A weakly-supervised framework for COVID-19 classification and lesion localization from chest CT</article-title>
          ,”
          <source>IEEE Transactions on Medical Imaging</source>
          , vol.
          <volume>39</volume>
          , no.
          <issue>8</issue>
          , pp.
          <fpage>2615</fpage>
          -
          <lpage>2625</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Vaishali</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Kaur</surname>
          </string-name>
          , “
          <article-title>Classification of COVID-19 patients from chest CT images using multiobjective differential evolution-based convolutional neural networks</article-title>
          ,”
          <source>European Journal of Clinical Microbiology &amp; Infectious Diseases</source>
          , vol.
          <volume>39</volume>
          , no.
          <issue>7</issue>
          , pp.
          <fpage>1379</fpage>
          -
          <lpage>1389</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ahuja</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. K.</given-names>
            <surname>Panigrahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Dey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gandhi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Rajinikanth</surname>
          </string-name>
          , “
          <article-title>Deep transfer learning-based automated detection of COVID-19 from lung CT scan slices</article-title>
          ,”
          <source>Applied Intelligence</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>