<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Tissue Type Recognition in Whole Slide Histological Images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexander Khvostikov</string-name>
          <email>khvostikov@cs.msu.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrey Krylov</string-name>
          <email>kryl@cs.msu.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ilya Mikhailov</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavel Malkov</string-name>
          <email>pmalkov@mc.msu.ru</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Natalya Danilova</string-name>
          <email>natalyadanilova@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University</institution>
          ,
          <addr-line>Leninskie Gory, 1, building 52, Moscow, 119991</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Medicine, Lomonosov Moscow State University</institution>
          ,
          <addr-line>Lomonosovskiy prospekt, 27, building 1, Moscow, 119991</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Medical Research and Educational Center, Lomonosov Moscow State University</institution>
          ,
          <addr-line>Lomonosovskiy prospekt, 27, building 10, Moscow, 119991</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Moscow Center of Fundamental and Applied Mathematics</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Automatic recognition of the layers of the stomach and colon wall on whole slide images is a highly relevant task in digital pathology, as it can be used to automatically determine the depth of invasion of digestive tract tumors. In this paper we propose a new CNN-based method of automatic tissue type recognition on whole slide histological images. We also describe an effective training pipeline that uses two different training datasets. The proposed method achieved 0.929 accuracy and 0.903 balanced accuracy on the CRC-VAL-HE-7K dataset for 9-class classification, and 0.98 accuracy and 0.926 balanced accuracy on the test subset of whole slide images from the PATH-DT-MSU dataset for 5-class classification. The developed method makes it possible to classify the areas corresponding to the gastric own mucous glands in the lamina propria and to distinguish the tubular structures of a highly differentiated gastric adenocarcinoma from normal glands.</p>
      </abstract>
      <kwd-group>
        <kwd>Deep Learning</kwd>
        <kwd>Image Segmentation</kwd>
        <kwd>Histology</kwd>
        <kwd>Pathology</kwd>
        <kwd>Whole Slide Images</kwd>
        <kwd>Tissue Recognition</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Determining the depth of invasion (T) of digestive tract tumors (stomach and colon tumors)
is one of the most important tasks of surgical pathology, since the depth of invasion is a reliable and
highly significant negative prognostic factor. While determining the depth of invasion of
digestive tract tumors at advanced stages is a relatively simple task for a pathologist, the
detection of foci of adenocarcinoma microinvasion in polyps with low- and high-grade
dysplasia is rather difficult, and deep learning based methods can be applied to it.</p>
      <p>
        The key task in developing such algorithms is learning to recognize the layers of the wall
of the stomach and colon on whole slide images (WSI), namely the lamina propria, muscularis
mucosae, submucosa, own muscle layer, subserous layer, serous membrane and adjacent areas of
adipose tissue. Recognition of tumors and of the layers of the stomach wall in gastric cancer is
one of the most pressing problems in the digital pathology of digestive tract tumors.
The papers available in this area are mainly devoted to recognition and classification tasks
for endoscopic images obtained during gastroscopy. For example, it has been shown that the
accuracy of deep learning-based algorithms for predicting the depth of invasion of early gastric
cancer is 71.43%, which is slightly higher than the 64.41% accuracy of endoscopists [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Whole slide histological image recognition in gastric cancer using deep learning algorithms
is used to estimate the density of infiltration by various types of immune cells [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Deep learning
algorithms are also being developed to count the number of lymph nodes with metastases
in gastric cancer. Existing algorithms make it possible to identify lymphoid tissue
and tumor area, and then determine the ratio of tumor area to lymphoid tissue area in order
to detect lymphogenous metastases in gastric cancer. After training, the tumor detection
performance of the model was comparable to that of experienced pathologists and reached
similar performance in two independent cohorts of gastric cancer screening [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Thus, the existing deep learning based algorithms are so far aimed only at solving specific
narrow problems on histological images of gastric cancer. While there are already
quite effective algorithms for recognizing the depth of invasion and the layers of the intestinal
wall in colorectal cancer [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], there are no similar algorithms for gastric cancer, which can
be explained by the higher frequency of diffuse type tumors and tumors with a discohesive
component among gastric tumors.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Used data</title>
      <p>In this work we use three different image datasets developed for the purposes of WSI
segmentation and tissue type recognition.</p>
      <p>
        The first dataset is NCT-CRC-HE-100K [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. It consists of 100,000 non-overlapping image
patches from hematoxylin &amp; eosin (H&amp;E) stained histological images of human colorectal
cancer and normal tissue (data available at http://dx.doi.org/10.5281/zenodo.1214456). Each
patch has a resolution of 224 × 224 pixels and is assigned one of 9 class labels according
to the specific tissue type or background: adipose (ADI), background (BACK), debris (DEB),
lymphocytes (LYM), mucus (MUC), smooth muscle (MUS), normal colon mucosa (NORM),
cancer-associated stroma (STR), colorectal adenocarcinoma epithelium (TUM). These images
were manually extracted from 86 H&amp;E stained human cancer tissue slides from formalin-fixed
paraffin-embedded (FFPE) samples from the NCT Biobank (National Center for Tumor Diseases,
Heidelberg, Germany) and the UMM pathology archive (University Medical Center Mannheim,
Mannheim, Germany). Tissue samples contained CRC primary tumor slides and tumor tissue
from CRC liver metastases; normal tissue classes were augmented with non-tumorous regions
from gastrectomy specimens to increase variability.
      </p>
      <p>
        The second dataset is CRC-VAL-HE-7K [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. It consists of 7,180 image patches from 50 patients
with colorectal adenocarcinoma (with no patient overlap with NCT-CRC-HE-100K). The authors
of the dataset recommend using it as a validation set for models trained on the larger
NCT-CRC-HE-100K. As in the NCT-CRC-HE-100K dataset, image patches are 224 × 224 pixels
at 0.5 microns per pixel (MPP) and correspond to the same 9 classes. All tissue samples were provided by the NCT
tissue bank.
      </p>
      <p>
        The third dataset is a subset of the PATH-DT-MSU dataset [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ], containing 7 H&amp;E stained whole slide
histological images of digestive tract tumors. Each WSI is a full-thickness fragment of the
stomach wall cut from surgical material, and includes areas of adenocarcinoma, adjacent
areas of visually unchanged lamina propria and the underlying layers of the stomach wall
(muscularis mucosae, submucosa, own muscle layer, subserous areas). The images were collected
at 40x scanning magnification and have a resolution of up to 111,552 × 90,473 pixels. These
whole slide images are annotated with 5 classes (tissue types): areas of gastric adenocarcinoma
(TUM), unchanged areas of the lamina propria (LP), unchanged areas of the muscularis mucosae
(MM), areas of the submucosa, the own muscle layer of the stomach and subserous areas combined into
one class (AT), and background (BG). The annotations are polygons; all pixels inside
a polygon belong to the same class (tissue type or background). An example of a WSI from
PATH-DT-MSU with its annotation is shown in Fig. 1. Images were captured using a Leica
Aperio AT2 scanning microscope (Leica Microsystems Inc., Germany), and the annotations were made
using Aperio ImageScope 12.3.3 (Leica Microsystems Inc., Germany).
      </p>
      <p>It is also worth noting that the area of the annotated regions of the whole slide images in the PATH-DT-MSU
dataset is relatively small compared to the total area of the WSIs. The main reason for this is the
necessity of choosing only regions with a "clear" texture that is most characteristic of
one of the corresponding 5 classes. Another objective reason is the complexity and
laboriousness of the WSI annotation process.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed methodology</title>
      <p>
        At the moment, the direct use of segmentation models for whole slide image
processing with fully-supervised deep learning methods is almost impossible due to the necessity
of preparing accurate full pixel-wise annotations of whole slide images. The common way of
obtaining an approximate automatic tissue segmentation is to split the WSI into
small patches and predict the class label for each patch using a classification model. Given
the very high resolution of WSIs, this approach provides a tissue segmentation map with a
resolution that is appropriate from the medical point of view [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>In this paper we propose a CNN-based model for automatic tissue type recognition on whole
slide histological images based on the patch classification approach. We also propose a special
pipeline for training the model on several different datasets to maximize the efficiency of the
model.</p>
      <sec id="sec-3-1">
        <title>3.1. Data preparation</title>
        <p>
          Image patches from NCT-CRC-HE-100K dataset are available in two versions: with and without
color-normalization using Macenko’s method [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], whereas image patches from CRC-VAL-HE-7K are
available only in the color-normalized version. For this reason we use only the color-normalized version of
the NCT-CRC-HE-100K dataset. This allows us to use the CRC-VAL-HE-7K dataset for validation of the
model trained on NCT-CRC-HE-100K without any modifications.
        </p>
        <p>The whole slide images from the PATH-DT-MSU dataset are also split into train and test
subsets. The train subset contains 5 annotated images, and the test subset contains 2 annotated
images.</p>
        <p>The annotations of the WSIs from the PATH-DT-MSU dataset made with the Aperio ImageScope
software can easily be exported in XML markup format. In order to train the patch
classification model and make our PATH-DT-MSU dataset compatible with NCT-CRC-HE-100K and
CRC-VAL-HE-7K, we extract patches from the WSIs in accordance with the annotations.</p>
        <p>For the train subset we use a window of 320 × 320 pixels, which is moved over each WSI
with a stride of 160 pixels. At each position we check whether the window intersects any
of the annotation polygons, and if it does, the area of intersection is calculated.
If the intersection covers more than 0.75 of the window area, the patch corresponding to the
current position of the window is extracted and saved, as sketched below.</p>
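        <p>A minimal sketch of this sliding-window patch extraction is given below; it assumes that the annotation polygons have already been parsed from the ImageScope XML export into shapely geometries, and all names are illustrative rather than taken from the actual implementation:</p>
        <preformat># Sliding-window patch extraction (sketch). `wsi` is the slide raster as a
# NumPy array of shape (H, W, 3); `polygons` is a list of
# (shapely.geometry.Polygon, class_label) pairs parsed from the XML annotation.
import numpy as np
from shapely.geometry import box

WINDOW, STRIDE, MIN_OVERLAP = 320, 160, 0.75

def extract_patches(wsi, polygons):
    patches = []
    h, w = wsi.shape[:2]
    for y in range(0, h - WINDOW + 1, STRIDE):
        for x in range(0, w - WINDOW + 1, STRIDE):
            window = box(x, y, x + WINDOW, y + WINDOW)
            for poly, label in polygons:
                # keep the patch only if the polygon covers more than 75% of the window
                if window.intersection(poly).area / window.area > MIN_OVERLAP:
                    patches.append((wsi[y:y + WINDOW, x:x + WINDOW].copy(), label))
                    break
    return patches</preformat>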
        <p>For the test subset the patch extraction process is the same, except that the window size is
224 × 224 pixels and the stride is 112 pixels. The larger window size for the training subset is chosen in
order to augment the extracted training patches with rotation. The stride of the moving
window is always half of the window size.</p>
        <p>Moreover, due to the twofold difference in the magnification level of the images from the
NCT-CRC-HE-100K and PATH-DT-MSU datasets, the whole patch extraction procedure described above for the
PATH-DT-MSU WSIs is performed at half of the full scale (at the 20x magnification
level).</p>
        <p>The described patch extraction procedure with the current annotations of the WSIs from the
PATH-DT-MSU dataset allowed us to extract 70,871 patches from the training subset and 14,462 patches
from the test subset. Some examples from the extracted set of patches are shown in Fig. 2.</p>
        <p>It is also necessary to mention that all three datasets used in this work are imbalanced (the
number of patches per class for each dataset is given below):
• NCT-CRC-HE-100K: ADI - 10407, BACK - 10566, DEB - 11512, LYM - 11557, MUC - 8896,
MUS - 13536, NORM - 8763, STR - 10446, TUM - 14317;
• CRC-VAL-HE-7K: ADI - 1338, BACK - 847, DEB - 339, LYM - 634, MUC - 1035, MUS - 592,
NORM - 741, STR - 421, TUM - 1233;
• Train subset of PATH-DT-MSU: AT - 15207, BG - 45977, LP - 3911, MM - 675, TUM - 5101;
• Test subset of PATH-DT-MSU: AT - 1491, BG - 11252, LP - 590, MM - 211, TUM - 918.</p>
        <p>To overcome the class imbalance we use a simple oversampling technique that guarantees
that the number of patches of each class fed into the CNN model during training is equal.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. CNN model and training strategy</title>
        <p>
          In this work we use a CNN-based model for automatic tissue type recognition on whole
slide histological images based on the patch classification approach. Since the amount of data in the
available datasets is limited, we chose the DenseNet architecture [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] as it tends to perform well
with a relatively small number of training samples.
        </p>
        <p>
          DenseNet is a further development of the ResNet architecture [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] with additional direct
connections between any two layers within the same feature-map size. DenseNet consists
of several dense blocks, each corresponding to a fixed feature-map size, connected by
pooling layers. A visualization of a single dense block of the DenseNet architecture is shown in
Fig. 3, and a visualization of the full DenseNet architecture is shown in Fig. 4. As a direct consequence
of the input concatenation, the feature maps learned by any DenseNet layer can be
accessed by all subsequent layers. This encourages feature reuse throughout the network and
leads to a more compact model.
        </p>
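        <p>A minimal TensorFlow 2 sketch of the concatenation pattern inside a dense block is shown below; the layer composition is simplified and does not reproduce the exact DenseNet-121 configuration with its bottleneck layers:</p>
        <preformat># Dense block sketch (TensorFlow 2 / Keras): the output of every layer is
# concatenated with all preceding feature maps within the block.
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=32):
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.ReLU()(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        # all subsequent layers can access the feature maps learned here
        x = layers.Concatenate()([x, y])
    return x</preformat>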
        <p>To get the most efficient tissue type recognition model, in this work we use the transfer
learning principle, gradually adapting the DenseNet model to classify patches from the
target PATH-DT-MSU whole slide images.</p>
        <p>Thus, the training of the proposed model consists of two main steps:
• take the DenseNet model pretrained on the ImageNet dataset and fine tune it on the
NCT-CRC-HE-100K dataset,
• fine tune the obtained DenseNet model on the PATH-DT-MSU dataset.</p>
        <p>These steps are split into three phases of training and are described in detail in the
next section of the paper.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments and results</title>
      <p>
        The DenseNet-121 model from [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] was chosen as the base model. All training
procedures described below used the Adam [<xref ref-type="bibr" rid="ref11">11</xref>] optimizer with automatic decrease of the learning rate on
plateau. The batch size was chosen to be 16.
      </p>
      <p>Since all histological datasets used in this work are imbalanced with respect to the number of
patches per class (tissue type), we use an oversampling technique while training the CNN-based
model. Specifically, the data batches fed into the CNN are created by a generator in the
following way (a sketch is given after the list):
• randomly select one of the classes,
• randomly select a patch from the list of existing patches corresponding to the selected class,
• augment the patch (different augmentation methods are used at different phases of
training),
• repeat until the batch is collected.</p>
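      <p>A minimal sketch of such a generator is shown below; it assumes the extracted patches are grouped into a dictionary mapping each class label to its list of patches, and that the augmentation function is supplied from outside (both names are illustrative):</p>
      <preformat># Class-balanced batch generator (sketch). `patches_by_class` maps each class
# label to the list of its patches; `augment` is the phase-specific augmentation.
import random
import numpy as np

def balanced_batches(patches_by_class, augment, batch_size=16):
    classes = list(patches_by_class.keys())
    while True:
        images, labels = [], []
        for _ in range(batch_size):
            label = random.choice(classes)                  # 1. random class
            patch = random.choice(patches_by_class[label])  # 2. random patch of that class
            images.append(augment(patch))                   # 3. augmentation
            labels.append(label)
        yield np.stack(images), np.array(labels)            # 4. batch collected</preformat>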
      <p>This technique equalizes the number of patches of each class passed through the CNN during
training, thus making the classification more effective and robust.</p>
      <p>During the validation step all image patches from the validation set are passed through the
CNN, and the accuracy, balanced accuracy and confusion matrix are calculated from the retrieved
predictions.</p>
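      <p>Assuming the ground-truth and predicted labels of all validation patches are collected into arrays, these metrics can be computed with scikit-learn as follows (a sketch, not the actual evaluation code):</p>
      <preformat># Validation metrics (sketch); y_true and y_pred are label arrays.
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             confusion_matrix)

acc = accuracy_score(y_true, y_pred)
bal_acc = balanced_accuracy_score(y_true, y_pred)  # mean per-class recall
cm = confusion_matrix(y_true, y_pred)</preformat>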
      <sec id="sec-4-1">
        <title>4.1. First phase of training</title>
        <p>The first phase of training consists in fine tuning the DenseNet-121 model pretrained
on the ImageNet dataset. Due to the enormous number of different types of images in the ImageNet
dataset, the convolutional base of the pretrained DenseNet-121 can be used as a universal image feature
extractor. We replace the last fully-connected layer with a new one with 9 outputs, corresponding
to the number of classes in the NCT-CRC-HE-100K dataset, freeze all layers of the convolutional base and train the
modified DenseNet for 20 epochs with an initial learning rate of 2 · 10<sup>−5</sup>. At this step
augmentation with random flips and 90 degree rotations is applied. The validation is performed
on the CRC-VAL-HE-7K dataset. We reached an accuracy of 0.919 and a balanced accuracy of
0.8816 at this phase.</p>
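        <p>A minimal TensorFlow 2 sketch of this first phase is given below; the hyperparameters follow the text, while the data generator and validation data are assumed to be prepared as described above:</p>
        <preformat># Phase 1 (sketch): DenseNet-121 pretrained on ImageNet, new 9-way classifier,
# frozen feature extractor, Adam with learning-rate decrease on plateau.
import tensorflow as tf

base = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pretrained feature extractor
model = tf.keras.Sequential([base, tf.keras.layers.Dense(9, activation="softmax")])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss")
# model.fit(balanced_batches(...), steps_per_epoch=..., epochs=20,
#           validation_data=..., callbacks=[reduce_lr])</preformat>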
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Second phase of training</title>
        <p>The DenseNet obtained in the previous phase now represents a universal feature extractor
tuned for tissue type classification. In order to better adapt it to histological
images, at the second phase of training we unfreeze all layers in the model and further train
it on the same NCT-CRC-HE-100K dataset for 30 epochs with an initial learning rate of
10<sup>−4</sup>. The augmentation and validation are the same as in the previous phase. At this point we
reached an accuracy of 0.929 and a balanced accuracy of 0.903.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Third phase of training</title>
        <p>After the previous phases of training the DenseNet model is much better adapted to
histological images. Finally, we fine tune it on patches from the whole slide images of the
PATH-DT-MSU dataset. We replace the last fully-connected layer with a new one with 5 outputs, corresponding
to the number of classes in the PATH-DT-MSU dataset, freeze all layers of the convolutional base
and train the model for 8 epochs with an initial learning rate of 10<sup>−5</sup>. Here the data
augmentation, besides the random flip, includes a random rotation with a subsequent crop
to the 224 × 224 size, as sketched below. The validation is performed on the test set of patches extracted from the
PATH-DT-MSU whole slide images. At this phase we finally achieved an accuracy of
0.98 and a balanced accuracy of 0.926. The obtained confusion matrix is shown in Table 1.</p>
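        <p>A possible implementation of this rotation augmentation is sketched below. Extracting the training patches at 320 × 320 pixels is what makes the subsequent central 224 × 224 crop safe: even at a 45 degree rotation the largest fully-valid central square has a side of 320/√2 ≈ 226 pixels. The SciPy-based code is an assumption, not the authors' implementation:</p>
        <preformat># Phase 3 augmentation (sketch): random flip, random rotation of a 320 x 320
# patch, then a central crop to 224 x 224 that never touches the rotated corners.
import numpy as np
from scipy.ndimage import rotate

def augment_patch(patch):
    if np.random.rand() > 0.5:
        patch = np.fliplr(patch)
    angle = np.random.uniform(0, 360)
    patch = rotate(patch, angle, reshape=False, mode="reflect")
    off = (patch.shape[0] - 224) // 2
    return patch[off:off + 224, off:off + 224]</preformat>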
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Applying model to whole slide images</title>
        <p>In order to visually inspect the effectiveness of the trained CNN model in the
task of automatic tissue type recognition, we apply it to the whole slide images from the test
subset of PATH-DT-MSU. Each WSI is split into patches of size 224 × 224 pixels without
overlap, the trained model predicts the class label for each patch, and then all the predictions
are combined into a matrix, which is visualized as a semi-transparent layer superimposed on
the source WSI, as sketched below.</p>
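        <p>A sketch of this tiling-and-prediction step is given below, assuming the WSI has already been loaded as a NumPy array and the trained classifier is available:</p>
        <preformat># WSI inference (sketch): classify non-overlapping 224 x 224 tiles and
# assemble the predictions into a low-resolution class map for overlay.
import numpy as np

def predict_tissue_map(wsi, model, patch=224, batch_size=16):
    h, w = wsi.shape[0] // patch, wsi.shape[1] // patch
    tiles = [wsi[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
             for i in range(h) for j in range(w)]
    probs = model.predict(np.stack(tiles), batch_size=batch_size)
    return probs.argmax(axis=1).reshape(h, w)  # one class label per tile</preformat>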
        <p>A visualization of the proposed method of tissue type recognition is shown in Fig. 5. The
developed algorithm makes it possible to classify with sufficient accuracy the areas corresponding
to the gastric own mucous glands in the lamina propria, and also makes it possible to distinguish
the tubular structures of a highly differentiated gastric adenocarcinoma from normal glands.
The less accurate results for the muscularis mucosae can be explained by its structural
similarity to the longitudinal smooth muscle fibers in the own muscle layer of the stomach.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Comparison with alternative models and training techniques</title>
        <p>Several alternative models and training techniques were tested within this
work.</p>
        <p>Different shallow CNN models with a reduced number of parameters (compared to
DenseNet-121) and without pretraining were only able to reach a balanced accuracy of 0.76 on the
PATH-DT-MSU dataset.</p>
        <p>Directly fine tuning the pretrained DenseNet-121 model on the NCT-CRC-HE-100K dataset in
one step, instead of the proposed two phases of training, and then training the model on the
PATH-DT-MSU dataset yielded a balanced accuracy of 0.9 on the PATH-DT-MSU
dataset.</p>
        <p>Replacing the third phase of training with two phases similar to the first two (first
training the fully connected layer, then training the whole model) did not improve the achieved
accuracy. This can be explained by the small size of the training set extracted from the whole slide
images of the PATH-DT-MSU dataset.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Implementation details</title>
      <p>The proposed CNN-based method for automatic tissue type recognition in whole slide
histological images was implemented in the Python 3 programming language using the open-source
machine learning library TensorFlow 2 [<xref ref-type="bibr" rid="ref12">12</xref>]. The CNN model was trained on a
personal computer with an AMD Ryzen 9 5900HS CPU, 32 GB of RAM and an Nvidia GeForce 3080 GPU.</p>
      <p>Whole slide images were processed using the slideio library [<xref ref-type="bibr" rid="ref13">13</xref>]; the shapely library [<xref ref-type="bibr" rid="ref14">14</xref>] was
used for geometric calculations during patch extraction.</p>
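      <p>A sketch of reading a slide with slideio and caching it as a numpy array is shown below; the exact calls are assumptions based on the library's documented interface, and the file names are illustrative:</p>
      <preformat># Reading a WSI with slideio and caching it as a NumPy array (sketch).
import numpy as np
import slideio

slide = slideio.open_slide("sample.svs", "SVS")
scene = slide.get_scene(0)
# scene.rect is (x, y, width, height); read at half width, i.e. roughly
# the 20x level used for patch extraction above
image = scene.read_block(size=(scene.rect[2] // 2, 0))
np.save("sample.npy", image)</preformat>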
      <p>After the WSIs from the PATH-DT-MSU dataset are converted and saved as
numpy arrays, applying the proposed model of automatic tissue type
recognition with full visualization of the results takes about 1.5 minutes on the above-mentioned
PC configuration, which is fast enough for practical use by pathologists.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In this paper we proposed a new CNN-based method of automatic tissue type recognition in
whole slide histological images together with a special three-phase pipeline for training the CNN model
using different histological datasets. The proposed model achieved an accuracy of
0.929 and a balanced accuracy of 0.903 on the CRC-VAL-HE-7K dataset, and an accuracy of 0.98
and a balanced accuracy of 0.926 on the test subset of patches from the WSIs of the PATH-DT-MSU
dataset.</p>
      <p>The developed method makes it possible to classify the areas corresponding to the gastric
own mucous glands in the lamina propria and to distinguish the tubular structures of a
highly differentiated gastric adenocarcinoma from normal glands.</p>
      <p>Further work will focus on enlarging the number of whole slide images in the PATH-DT-MSU
dataset, adding more detailed annotations and increasing the
number of supported tissue types. We also plan to improve the performance of the proposed method
by pretraining the model with autoencoders on unannotated WSIs.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>The work was funded by RFBR, CNPq and MOST according to the research project 19-57-80014
(BRICS2019-394).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Shang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Dong</surname>
          </string-name>
          , et al.,
          <article-title>Deep learning system compared with expert endoscopists in predicting early gastric cancer and its invasion depth and differentiation status (with videos)</article-title>
          ,
          <source>Gastrointestinal Endoscopy</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Shen</surname>
          </string-name>
          , et al.,
          <article-title>The immune subtypes and landscape of gastric cancer and to predict based on the whole-slide images using deep learning</article-title>
          ,
          <source>Frontiers in Immunology</source>
          <volume>12</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Guan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          , et al.,
          <article-title>Predicting gastric cancer outcome from resected lymph node histopathology images using deep learning</article-title>
          ,
          <source>Nature Communications</source>
          <volume>12</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. N.</given-names>
            <surname>Kather</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Krisam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Charoentong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Luedde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Herpel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-A.</given-names>
            <surname>Weis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Marx</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Valous</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ferber</surname>
          </string-name>
          , et al.,
          <article-title>Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study</article-title>
          ,
          <source>PLoS Medicine</source>
          <volume>16</volume>
          (
          <year>2019</year>
          )
          e1002730
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J. N.</given-names>
            <surname>Kather</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Halama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Marx</surname>
          </string-name>
          ,
          <article-title>100,000 histological images of human colorectal cancer and healthy tissue</article-title>
          ,
          <year>2018</year>
          . URL: https://doi.org/10.5281/zenodo.1214456. doi:10.5281/zenodo.1214456.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Khvostikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Krylov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Mikhailov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kharlova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Oleynikova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Malkov</surname>
          </string-name>
          ,
          <article-title>Automatic mucous glands segmentation in histological images</article-title>
          ,
          <source>International Archives of the Photogrammetry, Remote Sensing &amp; Spatial Information Sciences</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N.</given-names>
            <surname>Oleynikova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khvostikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Krylov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Mikhailov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kharlova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Danilova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Malkov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ageykina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Fedorov</surname>
          </string-name>
          ,
          <article-title>Automatic glands segmentation in histological images obtained by endoscopic biopsy from various parts of the colon</article-title>
          ,
          <source>Endoscopy</source>
          <volume>51</volume>
          (
          <year>2019</year>
          )
          <article-title>OP9</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Macenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Niethammer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Marron</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Borland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Woosley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Guan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schmitt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. E.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <article-title>A method for normalizing histology slides for quantitative analysis</article-title>
          ,
          <source>in: 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro</source>
          , IEEE,
          <year>2009</year>
          , pp.
          <fpage>1107</fpage>
          -
          <lpage>1110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. Van Der</given-names>
            <surname>Maaten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Q.</given-names>
            <surname>Weinberger</surname>
          </string-name>
          ,
          <article-title>Densely connected convolutional networks</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>4700</fpage>
          -
          <lpage>4708</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Deep residual learning for image recognition</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>770</fpage>
          -
          <lpage>778</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Kingma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ba</surname>
          </string-name>
          ,
          <article-title>Adam: A method for stochastic optimization</article-title>
          ,
          <source>arXiv preprint arXiv:1412.6980</source>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Abadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Barham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Devin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghemawat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Irving</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Isard</surname>
          </string-name>
          , et al.,
          <article-title>Tensorflow: A system for large-scale machine learning</article-title>
          ,
          <source>in: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>265</fpage>
          -
          <lpage>283</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Melnikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Popov</surname>
          </string-name>
          ,
          <article-title>Slideio: a new python library for reading medical images</article-title>
          ,
          <year>2020</year>
          -. URL: https://gitlab.com/bioslide/slideio.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gillies</surname>
          </string-name>
          , et al.,
          <article-title>Shapely: manipulation and analysis of geometric objects</article-title>
          ,
          <year>2007</year>
          -. URL: https://github.com/Toblerity/Shapely.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>