<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Glioblastoma Multiforme Classification On High Resolution Histology Image Using Deep Spatial Fusion Network</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>P. Sobana Sumi</string-name>
          <email>sobanasumi.p2018@vitstudent.ac.in</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Radhakrishnan Delhibabu</string-name>
          <email>r.delhibabu@vit.ac.in</email>
          <email>radhakrishnandelhibabu@tdtu.edu.vn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Modeling Evolutionary Algorithms Simulation and Artificial Intelligence, Faculty of Electrical &amp; Electronics Engineering, Ton Duc Thang University</institution>
          ,
          <addr-line>Ho Chi Minh City</addr-line>
          ,
          <country country="VN">Vietnam</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Computer Science and Engg., Vellore Institute of Technology</institution>
          ,
          <addr-line>Vellore</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>A brain tumor is a growth of abnormal cells in the brain, which can be cancerous or non-cancerous. Brain tumors present scarce symptoms, so they are very difficult to classify. Diagnosis from histology images helps us to classify brain tumor types efficiently, but histology-based image analysis is sometimes rejected because of variations in morphological features. Deep learning CNN models help to overcome this problem through feature extraction and classification. Here a method to classify high-resolution histology images is proposed. InceptionResNetV2, a CNN model, is adopted to extract hierarchical features without loss of information. A deep spatial fusion network is then generated to extract the spatial features found between patches and to correct predictions made from unreliable discriminative features. With 10-fold cross-validation on the histology images, the method achieves 95.6 percent accuracy on 4-class classification (benign, malignant, Glioblastoma, Oligodendroglioma), and 99.1 percent accuracy with 99.6 percent AUC on 2-way classification (necrosis and non-necrosis).</p>
      </abstract>
      <kwd-group>
        <kwd>Glioblastoma Multiforme</kwd>
        <kwd>Deep spatial fusion network</kwd>
        <kwd>InceptionResNetV2</kwd>
        <kwd>classification</kwd>
        <kwd>patches</kwd>
        <kwd>CNN</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        A cancerous tumor anywhere in the body can spread to the brain, or a tumor can
start in the brain itself. A brain tumor can be benign (non-cancerous) or malignant
(cancerous) based on its characteristics, and it is found in both children and adults.
Brain tumors are differentiated into two types: Low Grade Gliomas (LGG) and High
Grade Gliomas (HGG). Grades 1 and 2 are defined as LGG; grades 3 and
grade 4 are defined as HGG. Astrocytomas are grade 1 and grade 2 level
tumors, Oligodendroglioma is a grade 3 level tumor, and Glioblastoma Multiforme is
a grade 4 level tumor. Children are affected by Astrocytoma, Ependymoma and
Medulloblastoma, while adults suffer from Astrocytoma, Oligodendroglioma,
Meningioma, Glioblastoma and Schwannoma. Glioblastoma Multiforme (GBM) is the
most advanced stage of brain tumor, with scarce symptoms, so it is very hard to
classify. Diagnosing this kind of tumor at the right time helps to increase patient
survival. Histology images are obtained through biopsy, a process
of taking tissue from the tumor; the tumor tissue is then analyzed under a
microscope. Histology images have to be analyzed in large numbers, with
multiple stainings, to diagnose a single case, which is time consuming.
Sometimes this kind of histology image analysis is rejected because of large
variations in pathological features. At the initial stage, Computer Assisted
Diagnosis (CAD) systems are used to detect tumors in histology images. In this technique,
the slides are scanned first, then the digital images are processed and analyzed by
visual feature extraction with machine learning techniques. Color differences occur
due to different scanners, staining procedures, patient ages and
tissue thicknesses. Color normalization helps to standardize colors among samples [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
CAD works well on breast cancer images, but not on brain tumors.
      </p>
      <p>
        Xu et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] proposed deep activation features for large-scale histology
images. A CNN is used for feature extraction, and the extracted features are passed
into an SVM to classify patches of necrosis and non-necrosis areas. This method
achieved 90 percent accuracy on small datasets through classification and
segmentation. Fukuma et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] explored feature extraction and disease stage
classification for glioma histology images. Here tumors are distinguished as LGG
and GBM using significant features, namely object-level features and spatial
arrangement features, to classify the disease stage. These features are evaluated by the K-S
test, and the obtained result is classified using an SVM classifier. Classification
accuracy drops when other, non-significant features are used instead. In their work
on automated discrimination of low and high grade glioma, Mousavi et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
proposed that the pseudopalisading necrosis area is detected by cell
segmentation and by cell-count profiling. Microvascular proliferation (MVP)
is detected by spatial and morphological feature extraction. Finally, the
hierarchical decision is made through a decision tree. MVP detection accuracy is
lower than that for necrosis because of its structural complexity. Macyszyn
et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] examined a multidimensional pattern classification method, in which an
SVM classifier is used to classify patient survival into short (6 months) and long
(18 months) terms, with two-fold cross-validation for short-term survival and
three-fold cross-validation for long-term survival. Here MRI images are analyzed to
predict survival. Barker et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] explored a coarse-to-fine method to analyze
the characteristics of pathology images. Spatial features such as shape, color and
texture are extracted from tiled regions and passed into clustering for better
classification. K-means is used for clustering, and PCA is used to reduce data
dimensionality and classification complexity. Powell et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] examined low grade
gliomas using a bag-of-words approach. An edge detection algorithm is used
for nuclear segmentation on hematoxylin and eosin stains, with the threshold
assigned from a global value. K-means is used for feature extraction, and an SVM is used
to classify overall patient survival into short and long terms. Xu et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] suggested
the CNN architecture AlexNet for classification, which achieved better results than previous
methods. Here image analysis is done with a limited number of images, whereas a
CNN works well on a large dataset. Yonekura et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] proposed a CNN
architecture with deep networks, consisting of three convolution layers, three
pooling layers, and three ReLUs. Its classification accuracy is low when compared
with other networks, but the work mainly focuses on disease stage classification.
Yonekura et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] suggested a deep CNN architecture based on the LeNet network to
extract features and classify the disease stage. Classification accuracy is low when
compared with other networks.
      </p>
      <p>Classifying high-resolution histology images is a major problem. A CNN can
extract unpredictable discriminative features, but training a CNN directly on a high-resolution
image is computationally expensive. Moreover, histology images mostly contain
unpredictable discriminative features, which poses a challenge for patch-based
CNN classification. To solve this problem, the InceptionResNetV2 architecture
is adopted for hierarchical feature extraction, and a deep spatial fusion network is
used to predict the spatial features found between patches. The proposed system
gives better accuracy.</p>
      <p>The paper is organised as follows. Section 2 introduces the studied problem.
Section 3 describes two freely available databases with high-resolution brain
tumor histology images. Section 5 briefly describes the types of neural networks
used in computer vision and image analysis and the peculiarities of histology images.
Section 4 introduces the architecture of the deep neural networks used. Section 6
provides readers with the essence of the proposed solution, while Section 7
explains aspects of the training process. Section 8 is devoted to machine experiments and
discussion of results. Finally, Section 9 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>Problem Statement</title>
      <p>Histology image diagnosis is entirely a human process, and sometimes this
kind of analysis is disputed due to differences in morphological features.
To diagnose a single tissue and classify its disease stage, the tissue has to be
analyzed under various magnification factors, and several tissues have to be analyzed
by a pathologist to reach a conclusion. In some cases, both surgery and tissue diagnosis
have to be done at the same time, which leads to time pressure. Datasets are lacking,
and images of good quality are rare, yet deep learning techniques require large
datasets for processing; many diseases are not diagnosed due
to the lack of training data. Histology images are currently diagnosed from H and E
stains only. Imbalanced datasets can be addressed by data augmentation, and
other molecular features can be introduced to diagnose histology images beyond
H and E stains.</p>
    </sec>
    <sec id="sec-3">
      <title>Dataset</title>
      <p>
        TCGA and TCIA are the most popular databases from which these high-resolution
brain tumor histology images are taken. The TCGA database consists of 2034 high-resolution
brain tumor histology images of size 2048*1536, and the TCIA
database consists of 2005 images. Each histology image is obtained by a biopsy
process that preserves its molecular composition and original structure. Each
image is an H and E stained microscopic histology image, with various
magnification factors such as 40X, 100X, 200X and 400X. These datasets are labeled with
four classes: benign, malignant, Astrocytoma and Oligodendroglioma;
both datasets contain these 4 class labels evenly. Astrocytoma and
Oligodendroglioma are malignant tumor types. To avoid data imbalance and overfitting,
TensorFlow performs data augmentation by rotation, saturation adjustment, etc.
Normalization to the interval [-1, 1] is done before the augmentation process
to reduce the variance that arises from H and E staining [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. For better
classification, the four types of magnification images are split in the ratio 7:3
for the training and testing processes. Tumor classification works mainly focus on
benign versus malignant binary classification.
      </p>
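As a concrete illustration of the preprocessing described above, the sketch below maps 8-bit pixel values into the [-1, 1] interval and performs a 7:3 train/test split. The exact scaling formula and helper names are assumptions for illustration, not the authors' code.

```python
def normalize(pixels):
    # Map 8-bit pixel values (0..255) linearly onto [-1, 1],
    # as done before augmentation to reduce stain variance.
    return [p / 127.5 - 1.0 for p in pixels]

def split_7_3(items):
    # 7:3 split of a dataset list into training and testing parts.
    cut = int(len(items) * 0.7)
    return items[:cut], items[cut:]

print(normalize([0, 255]))          # [-1.0, 1.0]
train, test = split_7_3(list(range(10)))
print(len(train), len(test))        # 7 3
```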
    </sec>
    <sec id="sec-4">
      <title>Architecture</title>
      <p>
        The proposed architecture is shown in figure 1. A high-resolution brain
tumor histology image is given as input. Unpredictable discriminative features are
present sparsely over the entire image, which means that not all patches are
necessarily consistent with the image-wise label. To model this fact, and to
obtain a good image-wise prediction, a spatial fusion network has been proposed.
First, the adapted InceptionResNetV2 is trained to extract hierarchical
discriminative features and to predict probabilistic values of different cancer types for local
image patches. Compared to VGG [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], InceptionResNetV2 (INRV2) performs
well because of its shortcut connections. The skip connection structure of
INRV2 helps to mitigate several problems that occur while training deep neural networks
with backpropagation, and it improves feature extraction.
Secondly, a deep spatial fusion network is specially designed to learn the spatial
relationship between patches; it takes the spatial feature map as input. The patch-wise
probability vector is taken as the base unit of the spatial feature maps.
The fusion model learns to adjust the bias of patch-wise predictions
and tends to give a more efficient image-wise prediction than typical fusion
methods.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Related Theory</title>
      <sec id="sec-5-1">
        <title>Convolutional Neural Network</title>
        <p>A CNN is a category of neural network which has shown effective results in
image classification, image recognition, etc. Its operations are convolution, non-linearity
(ReLU), pooling or subsampling, and classification (fully connected layer).
Extracting features from the input image is the main job of the convolution
part; the image is convolved with a filter to give a feature map. The rectified
linear unit is a non-linear operation that works element-wise,
replacing all negative values in the feature map with zero. Pooling helps
to reduce the dimensionality of each feature map while retaining the important information.
A fully connected layer is a multilayer perceptron that uses the softmax activation
function in the output layer. The convolutional and pooling layers output high-level
features, and the fully connected layer uses these high-level features
to classify the input image.</p>
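The ReLU and max-pooling operations described above can be sketched in plain Python. This is an illustrative toy on a tiny 2*2 feature map, not the CNN implementation used in the paper.

```python
def relu(feature_map):
    # Element-wise non-linearity: replace every negative value with zero.
    return [[max(0.0, v) for v in row] for row in feature_map]

def max_pool_2x2(feature_map):
    # 2x2 max pooling: keep the largest value in each 2x2 window,
    # halving both spatial dimensions.
    h, w = len(feature_map), len(feature_map[0])
    return [[max(feature_map[i][j], feature_map[i][j + 1],
                 feature_map[i + 1][j], feature_map[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

fm = [[-1.0, 2.0], [3.0, -4.0]]
print(relu(fm))          # [[0.0, 2.0], [3.0, 0.0]]
print(max_pool_2x2(fm))  # [[3.0]]
```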
      </sec>
      <sec id="sec-5-2">
        <title>InceptionResNetV2</title>
        <p>InceptionResNetV2 is a CNN that has been trained on more than one million
images from the ImageNet database. It is a 164-layer deep network that can
classify images into 1000 object categories – varieties of animals and birds, everyday objects such as a box or pencil,
etc. Through this training, the network has learned rich features from a wide variety of images.
Its residual skip connections pass information forward with no loss. The training phase with this
network is much faster, and it produces better accuracy, than the Inception network.
INRV2 achieves a 19.9 percent top-1 error and a 4.9 percent top-5 error. In
the proposed work this network consists of 24 layers and 4 blocks.</p>
      </sec>
      <sec id="sec-5-3">
        <title>Histology Image Analysis</title>
        <p>The brain tumor histology image is analyzed only from H and E stained slides.
The malignancy state can be determined by the presence or absence of certain
histological features, such as necrosis, mitotically active cells,
nuclear atypia, and microvascular proliferation (enlarged blood vessels).</p>
        <p>These H and E stains are universally used for histological tissue
examination. Classification and grading of brain tumors can be improved by including other
molecular information along with the histology image based information. In our
proposed method, a deep patch-based process and a deep spatial fusion network are
used to classify high-resolution histology images.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Proposed Work</title>
      <sec id="sec-6-1">
        <title>Patch-wise INRV2</title>
        <p>Instead of adopting a plain feedforward CNN, our proposed architecture uses
InceptionResNetV2.</p>
        <p>Compared to other CNN architectures, INRV2 effectively reduces the difficulty
of training a deep network through shortcut connections and residual learning.
Through its skip connections, no information is lost to the non-linear functions.
Extracted hierarchical features from low level to high level are combined to make the
final prediction, since discriminative features are distributed in the image
from the cellular to the tissue level. The input layer receives normalized image patches of
size 512*512, sampled from the whole histology image. The depth of
the network is 24 layers, with 4 block units for exploring region patterns at
different scales. The receptive fields of the four block groups span 19*19 to 43*43,
51*51 to 99*99, 115*115 to 211*211, and 243*243 to 435*435 pixels. These sizes respond
to region patterns in nuclei, structural organization and tissue organization.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Deep Spatial Fusion Network</title>
        <p>
          The main purpose of the fusion model is to predict the image-wise label ẑ among
Y classes C = {C1, C2, . . . , CY }, given all patch-wise feature maps F produced by
the proposed INRV2 network as input. The image-wise classification prediction
is defined as the MAP estimate [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] as follows:
ẑ = arg max_{z∈C} P(z|F).
        </p>
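The MAP decision rule above amounts to an argmax over the image-wise class probabilities. A minimal sketch, with hypothetical probability values:

```python
def map_prediction(class_probs, classes):
    # z_hat = argmax over z in C of P(z | F): return the class whose
    # image-wise probability is highest.
    best = max(range(len(classes)), key=lambda i: class_probs[i])
    return classes[best]

classes = ["benign", "malignant", "Glioblastoma", "Oligodendroglioma"]
print(map_prediction([0.05, 0.10, 0.70, 0.15], classes))  # Glioblastoma
```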
        <p>If the entire high-resolution image is divided into M * N patches, then all
the patch-wise probability maps are arranged in spatial order as the matrix
F = [ F_11 ... F_1N ; F_21 ... F_2N ; ... ; F_M1 ... F_MN ],
where F_ij is the probability map of the patch in row i and column j.</p>
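Arranging the patch-wise probability vectors into their spatial grid can be sketched as follows; the helper name and the 2*3 grid in the usage line are illustrative assumptions.

```python
def spatial_map(patch_probs, M, N):
    # Arrange M*N patch-wise probability vectors (given row-major)
    # into an M x N grid F, preserving their spatial order.
    assert len(patch_probs) == M * N
    return [patch_probs[r * N:(r + 1) * N] for r in range(M)]

# e.g. a 2 x 3 grid of uniform 4-class probability vectors
probs = [[0.25] * 4 for _ in range(6)]
F = spatial_map(probs, 2, 3)
print(len(F), len(F[0]))  # 2 3
```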
        <p>
          Here, a deep neural network (DNN) is used to exploit the spatial relationship
between patches, as shown in figure 5. The proposed fusion model contains 4 fully
connected layers, each followed by a ReLU activation function [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. During image-wise
training, the multilayer perceptron converts the spatial distribution of local
probability maps into global class probability vectors. A dropout layer is added
before each hidden layer to avoid overfitting and to increase robustness; in particular,
a dropout layer is inserted between the flattened probabilistic vector and the first
hidden layer. By dropping out half of the probability maps, the model learns
image-wise prediction from half of the patch information while minimizing the
cross-entropy loss in training.
Overlapping 512*512 pixel patches are extracted from the high-resolution
image and given as input to the deep patch-based model shown in figure 4. We
assume that patch labels are consistent with the image ground truth, because
patch-wise labels are not given in the training dataset. This biased assumption may hurt
patch-based training and reduce patch-wise classification accuracy; in the second
stage, under supervised learning of image labels, the bias is reduced during image-based
training. To counter dataset imbalance and overfitting, augmentation is
done by random rotation, changes in contrast and brightness,
horizontal flipping, etc. This generates 202,174 patches from the TCGA database
and 140,099 patches from the TCIA database. InceptionResNetV2 is trained with
mini-batches of size 32 to minimize the cross-entropy cost function, using Adam
optimization [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] with a learning rate of 10−5 for 50 epochs. After training, this
patch-wise network encodes each 512*512 patch into a 10*10 feature
map and 4-class probabilistic values.
        </p>
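The cross-entropy objective minimized during patch-wise training can be illustrated in a few lines of plain Python. This is a sketch of the loss computation only; the actual training minimizes it with Adam in a deep learning framework, and the batch values below are made up.

```python
import math

def cross_entropy(pred_probs, true_index):
    # Cross-entropy loss for one sample: -log of the probability
    # the model assigned to the true class.
    return -math.log(pred_probs[true_index])

def batch_loss(batch):
    # Mean loss over a mini-batch of (probability vector, label) pairs.
    return sum(cross_entropy(p, y) for p, y in batch) / len(batch)

batch = [([0.7, 0.1, 0.1, 0.1], 0), ([0.1, 0.8, 0.05, 0.05], 1)]
print(round(batch_loss(batch), 4))  # 0.2899
```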
        <p>To train the spatial fusion network, a data augmentation process is performed,
generating 5,890 high-resolution images from the TCGA training dataset and
3,998 images from the TCIA training dataset. After augmentation, each high-resolution
image is divided into 12 non-overlapping patches of 512*512 pixels.
Each patch is given individually as input to InceptionResNetV2, which
outputs 512 feature maps of size 10*10 and a class probability vector of size
1*4. The probabilistic vectors of the patches in the current image are then combined into
a probabilistic map following their spatial order, which is given as input to the
spatial fusion network. This probability map can be viewed as a high-level
feature map that encodes all the patch-wise discriminative features and the
image-wise spatial context features. It is then compared with the image ground
truth values. The spatial fusion network weights are learned using mini-batch
gradient descent with a batch size of 32 and Adam optimization. To minimize the
cross-entropy loss during training, the spatial fusion model encodes the
biased probabilistic map into a k-class vector approximating the image ground
truth (k=4). By using the spatial context-aware features hidden in the probabilistic
map, image-based classification accuracy can be improved.</p>
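The 12-patch count follows directly from the dimensions given earlier: a 2048*1536 image split into 512*512 tiles gives a 4*3 grid. A small sketch (the helper name is an assumption):

```python
def tile_coords(width, height, patch=512):
    # Top-left coordinates of all non-overlapping patch x patch tiles,
    # in row-major spatial order.
    return [(x, y)
            for y in range(0, height - patch + 1, patch)
            for x in range(0, width - patch + 1, patch)]

# A 2048 x 1536 TCGA image yields a 4 x 3 grid = 12 patches.
coords = tile_coords(2048, 1536)
print(len(coords))  # 12
```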
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Experimental Results</title>
      <p>
        We evaluated the performance of the patch-based InceptionResNetV2 and then
focused on the effectiveness of the spatial fusion network through multiple
comparative experiments on the two datasets. The first experiment (Baseline) on the TCIA
dataset is a baseline method [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], based on a patch-based plain CNN together
with a multiple-vote fusion method. The second (Residual and Vote) on the TCIA
dataset replaces the plain CNN with a patch-wise residual network. Our proposed
work is examined on both datasets, as shown in Table 1. All these methods are
evaluated with ten-fold cross-validation. On the TCIA dataset, the proposed method
achieves an accuracy of 86.8 percent for 4-class classification, which outperforms
the baseline method [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] by 8.4 percent; the ResNet and vote method brings an
improvement of 4.9 percent. As a comparison, on the TCGA dataset, CNN with GBT [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] is
evaluated, which adopts several CNN networks (ResNet50, InceptionV3, VGG16)
in an ensemble model with a gradient boosted tree classifier on the extracted
features. On the TCGA dataset, our proposed method achieves 95.6 percent accuracy
on 4-class classification, and 99.1 percent accuracy with 99.6 percent AUC on 2-class
classification (necrosis and non-necrosis).
      </p>
      <p>Classification performance in terms of ROC (receiver operating
characteristic) curves and confusion matrices is shown in figure 6 and figure 7. This
performance is achieved with the PyTorch library on an NVIDIA 1080Ti GPU; it takes
around 80 ms to classify a single high-resolution histology image.</p>
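The AUC reported above can be computed from classifier scores with the Wilcoxon–Mann–Whitney rank formulation: the probability that a randomly chosen positive scores above a randomly chosen negative. A minimal sketch, with made-up scores:

```python
def auc(scores_pos, scores_neg):
    # ROC AUC as the fraction of (positive, negative) pairs where the
    # positive receives the higher score; ties count half.
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

print(auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2]))  # 8 of 9 pairs correctly ranked
```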
    </sec>
    <sec id="sec-8">
      <title>Conclusion</title>
      <p>The method proposed in this paper is a deep spatial fusion network
that handles the complex combination of discriminative features over patches and
learns to adjust the bias of patch-wise predictions in high-resolution histology images.
A patch-wise InceptionResNetV2 is adopted to extract features from the cellular
level to the tissue level, and the method is shown to analyze the spatial relationship
between patches. Compared to previous CNN experiments with various
architectures, our proposed method gives better performance. Further, this work
can be extended with other networks that efficiently analyze more types
of malignant tumors beyond Glioblastoma and Oligodendroglioma.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H.O.</given-names>
            <surname>Lyon</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.P. De Leenheer</surname>
            ,
            <given-names>R.W.</given-names>
          </string-name>
          <string-name>
            <surname>Horobin</surname>
            ,
            <given-names>W.E.</given-names>
          </string-name>
          <string-name>
            <surname>Lambert</surname>
            ,
            <given-names>E.K.W.</given-names>
          </string-name>
          <string-name>
            <surname>Schulte</surname>
            ,
            <given-names>B. Van</given-names>
          </string-name>
          <string-name>
            <surname>Liedekerke</surname>
            ,
            <given-names>D.H.</given-names>
          </string-name>
          <string-name>
            <surname>Wittekind</surname>
          </string-name>
          ,
          <article-title>Standardization of reagents and methods used in cytological and histological practice with emphasis on dyes, stains and chromogenic reagents, Histochem</article-title>
          . J.
          <volume>26</volume>
          (
          <issue>7</issue>
          ) (
          <year>1994</year>
          )
          <fpage>533</fpage>
          -
          <lpage>544</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Yan</given-names>
            <surname>Xu</surname>
          </string-name>
          , Zhipeng Jia, Yuqing Ai, Fang Zhang, Maode Lai,
          <string-name>
            <surname>Eric I-Chao</surname>
            <given-names>Chang</given-names>
          </string-name>
          ,
          <source>Deep Convolutional Activation Features For Large Scale Brain Tumor Histopathology Image Classification And Segmentation</source>
          , 2015 IEEE ICASSP
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Kiichi</given-names>
            <surname>Fukuma</surname>
          </string-name>
          , Hiroharu Kawanaka, Surya Prasath,
          <string-name>
            <given-names>Bruce J.</given-names>
            <surname>Aronow</surname>
          </string-name>
          and
          <string-name>
            <given-names>Haruhiko</given-names>
            <surname>Takase</surname>
          </string-name>
          ,
          <source>Feature Extraction and Disease Stage Classification for Glioma Histopathology Images</source>
          ,
          <year>2015</year>
          17th
          <string-name>
            <given-names>International</given-names>
            <surname>Conference on E-health</surname>
          </string-name>
          <string-name>
            <surname>Networking</surname>
          </string-name>
          ,
          <source>Application and Services (HealthCom)</source>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Hojjat</given-names>
            <surname>Seyed</surname>
          </string-name>
          <string-name>
            <given-names>Mousavi</given-names>
            , Vishal Monga, Ganesh Rao,
            <surname>Arvind U. K. Rao</surname>
          </string-name>
          ,
          <article-title>Automated discrimination of lower and higher grade gliomas based on histopathological image analysis</article-title>
          ,
          <source>J Pathol Inform</source>
          <year>2015</year>
          ,
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Luke</given-names>
            <surname>Macyszyn</surname>
          </string-name>
          , Hamed Akbari,
          <string-name>
            <surname>Jared M. Pisapia</surname>
          </string-name>
          , Xiao Da, Mark Attiah, Vadim Pigrish,Yingtao Bi, Sharmistha Pal,
          <string-name>
            <surname>Ramana</surname>
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Davuluri</surname>
          </string-name>
          , Laura Roccograndi, Nadia Dahmane, Maria Martinez-Lage, George Biros, Ronald L. Wolf, Michel Bilello,
          <string-name>
            <surname>Donald M. O'Rourke</surname>
          </string-name>
          , and Christos Davatzikos,
          <article-title>Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques</article-title>
          ,
          <source>NeuroOncology 2015</source>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Jocelyn</given-names>
            <surname>Barkerc</surname>
          </string-name>
          , Assaf Hoogia, Adrien Depeursingea,b, Daniel L. Rubin,
          <article-title>Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles</article-title>
          ,
          <source>Elsevier</source>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Reid</given-names>
            <surname>Trenton</surname>
          </string-name>
          <string-name>
            <surname>Powell</surname>
          </string-name>
          , Adriana Olar, Shivali Narang, Ganesh Rao, Erik Sulman,
          <string-name>
            <given-names>Gregory N.</given-names>
            <surname>Fuller</surname>
          </string-name>
          , Arvind Rao,
          <article-title>Identification of Histological Correlates of Overall Survival in Lower Grade Gliomas Using a Bag-of-words Paradigm: A Preliminary Analysis Based on Hematoxylin and Eosin Stained Slides from the Lower Grade Glioma Cohort of The Cancer Genome Atlas</article-title>
          ,
          <source>2017 Journal of Pathology Informatics</source>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Yan</given-names>
            <surname>Xu</surname>
          </string-name>
          , Zhipeng Jia,
          <string-name>
            <surname>Liang-Bo</surname>
            <given-names>Wang</given-names>
          </string-name>
          ,
          <source>Yuqing Ai</source>
          , Fang Zhang, Maode Lai and
          <string-name>
            <surname>EricI-Chao</surname>
            <given-names>Chang</given-names>
          </string-name>
          ,
          <article-title>Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features, 2017 BMC Bioinformatics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Yonekura</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kawanaka</surname>
            <given-names>H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prasath</surname>
            <given-names>VBS</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aronow</surname>
            <given-names>BJ</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Takase</surname>
            <given-names>H</given-names>
          </string-name>
          ,
          <article-title>Improving the generalization of disease stage classification with deep CNN for glioma histopathological images</article-title>
          ,
          <source>In: International workshop on deep learning in bioinformatics, biomedicine, and healthcare informatics (DLB2H)</source>
          ;
          <year>2017</year>
          . pp.
          <fpage>1222</fpage>
          -
          <lpage>1226</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Asami</given-names>
            <surname>Yonekura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Hiroharu</given-names>
            <surname>Kawanaka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. B. Surya</given-names>
            <surname>Prasath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Bruce J.</given-names>
            <surname>Aronow</surname>
          </string-name>
          , Haruhiko Takase,
          <article-title>Automatic disease stage classification of glioblastoma multiforme histopathological images using deep convolutional neural network</article-title>
          ,
          <source>Korean Society of Medical and Biological Engineering</source>
          and Springer-Verlag GmbH Germany, part of Springer Nature (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Simonyan</surname>
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zisserman</surname>
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Very deep convolutional networks for large-scale image recognition</article-title>
          .
          <source>arXiv preprint arXiv:1409.1556</source>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Greig</surname>
            <given-names>D.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Porteous</surname>
            <given-names>B.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seheult</surname>
            <given-names>A.H.</given-names>
          </string-name>
          :
          <article-title>Exact maximum a posteriori estimation for binary images</article-title>
          .
          <source>Journal of the Royal Statistical Society</source>
          . Series B (Methodological) pp.
          <fpage>271</fpage>
          -
          <lpage>279</lpage>
          (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Nair</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hinton</surname>
            <given-names>G.E.</given-names>
          </string-name>
          :
          <article-title>Rectified linear units improve restricted boltzmann machines</article-title>
          .
          <source>In: Proceedings of the 27th international conference on machine learning (ICML-10)</source>
          . pp.
          <fpage>807</fpage>
          -
          <lpage>814</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Macenko</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Niethammer</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marron</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Borland</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Woosley</surname>
            <given-names>J.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guan</surname>
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmitt</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thomas</surname>
            <given-names>N.E.</given-names>
          </string-name>
          :
          <article-title>A method for normalizing histology slides for quantitative analysis</article-title>
          .
          <source>In: Biomedical Imaging: From Nano to Macro (ISBI'09), IEEE International Symposium on</source>
          . pp.
          <fpage>1107</fpage>
          -
          <lpage>1110</lpage>
          . IEEE (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Kingma</surname>
            <given-names>D.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ba</surname>
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Adam: A method for stochastic optimization</article-title>
          .
          <source>arXiv preprint arXiv:1412.6980</source>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Araujo</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aresta</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castro</surname>
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rouco</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aguiar</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eloy</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Polonia</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Campilho</surname>
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Classification of breast cancer histology images using convolutional neural networks</article-title>
          .
          <source>PLoS ONE 12(6)</source>
          ,
          <elocation-id>e0177544</elocation-id>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Rakhlin</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shvets</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iglovikov</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalinin</surname>
            <given-names>A.A.</given-names>
          </string-name>
          :
          <article-title>Deep convolutional neural networks for breast cancer histology image analysis</article-title>
          .
          <source>arXiv preprint arXiv:1802.00752</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>