<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Agricultural parcel localization on satellite images using U-Net-based neural network</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maria Pavlova</string-name>
          <email>m.pavlova@visillect.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mikhail Zagarev</string-name>
          <email>mzagarev@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexey Savchik</string-name>
          <email>savsmail@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Igor Kukoev</string-name>
          <email>kukoevigor72@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lev Teplyakov</string-name>
          <email>teplyakov.l@ya.ru</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anton Grigoryev</string-name>
          <email>grigoryev@visillect.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Digital Agro LLC</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Vision systems laboratory, IITP RAS (Kharkevich Institute)</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>168</fpage>
      <lpage>171</lpage>
      <abstract>
        <p>This work considers the problem of automatic delineation of agricultural parcels on satellite images, based on true-color images and NDVI vegetation index maps from Sentinel-2 satellites (10 m ground sampling distance). The problem is solved using a U-Net-based convolutional neural network. We formulate the problem as either parcel mask or boundary detection; multiclass (simultaneous) training did not prove to be effective. The approach looks promising and applicable to automated land mapping for agricultural monitoring systems.</p>
      </abstract>
      <kwd-group>
        <kwd>U-Net</kwd>
        <kwd>convolutional neural network</kwd>
        <kwd>precision agriculture</kwd>
        <kwd>automated mapping</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>In this paper, we consider the problem of automating the
mapping of agricultural fields using satellite data of
10-meter spatial resolution. This task is relevant both in
cadastral accounting and in agricultural monitoring. High-precision
manual mapping of parcels is a labor-intensive process, and
knowledge of parcel boundaries is an essential
element in solving other tasks of agricultural monitoring, in
particular, evaluating various indicators of productivity and
land condition when using precision farming approaches.</p>
      <p>
        There are many works on the related problem of crop
classification [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]–[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The problem of field mapping automation
is less well studied, although there are some works [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        Some works, including [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], use high-resolution data,
which makes the problem easier. For example, high-resolution
80-cm imagery allows detecting single trees [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However,
lower-resolution data is more widely available thanks to research
programs such as Sentinel-2 [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. It provides regularly updated
(about twice a week) multispectral satellite imagery at
resolutions from 10 to 60 meters per pixel, depending on the
spectral channel.
      </p>
      <p>
        Determining the most suitable spectral ranges for mapping
is not a trivial task. This paper relies on the fact that so-called
vegetation indices (images calculated from imagery in different
wavelength ranges [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]) have long been known and widely
used to solve problems of agricultural monitoring.
      </p>
      <p>
        In this paper, in addition to the image in the visible range,
we investigate the use of NDVI (Normalized Difference
Vegetation Index) as input data for automated mapping. It is a
normalized relative vegetation index, which is useful in crop
monitoring problems [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. NDVI is calculated from the observed
intensities of the red (RED) and near-infrared (NIR)
channels:
      </p>
      <p>NDVI = (NIR − RED) / (NIR + RED)</p>
      <p>
        Currently, algorithms based on trainable artificial neural
networks, in particular those using convolutional layers, are widely
used in various image analysis problems [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
A particular case of fully convolutional architecture, showing
good quality in segmentation problems, is the U-Net family of
neural networks [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], which implements a multiscale approach
to image analysis. We can also note one of the universal
segmentation algorithms, Mask R-CNN [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], in which object
detection is accompanied by further pixel-by-pixel segmentation
using convolutional layers. This algorithm allows one to detect
individual objects and their exact boundaries, even when
their images intersect. However, this more complicated method
is not required for detecting masks or parcel boundaries, since
the objects here do not intersect and there is no problem with partial
obscuration.
      </p>
    </sec>
    <sec id="sec-2">
      <title>II. METHOD</title>
      <p>We used a neural network approach to automatically detect
parcels. In this approach we model detection with a function
f_w : X → Y. The function f_w maps multi-channel images
x ∈ X = [0, 1]^(h×w×c) to single-channel images y ∈ Y =
[0, 1]^(h×w). The pixels y_{i,j} ∈ [0, 1] of such an image contain
confidence estimates that the (i, j) pixel in the source image
refers to a parcel (or a parcel boundary, in the case of boundary
detection). The form of the function f_w depends on the task to
be solved and defines the architecture of the artificial neural
network.</p>
      <p>A sequential network is a simple example of an artificial
neural network. Such a network consists of an input x = h_0
and a few trainable functions (layers) h_i = f_i(h_{i−1}), i ∈
{1, …, n}, applied sequentially. The last layer output y = h_n is
treated as the neural network output. For image processing
tasks, convolutional layers are usually used. In this case,
the input and the output of a layer are images, i.e., 3d tables
(tensors).</p>
      <p>The output of a convolutional layer is computed as
h^{i+1}_{x,y,c_t} = ReLU( Σ_{dx,dy,c_f} w^i_{dx,dy,c_f,c_t} · h^i_{x+dx, y+dy, c_f} ),
where ReLU(x) = max(x, 0). The idea of such filters is to
process different image parts locally in the same way. This
also allows the weights w to be optimized simultaneously on different
parts of the image. Convolutions typically have a small kernel
size, usually 3×3 (bigger ones are reducible to several such
kernels). To make the output size equal to the input size,
the input image is usually zero-padded.</p>
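      <p>A naive reference implementation of such a layer (a sketch for illustration only; real frameworks use heavily optimized kernels, and the function and argument names here are our own) makes the summation over dx, dy, and input channels explicit:</p>

```python
import numpy as np

def conv2d_relu(x, w):
    """Zero-padded 'same' convolution followed by ReLU.

    x: input image, shape (H, W, C_in)
    w: kernel, shape (kh, kw, C_in, C_out), e.g. 3x3
    """
    kh, kw, c_in, c_out = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))  # zero padding keeps H, W
    H, W = x.shape[:2]
    out = np.zeros((H, W, c_out))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + kh, j:j + kw, :]           # local neighbourhood
            out[i, j] = np.tensordot(patch, w, axes=3)  # sum over dx, dy, c_f
    return np.maximum(out, 0.0)                         # ReLU(t) = max(t, 0)
```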
      <p>The parameters (weights) w of the function f_w are automatically
tuned on the collected dataset {x_i, y_i} with the expertly
marked positions of the parcels (and their borders). The weights w
are adjusted so as to minimize the loss function (or empirical
risk) L(w) = (1/N) Σ_i l(f_w(x_i), y_i), where l is the two-class cross
entropy:</p>
      <p>l(y, ŷ) = −log(y) if ŷ = 1, and −log(1 − y) if ŷ = 0.</p>
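      <p>A small sketch of this loss (our own helper, written for clarity rather than efficiency; the clipping constant is an assumption to keep the logarithm finite):</p>

```python
import numpy as np

def binary_cross_entropy(y, y_hat, eps=1e-12):
    """Two-class cross entropy averaged over pixels.

    y:     predicted confidences in [0, 1]
    y_hat: ground-truth labels in {0, 1}
    """
    y = np.clip(y, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(y_hat * np.log(y) + (1 - y_hat) * np.log(1 - y)))
```

For a prediction of 0.5 on every pixel the loss equals log 2, regardless of the labels, which is a convenient sanity check.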
      <p>
        U-Net-like architectures [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] are commonly used in
image segmentation and pixel-wise classification problems.
Neural networks of this type have an encoder-decoder
architecture with several layers of different resolutions. Such an
architecture is not sequential and has shortcut connections.
U-Net-like networks have several advantages, including
a sufficiently large receptive field, which allows the neural
network to make each pixel decision based on a relatively
wide spatial neighborhood, and
computational efficiency due to the multiscale approach.
A network of this architecture works with a small number
of weights on the original-resolution image, then with a large
number of weights on an image of much lower resolution
and, in the end, combines the low-resolution results with
high-resolution data from the original image for more accurate pixel
prediction. In this paper, we used a neural network with 32
filters on the first layer and a smallest network scale of 1/8
(see Fig. II).
      </p>
      <p>A. Training</p>
      <p>A dataset consisting of 122 4-channel images of 22 areas of
the Earth's surface for the period 04/05/2018 – 11/11/2018 from
the Sentinel-2 satellite imagery archive was prepared. The size
of each image is 1030×1030 pixels, and the imagery resolution is 10
m/pixel. The first 3 channels of each image are the visible colors
(TCI in Sentinel-2 nomenclature), and the fourth channel is
the NDVI vegetation index map (calculated from bands
4 and 8 of the original multispectral image), see Fig. 1ab.</p>
      <p>Fields and similar structures were manually marked on each
full-color image in the form of polygonal contours, with around
400 objects marked across all images on average. The field
boundary mask was constructed as follows: the field mask was
morphologically dilated with a square 10×10 window,
after which the points included in the dilated mask but not
in the original one were considered boundary points
(see Fig. 1cd).</p>
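      <p>The dilate-and-subtract construction above can be sketched as follows (a naive illustration rather than the production pipeline; the anchoring of the even-sized 10×10 window is an implementation detail we assume here):</p>

```python
import numpy as np

def boundary_mask(field_mask, win=10):
    """Boundary points: in the dilated field mask but not in the original.

    field_mask: boolean (H, W) array
    win:        side length of the square dilation window
    """
    H, W = field_mask.shape
    r = win // 2
    padded = np.pad(field_mask, r)
    dilated = np.zeros_like(field_mask)
    for i in range(H):
        for j in range(W):
            # morphological dilation = "any" (max) over the window
            dilated[i, j] = padded[i:i + win, j:j + win].any()
    return dilated & ~field_mask
```

In practice a library routine such as a binary dilation from an image-processing package would replace the explicit loops.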
      <p>The dataset was divided into training and test parts,
consisting of 17 regions with 94 images in total and 5 regions with
28 images, respectively.</p>
      <p>
        The network was trained for 25 epochs of 500
batches each. Each batch contains 32 random 128×128 patches
cropped from the original images. The optimizer used
is Adam [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], one of the stochastic gradient descent
methods. We used a loss function consisting of cross entropy
and l2 regularization with a 0.0001 coefficient to prevent
overfitting.
      </p>
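      <p>The patch sampling step can be sketched as below (our own illustrative helper; names, the fixed seed, and the image/mask pairing are assumptions, not the authors' code):</p>

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def sample_batch(images, masks, batch_size=32, patch=128):
    """Crop matching random patches from randomly chosen image/mask pairs."""
    xs, ys = [], []
    for _ in range(batch_size):
        k = rng.integers(len(images))        # pick a source image
        H, W = images[k].shape[:2]
        i = rng.integers(H - patch + 1)      # random top-left corner
        j = rng.integers(W - patch + 1)
        xs.append(images[k][i:i + patch, j:j + patch])
        ys.append(masks[k][i:i + patch, j:j + patch])
    return np.stack(xs), np.stack(ys)
```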
      <p>We use pixel-wise classification metrics. The most common
one is the AUC-ROC metric. To calculate it, we consider all
possible thresholds to obtain all possible classifiers
c^T_{i,j} = [y_{i,j} &gt; T], T ∈ R.
Plotting these classifiers' parameters in TPR and FPR coordinates
yields the so-called ROC curve. The area under this curve is the
desired value. However, this metric may be uninformative when
the classes are highly unbalanced. In our case, the number
of background (non-border) pixels is much bigger than the
number of object pixels (borders). To reduce this effect, we
also calculate the AUC-PR metric, built similarly on the values
of precision and recall instead of TPR and FPR.</p>
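      <p>The threshold sweep can be sketched as follows (a simple reference implementation over flattened pixel scores; standard metric libraries provide equivalent, better-tested routines):</p>

```python
import numpy as np

def roc_curve(scores, labels):
    """FPR/TPR pairs obtained by sweeping the threshold T over all scores."""
    order = np.argsort(-scores)          # descending by confidence
    labels = labels[order].astype(float)
    tp = np.cumsum(labels)               # true positives at each threshold
    fp = np.cumsum(1.0 - labels)         # false positives at each threshold
    tpr = tp / labels.sum()
    fpr = fp / (1.0 - labels).sum()
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

def auc(x, y):
    """Area under the curve via the trapezoid rule."""
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))

scores = np.array([0.9, 0.8, 0.2, 0.1])
labels = np.array([1, 1, 0, 0])
fpr, tpr = roc_curve(scores, labels)
print(auc(fpr, tpr))  # a perfect ranking gives AUC-ROC = 1.0
```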
      <p>We performed several computational experiments to study the
dependence on the input channels: TCI, NDVI, TCI+NDVI. Also,
we ran a few experiments to check the multiclass method,
in which the neural network learns to predict parcels and their
boundaries simultaneously. In each case, the experiment was
repeated three times to estimate not only the mean value but also its
standard deviation.</p>
      <p>For comparison, the values of the metrics for a random classifier
are given. For boundaries &amp; parcels prediction, separate
boundary (b) and parcel (p) results are presented. The results are
listed in Tab. I.</p>
      <p>The presented results show that the neural network performs
significantly better than the random algorithm, which has AUC-ROC
and AUC-PR equal to 0.5 / 0.4 for the fields and
to 0.5 / 0.06 for the boundaries. The results of the multitask
training are not significantly different from the usual ones.
The quality of border detection by the AUC-PR metric is much
lower than by AUC-ROC because of the much lower share
of boundary pixels compared to the share of parcel pixels.
The results of the network look strongly correlated with the
correct answer (see Fig. 1de), which is probably the most
important thing in this task: both the metrics themselves
and the ground truth values are not very reliable, as it is
difficult to check whether a given point is a field boundary
or another visually similar structure. We conclude that
this approach is applicable to the construction of agricultural
monitoring systems, given appropriate optimization for a
particular application scenario. With a larger dataset,
there are no obstacles to obtaining good results.</p>
    </sec>
    <sec id="sec-3">
      <title>IV. CONCLUSION</title>
      <p>The paper considers the problem of determining the location
of agricultural parcels using multispectral satellite images. The
input images contain visible range imagery, NDVI vegetation
index maps, or both. The output ground truth annotation
contains parcels, boundaries, or both.</p>
      <p>The results demonstrated the applicability of the U-Net
network architecture to this task, with AUC-ROC metric values of at
least 0.7 for both parcels and boundaries. The comparison of
different variants of the training task setting showed that using
vegetation index maps may be useful for this task. However,
in the absence of infrared images, using only the image in the
visible range shows comparable results. Multiclass training
did not show any advantages.</p>
      <p>In general, the obtained results show the promise of applying
U-Net-architecture neural networks to the
tasks of large-scale automated agricultural monitoring using
freely available satellite data of medium spatial resolution
(10 m/px in the considered case of Sentinel-2). Further
development of this work may include the construction of more
relevant metrics (and, accordingly, loss functions) for the task,
as well as using more multispectral information and historical images of
the same parcels to improve the accuracy and relevance of the
recognition results.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Borzov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Guryanov</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O. I.</given-names>
            <surname>Potaturkin</surname>
          </string-name>
          , “
          <article-title>Study of the classification efficiency of difficult-to-distinguish vegetation types using hyperspectral data</article-title>
          ,”
          <source>Computer Optics</source>
          , vol.
          <volume>43</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>464</fpage>
          -
          <lpage>473</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Bibikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. L.</given-names>
            <surname>Kazanskii</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V. A.</given-names>
            <surname>Fursov</surname>
          </string-name>
          , “
          <article-title>Vegetation type recognition in hyperspectral images using a conjugacy indicator</article-title>
          ,”
          <source>Computer Optics</source>
          , vol.
          <volume>42</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>846</fpage>
          -
          <lpage>854</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Varlamova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Denisova</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Sergeev</surname>
          </string-name>
          , “
          <article-title>Earth remote sensing data processing for obtaining vegetation types maps</article-title>
          ,”
          <source>Computer Optics</source>
          , vol.
          <volume>42</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>864</fpage>
          -
          <lpage>876</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Musyoka</surname>
          </string-name>
          , “
          <article-title>Automatic delineation of small holder agricultural field boundaries using fully convolutional networks</article-title>
          ,
          <source>” Master's thesis</source>
          , University of Twente, Enschede, The Netherlands,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Alemu</surname>
          </string-name>
          , “
          <article-title>Automated farm field delineation and crop row detection from satellite images</article-title>
          ,”
          <source>Master's thesis</source>
          , University of Twente, Enschede, The Netherlands,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Denisova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Egorova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Sergeev</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Kavelenova</surname>
          </string-name>
          , “
          <article-title>Requirements for multispectral remote sensing data used for the detection of arable land colonization by tree and shrubbery vegetation</article-title>
          ,”
          <source>Computer Optics</source>
          , vol.
          <volume>43</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>846</fpage>
          -
          <lpage>856</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Drusch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Del Bello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Carlier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Colin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Fernandez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Gascon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hoersch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Isola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Laberinti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Martimort</surname>
          </string-name>
          et al., “
          <article-title>Sentinel-2: ESA's optical high-resolution mission for GMES operational services</article-title>
          ,”
          <source>Remote Sensing of Environment</source>
          , vol.
          <volume>120</volume>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>36</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Pettorelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. O.</given-names>
            <surname>Vik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mysterud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-M.</given-names>
            <surname>Gaillard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Tucker</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N. C.</given-names>
            <surname>Stenseth</surname>
          </string-name>
          , “
          <article-title>Using the satellite-derived NDVI to assess ecological responses to environmental change</article-title>
          ,”
          <source>Trends in Ecology &amp; Evolution</source>
          , vol.
          <volume>20</volume>
          , no.
          <issue>9</issue>
          , pp.
          <fpage>503</fpage>
          -
          <lpage>510</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>X.</given-names>
            <surname>Jinru</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Su</surname>
          </string-name>
          , “
          <article-title>Significant remote sensing vegetation indices: A review of developments and applications</article-title>
          ,
          <source>” Journal of Sensors</source>
          , vol.
          <volume>2017</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Boori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Choudhary</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kupriyanov</surname>
          </string-name>
          , “
          <article-title>Crop growth monitoring through Sentinel and Landsat data based NDVI time-series</article-title>
          ,”
          <source>Computer Optics</source>
          , vol.
          <volume>44</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>409</fpage>
          -
          <lpage>419</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Davidse</surname>
          </string-name>
          , “
          <article-title>Semi-automatic detection of field boundaries from high-resolution satellite imagery</article-title>
          ,”
          <source>Master's thesis</source>
          , Wageningen University, The Netherlands,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Garcia-Pedrero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gonzalo-Martín</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Lillo-Saavedra</surname>
          </string-name>
          , “
          <article-title>A machine learning approach for agricultural parcel delineation through agglomerative segmentation</article-title>
          ,”
          <source>International journal of remote sensing</source>
          , vol.
          <volume>38</volume>
          , no.
          <issue>7</issue>
          , pp.
          <fpage>1809</fpage>
          -
          <lpage>1819</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>O.</given-names>
            <surname>Ronneberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fischer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Brox</surname>
          </string-name>
          , “
          <article-title>U-net: Convolutional networks for biomedical image segmentation</article-title>
          ,”
          <source>International Conference on Medical Image Computing and Computer-Assisted Intervention</source>
          , Springer, pp.
          <fpage>234</fpage>
          -
          <lpage>241</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gkioxari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dollár</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Girshick</surname>
          </string-name>
          , “
          <article-title>Mask R-CNN</article-title>
          ,”
          <source>Proceedings of the IEEE International Conference on Computer Vision</source>
          , pp.
          <fpage>2961</fpage>
          -
          <lpage>2969</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Kingma</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Ba</surname>
          </string-name>
          , “
          <article-title>Adam: A method for stochastic optimization</article-title>
          ,”
          <source>arXiv preprint arXiv:1412.6980</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>