<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A learning based feature point detector</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>A. Verichev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Samara National Research University</institution>
          ,
          <addr-line>34 Moskovskoe Shosse, 443086, Samara</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <fpage>247</fpage>
      <lpage>252</lpage>
      <abstract>
<p>We propose a learning-based image feature point detector. Instead of giving an explicit definition of a feature point, we apply the methods of machine learning to infer it inductively from a representative training set. This allows for flexible tuning of the proposed detector to a specific problem that is described by a training set of desired responses. To increase the feature points' repeatability and robustness to various image transformations, the feature space of the learning algorithm includes raw image moments and image moment invariants. Experiments demonstrate high flexibility in tuning the detector to a specific task, acceptable repeatability of the feature points and robustness to various image transformations.</p>
      </abstract>
      <kwd-group>
        <kwd>image feature points</kwd>
        <kwd>image feature points detector</kwd>
        <kwd>image moments</kwd>
        <kwd>image moment invariants</kwd>
        <kwd>machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Proposed method</title>
      <p>The proposed learning-based feature points detector is based on the idea of transforming the detection task into a
classification task, as suggested in [13], which boils down to training the detector's classifier on a set of desired responses.</p>
      <p>Throughout, x[m, n] denotes a grayscale image, and all local quantities are computed over the 9×9 neighbourhood of a
point (m, n). The norm ‖x‖ and the local mean x̄ of the neighbourhood are defined as
x̄ = (1/81) ∑_{i=−4}^{4} ∑_{j=−4}^{4} x[m + i, n + j],
‖x‖ = √(∑_{i=−4}^{4} ∑_{j=−4}^{4} (x[m + i, n + j])²),
and the moments of the neighbourhood are defined [14] as
μ_pq = ∑_{i=−4}^{4} ∑_{j=−4}^{4} i^p ∙ j^q ∙ x[m + i, n + j].
These moments and the moment invariants built from them [15, 16] provide the features described below.</p>
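      <p>The local mean and norm above can be computed directly; the following is a minimal sketch in Python with NumPy (function and variable names are ours, and the sample image is illustrative):</p>

```python
import numpy as np

def local_mean_and_norm(image, m, n, r=4):
    """Local mean and Euclidean norm of the (2r + 1) x (2r + 1)
    neighbourhood of pixel (m, n)."""
    patch = image[m - r:m + r + 1, n - r:n + r + 1].astype(float)
    mean = patch.sum() / patch.size      # (1/81) * sum over the 9x9 window
    norm = np.sqrt((patch ** 2).sum())   # Euclidean norm of the window
    return mean, norm

img = np.arange(100, dtype=float).reshape(10, 10)
mean, norm = local_mean_and_norm(img, 4, 4)
```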
      <sec id="sec-2-1">
        <title>2.1. Feature space</title>
        <p>The first step towards constructing our detector is to define the classifier's feature space, which is the vector space ℝ^{15}. Each
pixel of an image x[m, n] is mapped to a vector in this feature space by a locally defined operator mapping I^{9×9} → ℝ^{15}, where
I = {i : 0 ≤ i &lt; 256} is the set of intensities of a grayscale image. The individual features are described below.</p>
        <p>The first two features are the standard deviation of the standardized local area, f₁, and the standard deviation divided by the
norm ‖x‖ of the local area, f₂.</p>
        <p>The set of features f_k, 1 ≤ k ≤ 15, defined by (1)-(4), together with the usual addition and scalar multiplication
operations, forms the feature vector space. The use of the first two features is motivated by their sensitivity to monotonous and
textured areas.</p>
        <p>The next four features are chosen to be the central image moments of the local image area: f_{k+3} = μ_kk, 0 ≤ k ≤ 3, with the
moments μ_pq defined as above. To induce invariance to rotation transformations, the following Hu invariant image moments
and Flusser moments are used [15, 16]:
f₁₁ = (η₃₀ − 3η₁₂)(η₃₀ + η₁₂)[(η₃₀ + η₁₂)² − 3(η₂₁ + η₀₃)²] + (3η₂₁ − η₀₃)(η₂₁ + η₀₃)[3(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²],
f₁₂ = (η₂₀ − η₀₂)[(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²] + 4η₁₁(η₃₀ + η₁₂)(η₂₁ + η₀₃),
f₁₃ = (3η₂₁ − η₀₃)(η₃₀ + η₁₂)[(η₃₀ + η₁₂)² − 3(η₂₁ + η₀₃)²] − (η₃₀ − 3η₁₂)(η₂₁ + η₀₃)[3(η₃₀ + η₁₂)² − (η₂₁ + η₀₃)²],
f₁₄ = η₁₁[(η₃₀ + η₁₂)² − (η₀₃ + η₂₁)²] − (η₂₀ − η₀₂)(η₃₀ + η₁₂)(η₀₃ + η₂₁),   (3)
and the feature
f₁₅ = √((x_c − m)² + (y_c − n)²),   (4)
where x_c = μ₁₀/μ₀₀ and y_c = μ₀₁/μ₀₀.</p>
        <p>Moments calculation is a computationally intensive task that requires a lot of operations. To reduce the number of
arithmetical operations we apply the recursive method of moments calculation based on the use of integer factorial
polynomials [16].</p>
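        <p>For illustration, the raw moments and one of the Hu invariants can be computed by plain summation as below; this sketch does not reproduce the recursive factorial-polynomial method of [16], and the names are ours (eta stands for the normalized central moments):</p>

```python
import numpy as np

def raw_moment(patch, p, q, r=4):
    """mu_pq = sum over i, j of (i ** p) * (j ** q) * x[m + i, n + j],
    with offsets i, j taken over the (2r + 1) x (2r + 1) window."""
    i = np.arange(-r, r + 1).reshape(-1, 1)   # row offsets
    j = np.arange(-r, r + 1).reshape(1, -1)   # column offsets
    return float(((i ** p) * (j ** q) * patch).sum())

def hu_f12(eta):
    """One of the listed invariants (f12), built from normalized moments."""
    a = eta[3, 0] + eta[1, 2]
    b = eta[2, 1] + eta[0, 3]
    return (eta[2, 0] - eta[0, 2]) * (a ** 2 - b ** 2) + 4 * eta[1, 1] * a * b

patch = np.ones((9, 9))
m00 = raw_moment(patch, 0, 0)   # 81 for a constant unit patch
m10 = raw_moment(patch, 1, 0)   # 0: the offsets are symmetric about zero
```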
        <p>The last feature, f₁₅, defined by (4), characterizes the misalignment of the centre of the local area and its centre of mass.</p>
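        <p>The misalignment feature can be sketched as a direct NumPy computation (the names are ours):</p>

```python
import numpy as np

def centroid_offset(patch, r=4):
    """f15: distance between the window centre and the intensity centroid,
    computed from the raw moments mu00, mu10 and mu01 (offsets are taken
    relative to the window centre)."""
    i = np.arange(-r, r + 1).reshape(-1, 1)
    j = np.arange(-r, r + 1).reshape(1, -1)
    mu00 = patch.sum()
    xc = (i * patch).sum() / mu00   # mu10 / mu00
    yc = (j * patch).sum() / mu00   # mu01 / mu00
    return float(np.hypot(xc, yc))

flat = np.ones((9, 9))     # uniform patch: centroid sits at the centre
spike = np.zeros((9, 9))
spike[6, 5] = 1.0          # single bright pixel at offset (2, 1)
```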
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Tuning the detector</title>
      </sec>
      <sec id="sec-2-3">
        <title>2.2.1. Collecting a training set</title>
        <p>Tuning the detector requires a training set that consists of the desired detector's responses. Depending on the application,
the set can be obtained in various ways:</p>
        <p> manually, involving experts of the domain;
 automatically, using well-known feature point detectors such as Harris or Canny;
 by combining the two.</p>
        <p>If there is human involvement of any kind, it is inevitable that the training set will contain so-called training noise [17].
Besides, in a typical scenario the number of feature points is small compared to the number of other points. To alleviate these
negative effects, the neighbouring points of the feature points can be considered feature points as well.</p>
        <p>Provided an application requires a high level of robustness to certain transformations, the training set can be enlarged with
so-called virtual examples [18]. To this end, every image used to form the training set is transformed according to some
transformation. Since the parameters of that transformation are known, the elements of the original image can be mapped onto
the transformed image, which makes it possible to extract feature vectors of the points of the transformed image that correspond
to the feature points of the original image. These new feature vectors are the virtual examples that convey information about the
effects the transformation has on the feature vectors.</p>
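        <p>The coordinate-mapping step can be sketched as follows; rotation about a known centre serves as the example transformation, and the function and variable names are ours:</p>

```python
import numpy as np

def map_points(points, angle_deg, centre):
    """Map feature-point coordinates of the original image onto an image
    rotated by a known angle about a known centre, so that feature vectors
    can be extracted at the mapped locations as virtual examples."""
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    c = np.asarray(centre, dtype=float)
    return (np.asarray(points, dtype=float) - c) @ rot.T + c

mapped = map_points([[1.0, 0.0]], 90.0, (0.0, 0.0))
```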
      </sec>
      <sec id="sec-2-4">
        <title>2.2.2. Training a classifier</title>
        <p>With a training set at hand we can pose and solve a supervised learning problem. Since the number of feature vectors in a
training set is typically quite large, we chose to apply a nonparametric probability density estimation approach. Let V =
{(x_j, y_j)}, 1 ≤ j ≤ N, denote the training set, where x_j is a feature vector and y_j ∈ {ω₁, ω₂} is its label; ω₁ corresponds to
feature points and ω₂ corresponds to the other points. An estimate p̂(x|ω_i) of the conditional probability density function is
then defined by a kernel estimate (5), where K is a kernel function and h is the kernel's width parameter; the prior probability
of the ith class is estimated by (6), and by the Bayes' theorem the posterior probabilities p̂(ω_i|x) are obtained (7).</p>
        <p>Define a characteristic function of a feature point g(x):
g(x) = ln(p̂(ω₁|x)) − ln(p̂(ω₂|x)),   (8)
and its filtered version
g̃(x) = { g(x), if g(x) &gt; g(z) + ε for all z ∈ U(x) ∖ {x}; 0, otherwise },   (9)
where U(x) is the set of all feature vectors from the local neighbourhood of x and ε is some threshold.</p>
        <p>From (8) and (9) we infer the decision rule:
d(x) = { ω₁, if g̃(x) &gt; λ; ω₂, otherwise },   (10)
where the threshold λ is determined by the estimated prior probabilities.</p>
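        <p>A toy sketch of this classification scheme with a Gaussian kernel (the kernel choice, the names, and the prior weighting are our illustrative assumptions, not necessarily the paper's exact setup):</p>

```python
import numpy as np

def parzen_density(x, samples, h):
    """Parzen-window estimate of a class-conditional density with a
    Gaussian kernel of width h."""
    d = samples.shape[1]
    sq = ((samples - x) ** 2).sum(axis=1)
    k = np.exp(-0.5 * sq / h ** 2)
    return k.sum() / (len(samples) * (h * np.sqrt(2.0 * np.pi)) ** d)

def characteristic(x, feature_pts, other_pts, h=1.0):
    """g(x) = ln p(omega1|x) - ln p(omega2|x); the shared evidence p(x)
    cancels, leaving prior-weighted class-conditional densities."""
    n1, n2 = len(feature_pts), len(other_pts)
    p1 = parzen_density(x, feature_pts, h) * n1 / (n1 + n2)
    p2 = parzen_density(x, other_pts, h) * n2 / (n1 + n2)
    return float(np.log(p1) - np.log(p2))

feats = np.zeros((5, 2))        # class omega1 clustered at the origin
others = np.full((5, 2), 3.0)   # class omega2 clustered at (3, 3)
g_near = characteristic(np.zeros(2), feats, others)
g_far = characteristic(np.full(2, 3.0), feats, others)
```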
        <p>In order to smooth the detector's response, the characteristic function g(x) is filtered with the local peak filter (9), which
suppresses non-maximal values in the local 3×3 neighbourhood of the point x.</p>
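        <p>A minimal sketch of such a 3×3 peak filter (plain non-maximum suppression; the names and the border handling are ours):</p>

```python
import numpy as np

def peak_filter(g, eps=1e-6):
    """Suppress non-maximal values of the characteristic function g: a
    point keeps its value only if it exceeds every 3x3 neighbour by eps;
    border pixels are left suppressed for simplicity."""
    out = np.zeros_like(g)
    rows, cols = g.shape
    for m in range(1, rows - 1):
        for n in range(1, cols - 1):
            neigh = g[m - 1:m + 2, n - 1:n + 2].copy()
            neigh[1, 1] = -np.inf           # exclude the point itself
            if g[m, n] > neigh.max() + eps:
                out[m, n] = g[m, n]
    return out

g = np.zeros((5, 5))
g[2, 2] = 1.0
filtered = peak_filter(g)
```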
        <p>To experimentally evaluate the proposed detector we built a set of images. The set contains a series of 10 overlapping images
of 6 different scenes, 60 images in total. Figure 1 shows three images of one of these scenes. Each of the 6 groups of images was
split in the ratio 8:2 to form the training set T and the test set E, respectively. We chose the Harris [6] corner detector to detect
the feature points. The training set was enlarged with virtual examples as described in section 2.2.1, and the transformations that
were applied are described in section 3.3.</p>
        <p>3.2. Evaluation of training accuracy</p>
        <p>The accuracy over a labelled set V of N feature vectors is
A(V) = (1/N) ∑_{j=1}^{N} [d(x_j) = y_j].   (11)</p>
        <p>Precision and recall are defined:
P(V) = N_TP(V) / (N_TP(V) + N_FP(V)),   (12)
R(V) = N_TP(V) / (N_TP(V) + N_FN(V)).   (13)</p>
        <p>Let V = {(x_j, y_j)}, 1 ≤ j ≤ N, be a training or test set. The primary criterion of the detector's performance on the set V is
its accuracy (11). Besides the accuracy, two more criteria are used: precision P and recall R [19]. Precision is the fraction of
relevant instances among the retrieved instances, while recall is the fraction of the retrieved relevant instances over the total
number of relevant instances in the set.</p>
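        <p>The three criteria can be computed directly from the detection counts; a small sketch (the true-negative count is needed for the accuracy, and the names are ours):</p>

```python
def detector_scores(n_tp, n_fp, n_fn, n_tn):
    """Accuracy, precision and recall from detection counts, following
    the usual definitions of these criteria."""
    accuracy = (n_tp + n_tn) / (n_tp + n_fp + n_fn + n_tn)
    precision = n_tp / (n_tp + n_fp)
    recall = n_tp / (n_tp + n_fn)
    return accuracy, precision, recall

a, p, r = detector_scores(8, 2, 2, 88)
```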
        <p>Let N_FP, N_FN and N_TP denote the numbers of false positives, false negatives and true positives, respectively; these
counts enter the definitions (11)-(13).</p>
        <p>The proposed detector was first trained on the training set. Accuracy, precision and recall were then evaluated on the
training set T and the test set E. The results are shown in table 1. Taking into account the fairly large size of the sets, the data
suggest an adequate quality of training.</p>
      </sec>
      <sec id="sec-2-5">
        <title>3.3. Repeatability evaluation of the detector</title>
        <p>As mentioned in introduction, repeatability is one of the most important properties of the feature points. Along with its
importance, repeatability allows for an objective and qualitative evaluation. Hence, we used repeatability to evaluate the
performance of the proposed detector.</p>
        <p>The repeatability evaluation proceeds as follows:
 An original image is used to find a set of feature points S.
 The original image is transformed by one of the transformations (cf. the list below).
 The transformed image is used to find a set of feature points S_t.
 Since the parameters of the transformation are known, the coordinates of the points of S can be mapped onto the
transformed image, forming a set S'.
 The sets S' and S_t are matched. Two points p ∈ S' and q ∈ S_t are considered equal if q ∈ U_ε(p), where U_ε(p) is the
ε-neighbourhood of p, ε = 2.0.</p>
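        <p>The matching step can be sketched as a greedy nearest-neighbour pairing under the distance threshold ε = 2.0 (the greedy strategy and the names are our assumptions):</p>

```python
import numpy as np

def match_points(mapped, detected, eps=2.0):
    """Greedily match mapped original points against points detected on
    the transformed image; a pair matches if its distance does not exceed
    eps. Returns the counts (n_tp, n_fp, n_fn)."""
    mapped = np.asarray(mapped, dtype=float)
    detected = np.asarray(detected, dtype=float)
    used = np.zeros(len(detected), dtype=bool)
    n_tp = 0
    for p in mapped:
        dist = np.linalg.norm(detected - p, axis=1)
        dist[used] = np.inf                 # each detection matches once
        j = int(np.argmin(dist))
        if dist[j] > eps:
            continue
        used[j] = True
        n_tp += 1
    return n_tp, len(detected) - n_tp, len(mapped) - n_tp

counts = match_points([[0.0, 0.0], [10.0, 10.0]], [[0.5, 0.5], [50.0, 50.0]])
```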
        <p>Image Processing, Geoinformation Technology and Information Security / A. Verichev</p>
        <p>As a result of the matching performed in the previous step we find three sets of points: the points found on both images, the
new points that were not found on the original image but were found on the transformed image, and the missed points that were
found on the original image but not on the transformed image. The cardinalities of these sets are, respectively, the N_TP, N_FP
and N_FN values of the proposed detector. These values are used to calculate the detector's accuracy, precision and recall.</p>
        <p>To evaluate repeatability we used the following transformations of the images:
 rotation by an angle α, −45° ≤ α ≤ 45°, in steps of 3°;
 sub-pixel shift by s, 0.25 ≤ s ≤ 0.75, in steps of 0.05;
 scaling by a factor k, 0.5 ≤ k &lt; 1.5, in steps of 0.1.</p>
        <p>The results of the repeatability evaluation of the proposed detector trained on the training set T are shown in fig. 2. The
detector's performance can be considered adequate on rotated images for −9° &lt; α &lt; 9° and on scaled images for 0.8 ≤ k ≤ 1.2.
The performance on shifted images is high for the whole range of the parameter s.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Conclusion</title>
      <p>In this paper we investigated a relatively new approach to feature point detection. Contrary to the standard approach to the
problem, we did not formulate any heuristics-based definition of the term feature point, but instead inferred it inductively using
the methods of machine learning and a representative training set. This enabled us to tune the proposed detector to the specific
problem at hand. The results of the experimental evaluation of the detector verify that such tuning is in fact possible. Moreover,
the detector showed acceptable robustness to rotation and scaling transformations, and high robustness to sub-pixel shift
transformations. This suggests a great potential of the learning-based approach to feature point detection.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgements</title>
      <p>The reported study was funded by RFBR according to the research project №17-29-03190-ofi.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Szeliski</surname>
            <given-names>R.</given-names>
          </string-name>
          <article-title>Computer Vision: Algorithms and Applications</article-title>
          . London: Springer,
          <year>2011</year>
          ; 812 p.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Denisova</surname>
            <given-names>AY</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Myasnikov</surname>
            <given-names>VV</given-names>
          </string-name>
          .
          <article-title>Anomaly detection for hyperspectral imagery</article-title>
          .
          <source>Computer Optics</source>
          <year>2014</year>
          ;
          <volume>38</volume>
          (
          <issue>2</issue>
          );
          <fpage>287</fpage>
          -
          <lpage>296</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Tuytelaars</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mikolajczyk</surname>
            <given-names>K</given-names>
          </string-name>
          .
          <article-title>Local invariant feature detectors: a survey</article-title>
          .
          <source>Foundations and Trends® in Computer Graphics and Vision</source>
          <year>2008</year>
          ;
          <volume>3</volume>
          (
          <issue>3</issue>
          ):
          <fpage>177</fpage>
          -
          <lpage>280</lpage>
          . DOI: 10.1561/0600000017.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Li</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tian</surname>
            <given-names>Q</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ding</surname>
            <given-names>X.</given-names>
          </string-name>
          <article-title>A survey of recent advances in visual feature detection</article-title>
          .
          <source>Neurocomputing</source>
          <year>2015</year>
          ;
          <volume>149</volume>
          :
          <fpage>736</fpage>
          -
          <lpage>751</lpage>
          . DOI: 10.1016/j.neucom.2014.08.003.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Förstner</surname>
            <given-names>W</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gülch</surname>
            <given-names>E.</given-names>
          </string-name>
          <article-title>A fast operator for detection and precise location of distinct points, corners and centres of circular features</article-title>
          .
          <source>Proc. ISPRS intercommission conference on fast processing of photogrammetric data</source>
          <year>1987</year>
          ;
          <fpage>281</fpage>
          -
          <lpage>305</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Harris</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stephens</surname>
            <given-names>M.</given-names>
          </string-name>
          <article-title>A combined corner and edge detector</article-title>
          .
          <source>Alvey vision conference</source>
          <year>1988</year>
          ;
          <volume>15</volume>
          (
          <issue>50</issue>
          ):
          <fpage>147</fpage>
          -
          <lpage>151</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Shi</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tomasi</surname>
            <given-names>C</given-names>
          </string-name>
          .
          <article-title>Good features to track</article-title>
          .
          <source>Proc. Intl Conf. on Comp. Vis. and Pat. Recog. (CVPR)</source>
          <year>1994</year>
          ;
          <fpage>593</fpage>
          -
          <lpage>600</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Smith</surname>
            <given-names>SM</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brady</surname>
            <given-names>JM</given-names>
          </string-name>
          .
          <article-title>SUSAN - A new approach to low level image processing</article-title>
          .
          <source>International Journal of Computer Vision</source>
          <year>1997</year>
          ;
          <volume>23</volume>
          (
          <issue>1</issue>
          ):
          <fpage>45</fpage>
          -
          <lpage>78</lpage>
          . DOI: 10.1023/A:1007963824710.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Rosten</surname>
            <given-names>E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Drummond</surname>
            <given-names>T.</given-names>
          </string-name>
          <article-title>Machine learning for high-speed corner detection</article-title>
          .
          <source>European Conference on Computer Vision</source>
          <year>2006</year>
          ;
          <fpage>430</fpage>
          -
          <lpage>443</lpage>
          . DOI: 10.1007/11744023_34.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Mair</surname>
            <given-names>E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hager</surname>
            <given-names>GD</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burschka</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suppa</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hirzinger</surname>
            <given-names>G</given-names>
          </string-name>
          .
          <article-title>Adaptive and generic corner detection based on the accelerated segment test</article-title>
          .
          <source>European conference on Computer Vision</source>
          <year>2010</year>
          ;
          <fpage>183</fpage>
          -
          <lpage>196</lpage>
          . DOI: 10.1007/978-3-642-15552-9_14.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Zhang</surname>
            <given-names>X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            <given-names>HA</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            <given-names>WB</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ling</surname>
            <given-names>X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lovell</surname>
            <given-names>BC</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            <given-names>D</given-names>
          </string-name>
          .
          <article-title>Corner detection based on gradient correlation matrices of planar curves</article-title>
          .
          <source>Pattern Recognition</source>
          <year>2010</year>
          ;
          <volume>43</volume>
          (
          <issue>4</issue>
          ):
          <fpage>1207</fpage>
          -
          <lpage>1223</lpage>
          . DOI: 10.1016/j.patcog.2009.10.017.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Shui</surname>
            <given-names>PL</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>WC</given-names>
          </string-name>
          .
          <article-title>Corner detection and classification using anisotropic directional derivative representations</article-title>
          .
          <source>IEEE Transactions on Image Processing</source>
          <year>2013</year>
          ;
          <volume>22</volume>
          (
          <issue>8</issue>
          ):
          <fpage>3204</fpage>
          -
          <lpage>3218</lpage>
          . DOI: 10.1109/TIP.2013.2259834.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Chernov</surname>
            <given-names>AV</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Myasnikov</surname>
            <given-names>VV</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sergeyev</surname>
            <given-names>VV</given-names>
          </string-name>
          .
          <article-title>Fast Method for Local Image Processing and Analysis</article-title>
          .
          <source>Pattern Recognition and Image Analysis</source>
          <year>1999</year>
          ;
          <volume>9</volume>
          (
          <issue>2</issue>
          ):
          <fpage>237</fpage>
          -
          <lpage>238</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Flusser</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suk</surname>
            <given-names>T.</given-names>
          </string-name>
          <article-title>Pattern recognition by affine moment invariants</article-title>
          .
          <source>Pattern Recognition</source>
          <year>1993</year>
          ;
          <volume>26</volume>
          (
          <issue>1</issue>
          ):
          <fpage>167</fpage>
          -
          <lpage>174</lpage>
          . DOI: 10.1016/0031-3203(93)90098-H.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Hu</surname>
            <given-names>MK</given-names>
          </string-name>
          .
          <article-title>Visual pattern recognition by moment invariants</article-title>
          .
          <source>IRE transactions on information theory</source>
          <year>1962</year>
          ;
          <volume>8</volume>
          (
          <issue>2</issue>
          ):
          <fpage>179</fpage>
          -
          <lpage>187</lpage>
          . DOI: 10.1109/TIT.1962.1057692.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Myasnikov</surname>
            <given-names>VV</given-names>
          </string-name>
          .
          <article-title>Constructing efficient linear local features in image processing and analysis problems</article-title>
          .
          <source>Automation and Remote Control</source>
          <year>2010</year>
          ;
          <volume>72</volume>
          (
          <issue>3</issue>
          ):
          <fpage>514</fpage>
          -
          <lpage>527</lpage>
          . DOI: 10.1134/S0005117910030124.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Theodoridis</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Machine learning: a Bayesian and optimization perspective</article-title>
          . San Diego: Academic Press,
          <year>2015</year>
          ; 1062 p.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Alpaydin</surname>
            <given-names>E.</given-names>
          </string-name>
          <article-title>Introduction to machine learning</article-title>
          . Cambridge: MIT press,
          <year>2014</year>
          ; 584 p.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Hastie</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tibshirani</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Friedman</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>Elements of statistical learning: data mining, inference, and prediction</article-title>
          . London: Springer,
          <year>2011</year>
          ; 745 p.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>