<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Unsupervised Palm Vein Image Segmentation*</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>E. Safronova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>E. Pavelyeva</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <abstract>
<p>In this article a new hybrid algorithm for palm vein image segmentation using a convolutional neural network and principal curvatures is proposed. After palm vein image preprocessing, the vein structure is detected using an unsupervised learning approach based on the W-Net architecture, which ties together into a single autoencoder two fully convolutional neural network architectures, each similar to the U-Net. Then the segmentation results are improved using the principal curvatures technique. Some vein points with the highest maximum principal curvature values are selected, and the other vein points are found by moving from the starting points along the direction of minimum principal curvature. To obtain the final vein image segmentation, the intersection of the principal curvatures-based and neural network-based segmentations is taken. The evaluation of the proposed unsupervised image segmentation method based on palm vein recognition results using multilobe differential filters is given. Test results using the CASIA multi-spectral palmprint image database show the effectiveness of the proposed segmentation approach.</p>
      </abstract>
      <kwd-group>
        <kwd>Biometrics</kwd>
        <kwd>Image Segmentation</kwd>
        <kwd>Palm Vein Recognition</kwd>
        <kwd>Unsupervised Learning</kwd>
        <kwd>Principal Curvatures</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Nowadays information security plays a crucial role in human life and, as it has turned out, accustomed keys and passwords are not reliable enough. Instead, biometric characteristics, which uniquely identify a person within an entire population based on intrinsic physical or behavioral traits [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], provide stable and safe data protection. Biometrics recognizes individuals based on these characteristics.
      </p>
      <p>
        One of the most advanced and progressive personal identification technologies is palm vein recognition. Veins are usually not visible to others, which provides a low risk of forgery or theft. Among its other important advantages, vein patterns are quite unique to their owners, image acquisition does not require physical contact, and the system can be made compact. Deoxygenated hemoglobin in venous blood absorbs near-infrared light, so an infrared camera captures images containing veins.
      </p>
      <p>
        A palm vein recognition algorithm consists of several steps. Firstly, the region of interest (ROI) is extracted and segmentation of the ROI image into two classes is performed: vein points are marked in white, while the other points are marked in black. The second step, feature vector extraction, represents the main difference between existing approaches. Since vein recognition is a relatively young field of study, some feature extraction methods can be derived from other biometric recognition algorithms based on statistical information [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], image key points [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ], subspace-based methods [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], phase-based methods [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ], etc. Some approaches were developed specifically for vein recognition [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The last algorithm step, image matching, depends on the feature vector type: at this step the distance between palm vein images is calculated. Much recent work has focused on employing deep convolutional neural networks (CNN) in biometrics, and deep learning methods can be applied to any step of a palm vein recognition algorithm [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13">10, 11, 12, 13</xref>
        ].
      </p>
      <p>
        In this paper we propose a hybrid approach based on unsupervised machine learning and mathematical methods to obtain good vein segmentation. Unsupervised image segmentation is one of the major challenges in computer vision and has been deeply researched. The range of well-known techniques for solving this problem includes normalized cuts [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], Markov random field-based methods [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], CNN-based approaches [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], etc. However, the results of applying these methods may include inaccuracies due to the specific features of a technique and the lack of correct ground truth, so mathematical methods should control the results of the CNN. In this paper we propose a hybrid segmentation method combining two approaches: one based on a CNN and one based on principal curvatures (Fig. 1).
      </p>
      <p>
        The rest of this paper is organized as follows. In Section 2 the palm vein image preprocessing and ROI extraction algorithms are described. The vein structure extraction is described in Section 3, where Subsections 3.1 and 3.2 present the principal curvatures and CNN-based segmentation algorithms and the hybrid approach is described in Subsection 3.3. The evaluation of the proposed unsupervised image segmentation method based on multilobe differential filters for palm vein feature extraction is described in Section 4. The experimental results for images from the CASIA multi-spectral palmprint image database [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] are given in Section 5. Finally, Section 6 concludes this paper.
      </p>
    </sec>
    <sec id="sec-8">
      <sec id="sec-8-1">
        <title>Palm vein image preprocessing</title>
        <p>The proposed palm vein region of interest (ROI) detection and enhancement scheme is illustrated in Fig. 2.</p>
        <p>
          First, hand boundary is detected by OTSU binarization algorithm [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] and points
between the fingers are found as the points where a local minimum of the Euclidean distance between the palm center and the points on the hand contour is reached. The points between the index and middle fingers, P1, and between the ring and little fingers, P2, can be taken as landmarks for the extraction of a square ROI (Fig. 2 d) [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. To eliminate the influence of palm rotation, the image is rotated by the angle θ between the line P1P2 and the horizontal line. To reduce the non-uniform illumination appearing in palm vein images, the background is subtracted and the histogram is stretched (Fig. 2 f). To emphasize the vein structure, the contrast-limited adaptive histogram equalization (CLAHE)
technique [
          <xref ref-type="bibr" rid="ref20">20</xref>
] is used (Fig. 2 g). After contrast enhancement all image details, including noise and glares, become sharper. In order to smooth the undesirable details, the non-local means (NLM) algorithm [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] is used to reduce noise (Fig. 2 h). NLM also smooths the veins a little, so CLAHE is applied again to obtain distinguishable veins (Fig. 2 i). Fig. 2 shows the ROI of the palm vein image and the results of the preprocessing algorithm. After preprocessing the veins become sharper and more distinguishable [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ].
        </p>
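<p>Two of the steps above, OTSU binarization and histogram stretching, are simple enough to sketch directly. The following is an illustrative NumPy sketch (our own, not the authors' implementation; the function names and test values are ours). For the remaining enhancement steps, library routines such as OpenCV's CLAHE and non-local means implementations would typically be used.</p>

```python
import numpy as np

def otsu_threshold(img):
    """Return the OTSU threshold of an 8-bit grayscale image.

    Exhaustively searches for the threshold that maximizes the
    between-class variance of the two resulting pixel classes.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                   # class-0 pixel counts
    cum_m = np.cumsum(hist * np.arange(256))  # class-0 intensity mass
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = cum_w[t - 1]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_m[t - 1] / w0
        m1 = (cum_m[-1] - cum_m[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2        # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def stretch_histogram(img):
    """Linearly stretch intensities to the full [0, 255] range."""
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img)
    return ((img.astype(float) - lo) * 255.0 / (hi - lo)).astype(np.uint8)
```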
      </sec>
      <sec id="sec-8-2">
        <title>Vein structure extraction</title>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>Principal curvatures</title>
      <p>
        The next step is the vein structure extraction. Consider the image as a surface in a three-dimensional space, where the brightness value of a pixel is the z-coordinate. We extract the vein structure using the principal curvatures method [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
    </sec>
    <sec id="sec-10">
      <p>
        Let I(x, y) denote the image intensity at the pixel position (x, y) and ∇I(x, y) be the image gradient vector. Then the normalized gradient after a hard thresholding is defined as
      </p>
      <p>
        g(x, y) = ∇I(x, y) / ‖∇I(x, y)‖ if ‖∇I(x, y)‖ ≥ γ, and g(x, y) = 0 if ‖∇I(x, y)‖ &lt; γ, (1)
      </p>
      <p>
        where γ is a threshold level; in the experiments we use γ = 4. The normalized gradient field contains noisy components, so we smooth it with a Gaussian function G(x, y):
      </p>
      <p>
        h(x, y) = (h_x(x, y), h_y(x, y)) = (g ∗ G)(x, y). (2)
      </p>
      <p>
        The local shape characteristics of the image at a point (x, y) can be described by the Hessian matrix of the smoothed field: H(x, y) = [∂h_x/∂x, ∂h_x/∂y; ∂h_y/∂x, ∂h_y/∂y]. (3)
      </p>
      <p>
        Let λ_1, λ_2 be the eigenvalues and v_1, v_2 be the corresponding eigenvectors of H(x, y), |λ_1| &gt; |λ_2|. Then the two principal directions, the directions of the maximum and minimum curvatures, are determined by the two eigenvectors v_1 and v_2. Consequently, the two eigenvalues λ_1, λ_2 represent the principal curvatures (the curvatures along the principal directions) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Tubular-shaped regions have a maximum principal curvature λ_1 higher than other regions; the vector v_1 is directed across the tubular direction and the vector v_2 along it [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
    </sec>
    <sec id="sec-11">
      <p>In order to catch veins of different widths, consider a set of scales σ for the Gaussian function: σ_0, …, σ_{N−1}, where N = 10, σ_i = σ_0 · (⁴√2)^i, σ_0 = 2, i = 0, 1, …, 9. For each value of σ the Hessian matrix is constructed, and at each point the maximum positive eigenvalue λ_1 and the eigenvector v_2 corresponding to λ_2 are calculated. Then, at each point of the image, the largest value of λ_1 over all σ and the corresponding vector v_2 are taken.</p>
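<p>The multi-scale procedure above can be sketched in a few lines of Python. This is our illustrative reconstruction, not the authors' code: the hard-thresholded normalized gradient, Gaussian smoothing of the gradient field at each scale, a Hessian-like matrix of the smoothed field, and the per-pixel maximum eigenvalue over all scales; SciPy's gaussian_filter is assumed for the smoothing.</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter

GAMMA = 4.0  # gradient threshold gamma from the text

def max_principal_curvature(img, n_scales=10, sigma0=2.0):
    """Largest maximum principal curvature over all scales, per pixel."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # hard-thresholded, normalized gradient field
    scale = np.where(mag >= GAMMA, 1.0 / np.maximum(mag, 1e-12), 0.0)
    gx, gy = gx * scale, gy * scale
    best = np.full(img.shape, -np.inf)
    for i in range(n_scales):
        sigma = sigma0 * (2.0 ** 0.25) ** i   # sigma_i = sigma0 * (2^(1/4))^i
        hx = gaussian_filter(gx, sigma)
        hy = gaussian_filter(gy, sigma)
        # Hessian-like matrix of the smoothed gradient field
        hxy, hxx = np.gradient(hx)            # d(hx)/dy, d(hx)/dx
        hyy, hyx = np.gradient(hy)            # d(hy)/dy, d(hy)/dx
        H = np.stack([np.stack([hxx, hxy], -1),
                      np.stack([hyx, hyy], -1)], -2)
        lam1 = np.linalg.eigvals(H).real.max(axis=-1)
        best = np.maximum(best, lam1)         # best response over scales
    return best
```

Pixels with the largest values of the returned map would serve as the starting points that certainly belong to veins, from which the remaining vein points are traced along the minimum-curvature direction v_2, as described in the text.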
    </sec>
    <sec id="sec-12">
      <p>
        We select the points with the highest maximum principal curvature values as points that certainly belong to veins. The other vein points can be found [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] from these starting points by moving along the direction of the vector v_2 by |λ_1|. The results of this approach are shown in Fig. 3.
      </p>
      <p>
        As we do not have the ground truth for the task of ROI segmentation, an unsupervised method is required, so an approach based on the W-Net architecture [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] is proposed. The authors of W-Net present an architecture which ties two fully convolutional network (FCN) architectures, each similar to the U-Net [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], together into a single autoencoder (Fig. 4). The first FCN encodes an input image into a K-way soft segmentation, U_Enc: ℝ^(n×n×3) → ℝ^(n×n×K), where n×n denotes the size of the input image and p_{u,k} = P(u ∈ S_k) ∈ [0, 1] measures the probability of pixel u belonging to class k (S_k is the set of pixels in segment k). The second FCN, the decoder, reverses this process, going from the segmentation layer back to a reconstructed image: U_Dec: ℝ^(n×n×K) → ℝ^(n×n×3) (Fig. 4).
      </p>
      <p>
        Both the reconstruction error of the autoencoder and a soft normalized cut loss function on the encoding layer are used during training. The reconstruction loss is standard for training an encoder-decoder architecture and can be defined as J_reconstr = ‖X − U_Dec(U_Enc(X))‖²₂, where X is the input image. The normalized cut over K segments is
      </p>
      <p>
        Ncut_K(V) = Σ_{k=1}^{K} cut(S_k, V − S_k) / assoc(S_k, V), (4)
      </p>
      <p>
        where S_k is the set of pixels in segment k, V is the set of all pixels, and w measures the weight between two pixels.
      </p>
      <p>
        However, since the argmax function needed to obtain the hard segments S_k is non-differentiable, it is impossible to calculate the corresponding gradient during backpropagation. Instead, it is proposed to use a soft version of the Ncut loss which is differentiable [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]:
      </p>
      <p>
        J_soft-Ncut(V, K) = K − Σ_{k=1}^{K} [ Σ_{u∈V} Σ_{v∈V} w(u, v) p(u = S_k) p(v = S_k) ] / [ Σ_{u∈V} p(u = S_k) Σ_{t∈V} w(u, t) ], (5)
      </p>
      <p>
        where p(u = S_k) measures the probability of node u belonging to class k, which is directly computed by the encoder. The weight matrix W for J_soft-Ncut is defined as
      </p>
      <p>
        w_{i,j} = exp(−‖F(i) − F(j)‖²₂ / σ_I²) · exp(−‖X(i) − X(j)‖²₂ / σ_X²) if ‖X(i) − X(j)‖₂ &lt; r, and w_{i,j} = 0 otherwise, (6)
      </p>
      <p>
        where X(i) and F(i) are the spatial location and pixel value of node i, respectively. Since the size of our ROI images is 128×128, which is smaller than in the original work [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], the depth of the W-Net was decreased in our experiments, as shown in Fig. 5. We use U_Enc: ℝ^(128×128×1) → ℝ^(128×128×K) and U_Dec: ℝ^(128×128×K) → ℝ^(128×128×1).
      </p>
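<p>The soft N-cut loss and the weight matrix can be written down directly for a small image. The following is a brute-force NumPy sketch of our own (the parameter values for σ_I, σ_X and r are illustrative, not the settings used in the paper); a practical implementation would restrict w to a sparse spatial neighborhood for efficiency.</p>

```python
import numpy as np

def weight_matrix(img, sigma_i=10.0, sigma_x=4.0, r=5.0):
    """Dense pixel-affinity matrix w(u, v), brute force over all pairs."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    val = img.ravel().astype(float)
    d2_pos = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    d2_val = (val[:, None] - val[None, :]) ** 2
    W = np.exp(-d2_val / sigma_i ** 2) * np.exp(-d2_pos / sigma_x ** 2)
    W[d2_pos >= r ** 2] = 0.0   # keep only spatially close pairs
    return W

def soft_ncut(W, P):
    """Soft N-cut loss for soft assignments.

    P has shape (n_pixels, K): P[u, k] is the encoder's probability
    that pixel u belongs to class k.
    """
    K = P.shape[1]
    d = W.sum(axis=1)                              # degree of every node
    assoc_kk = np.einsum('uk,uv,vk->k', P, W, P)   # numerator per class
    assoc_kv = P.T @ d                             # denominator per class
    return K - (assoc_kk / assoc_kv).sum()
```

A crisp segmentation that follows the image structure gives a loss near zero, while a non-informative uniform assignment gives a larger value, which is what drives the encoder during training.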
      <sec id="sec-12-1">
        <p>As vein images have several semantic classes, such as veins of different intensities, background, and skin wrinkles, the neural network was applied for overclustering, K = 16. The training dataset contains 120 images. Fig. 6 shows the results of this approach: the first column shows the input images (Fig. 6 a), the second illustrates their reconstruction (Fig. 6 b), and the third presents the result of the overclustering (Fig. 6 c). After unification of the classes corresponding to veins, the obtained vein image binarization is shown in the fourth column (Fig. 6 d). To obtain the final vein image segmentation, the intersection of the principal curvatures-based and CNN-based vein segmentations is taken (Fig. 7).</p>
        <p>
          For palm vein feature extraction we use multilobe differential filters (MLDF) [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] that highlight vein branch points (Fig. 8), and the normalized root-mean-square error for feature map matching [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. Mathematically, the MLDFs are given as follows:
        </p>
        <p>
          MLDF = C_p Σ_{i=1}^{N_p} (1 / (√(2π) σ_{p,i})) exp(−(x − μ_{p,i})² / (2σ_{p,i}²)) − C_n Σ_{j=1}^{N_n} (1 / (√(2π) σ_{n,j})) exp(−(x − μ_{n,j})² / (2σ_{n,j}²)),
        </p>
        <p>
          where the variables μ and σ denote the central positions and the scales of the 2D Gaussian filters respectively, N_p denotes the number of positive lobes, and N_n denotes the number of negative lobes. The constant coefficients C_p and C_n are used to ensure a zero sum of the MLDF.
        </p>
      </sec>
    </sec>
    <sec id="sec-13">
      <p>
        We take the convolutions of the ROI images at the vein points obtained after vein image segmentation with the proposed MLDF kernels to obtain the feature maps of the vein images. In order to provide matching with slight translation and rotation invariance, the normalized root-mean-square error (NRMSE) [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] is proposed for feature map matching [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
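<p>A minimal sketch of an MLDF kernel follows (our illustration; the lobe positions and scales are hypothetical, not the parameters used in the paper). The two constants are chosen so that the positive and the negative parts each sum to one, which makes the whole kernel sum exactly to zero; for example, several positive lobes placed around a central negative lobe respond to branch-like structures.</p>

```python
import numpy as np

def gaussian_lobe(shape, center, sigma):
    """2D Gaussian lobe with the 1/(sqrt(2*pi)*sigma) normalization."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

def mldf_kernel(shape, pos_lobes, neg_lobes):
    """Multilobe differential filter: C_p * sum(pos) - C_n * sum(neg).

    pos_lobes / neg_lobes are lists of (center, sigma) pairs; the
    constants C_p, C_n rescale each part so the kernel sums to zero.
    """
    pos = sum(gaussian_lobe(shape, c, s) for c, s in pos_lobes)
    neg = sum(gaussian_lobe(shape, c, s) for c, s in neg_lobes)
    c_p, c_n = 1.0 / pos.sum(), 1.0 / neg.sum()   # zero-sum constraint
    return c_p * pos - c_n * neg
```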
    </sec>
    <sec id="sec-14">
      <p>Given the intra- and interclass vein matching results, the recognition performance is measured by the following indicators: the distribution of genuine and impostor scores, the False Acceptance Rate (FAR), the False Rejection Rate (FRR), and the Equal Error Rate (EER), the cross-over error rate at which FAR equals FRR. A lower EER means higher accuracy of a biometric matcher.</p>
    </sec>
    <sec id="sec-15">
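<p>Given arrays of genuine and impostor matching distances, the EER can be estimated with a simple threshold sweep. The following is a NumPy sketch of our own (assuming lower distance means a better match, as with NRMSE):</p>

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the EER by sweeping the decision threshold.

    `genuine` and `impostor` are matching distances; a pair is accepted
    when its distance does not exceed the threshold.
    """
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    best_gap, best_rate = np.inf, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(t >= impostor)   # impostors wrongly accepted
        frr = np.mean(genuine > t)     # genuine pairs wrongly rejected
        gap = abs(far - frr)
        if best_gap > gap:             # closest FAR/FRR crossing so far
            best_gap, best_rate = gap, (far + frr) / 2.0
    return best_rate
```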
      <sec id="sec-15-1">
        <title>Experimental results</title>
        <p>
          Experimental results using CASIA Multi-Spectral Palmprint Image Database [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] are presented. The database contains 7200 palm images captured from 100 different people using a self-designed multiple spectral imaging device. Each sample contains six palm images which are captured at the same time under six different electromagnetic spectra. Each hand of each person in the database is represented by six images at any one wavelength. In our study the images from the CASIA database acquired at 850 nm are used.
        </p>
      </sec>
    </sec>
    <sec id="sec-16">
      <p>
        The CNN model was implemented in the PyTorch [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ] framework and trained for 100 epochs in Google Colaboratory with a batch size of 16, using the Adam optimizer [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] with a learning rate of 0.001. In order to train the W-Net we randomly selected 20 hands from the dataset and took all six corresponding images, so we got 120 images in the training set.
      </p>
      <p>To test the proposed hybrid segmentation method, the recognition results using a part of the CASIA database are presented in Fig. 9. The recognition results after image segmentation with principal curvatures and without the W-Net based CNN (Fig. 3) are shown in Fig. 9 a. The recognition results after image segmentation with the W-Net based CNN and without principal curvatures (Fig. 6) are shown in Fig. 9 b. The recognition results after the proposed hybrid segmentation (Fig. 7) are shown in Fig. 9 c.</p>
    </sec>
    <sec id="sec-17">
      <title>Conclusion</title>
      <p>In this article a new palm vein image segmentation method based on principal curvatures and an unsupervised convolutional neural network is proposed. It is shown that the method based on principal curvatures improves the segmentation results obtained by the CNN.</p>
    </sec>
    <sec id="sec-18">
      <p>Experimental results using the CASIA multi-spectral palmprint image database are presented.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Jain</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bolle</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pankanti</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Biometrics: personal identification in networked society</article-title>
          , Vol.
          <volume>479</volume>
          . Springer Science &amp; Business Media
          (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Rosdi</surname>
            ,
            <given-names>B. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shing</surname>
            ,
            <given-names>C. W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suandi</surname>
            ,
            <given-names>S. A.</given-names>
          </string-name>
          :
          <article-title>Finger vein recognition using local line binary pattern</article-title>
          .
          <source>Sensors</source>
          <volume>11</volume>
          (
          <issue>12</issue>
          ),
          <fpage>11357</fpage>
          -
          <lpage>11371</lpage>
          (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Matsuda</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miura</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nagasaka</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kiyomizu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miyatake</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Finger-vein authentication based on deformation-tolerant feature-point matching</article-title>
          .
          <source>Machine Vision and Applications</source>
          <volume>27</volume>
          (
          <issue>2</issue>
          ),
          <fpage>237</fpage>
          -
          <lpage>250</lpage>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Protsenko</surname>
            ,
            <given-names>M. А.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pavelyeva</surname>
            ,
            <given-names>E. A.</given-names>
          </string-name>
          :
          <article-title>Iris image key points descriptors based on phase congruency</article-title>
          . ISPRS - International
          <source>Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences</source>
          <volume>42</volume>
          (
          <issue>2</issue>
          /W12),
          <fpage>167</fpage>
          -
          <lpage>171</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leedham</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cho</surname>
          </string-name>
          , D. S. Y.:
          <article-title>Minutiae feature analysis for infrared hand vein pattern biometrics</article-title>
          .
          <source>Pattern recognition 41(3)</source>
          ,
          <fpage>920</fpage>
          -
          <lpage>929</lpage>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>J. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
          </string-name>
          , C. T.:
          <article-title>Finger-vein pattern identification using principal component analysis and the neural network technique</article-title>
          .
          <source>Expert Systems with Applications</source>
          <volume>38</volume>
          (
          <issue>5</issue>
          ),
          <fpage>5423</fpage>
          -
          <lpage>5427</lpage>
          (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. Han, W. Y.,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          :
          <article-title>Palm vein recognition using adaptive Gabor filter</article-title>
          .
          <source>Expert Systems with Applications</source>
          <volume>39</volume>
          (
          <issue>18</issue>
          ),
          <fpage>13225</fpage>
          -
          <lpage>13234</lpage>
          (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Pavelyeva</surname>
            ,
            <given-names>E. A.</given-names>
          </string-name>
          :
          <article-title>Image processing and analysis based on the use of phase information</article-title>
          .
          <source>Computer Optics</source>
          <volume>42</volume>
          (
          <issue>6</issue>
          ),
          <fpage>1022</fpage>
          -
          <lpage>1034</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Choi</surname>
            ,
            <given-names>J. H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Song</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>S. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
          </string-name>
          , H. T.:
          <article-title>Finger vein extraction using gradient normalization and principal curvature</article-title>
          .
          <source>In: Image Processing: Machine Vision Applications II</source>
          , pp.
          <fpage>725111</fpage>
          .
          <source>International Society for Optics and Photonics</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Jha</surname>
            ,
            <given-names>R. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thapar</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patil</surname>
            ,
            <given-names>S. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nigam</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Ubsegnet: Unified biometric region of interest segmentation network</article-title>
          .
          <source>In: 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR)</source>
          , pp.
          <fpage>923</fpage>
          -
          <lpage>928</lpage>
          . IEEE (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Lefkovits</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lefkovits</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Szilágyi</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Applications of different CNN architectures for palm vein identification</article-title>
          .
          <source>In: International Conference on Modeling Decisions for Artificial Intelligence</source>
          , pp.
          <fpage>295</fpage>
          -
          <lpage>306</lpage>
          . Springer, Cham (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Thapar</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jaswal</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nigam</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kanhangad</surname>
          </string-name>
          , V.:
          <article-title>PVSNet: Palm Vein Authentication Siamese Network Trained using Triplet Loss and Adaptive Hard Mining by Learning Enforced Domain Specific Features</article-title>
          .
          <source>In: 2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA)</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . IEEE (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Minutiae-Based Weighting Aggregation of Deep Convolutional Features for Vein Recognition</article-title>
          .
          <source>IEEE Access 6</source>
          ,
          <fpage>61640</fpage>
          -
          <lpage>61650</lpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Shi</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malik</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Normalized cuts and image segmentation</article-title>
          .
          <source>IEEE Transactions on pattern analysis and machine intelligence</source>
          <volume>22</volume>
          (
          <issue>8</issue>
          ),
          <fpage>888</fpage>
          -
          <lpage>905</lpage>
          (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brady</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm</article-title>
          .
          <source>IEEE transactions on medical imaging</source>
          <volume>20</volume>
          (
          <issue>1</issue>
          )
          ,
          <fpage>45</fpage>
          -
          <lpage>57</lpage>
          (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Xia</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kulis</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>W-Net: A deep model for fully unsupervised image segmentation</article-title>
          .
          <source>arXiv preprint arXiv:1711.08506</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <source>CASIA Multi-Spectral Palmprint Image Database</source>
          , http://biometrics.idealtest.org/.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Otsu</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>A threshold selection method from gray-level histograms</article-title>
          .
          <source>IEEE transactions on systems, man, and cybernetics</source>
          <volume>9</volume>
          (
          <issue>1</issue>
          ),
          <fpage>62</fpage>
          -
          <lpage>66</lpage>
          (
          <year>1979</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>C. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chuang</surname>
            ,
            <given-names>T. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fan</surname>
            ,
            <given-names>K. C.</given-names>
          </string-name>
          :
          <article-title>Palmprint verification using hierarchical decomposition</article-title>
          .
          <source>Pattern Recognition</source>
          <volume>38</volume>
          (
          <issue>12</issue>
          ),
          <fpage>2639</fpage>
          -
          <lpage>2652</lpage>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Zuiderveld</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Contrast limited adaptive histogram equalization</article-title>
          .
          <source>Graphics gems</source>
          ,
          <fpage>474</fpage>
          -
          <lpage>485</lpage>
          (
          <year>1994</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Buades</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coll</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morel</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          :
          <article-title>A non-local algorithm for image denoising</article-title>
          .
          <source>In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)</source>
          , vol.
          <volume>2</volume>
          , pp.
          <fpage>60</fpage>
          -
          <lpage>65</lpage>
          . IEEE (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Safronova</surname>
            ,
            <given-names>E. I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pavelyeva</surname>
            ,
            <given-names>E. A.</given-names>
          </string-name>
          :
          <article-title>Palm Vein Recognition Algorithm using Multilobe Differential Filters</article-title>
          .
          <source>In: Proceedings of 29-th International Conference on Computer Graphics and Vision GraphiCon</source>
          , vol.
          <volume>1</volume>
          , pp.
          <fpage>117</fpage>
          -
          <lpage>121</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Renault</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Desvignes</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Revenu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>3D curves tracking and its application to cortical sulci detection</article-title>
          .
          <source>In: Proceedings 2000 International Conference on Image Processing (Cat. No. 00CH37101)</source>
          , vol.
          <volume>2</volume>
          , pp.
          <fpage>491</fpage>
          -
          <lpage>494</lpage>
          . IEEE (
          <year>2000</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Ronneberger</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fischer</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brox</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>U-net: Convolutional networks for biomedical image segmentation</article-title>
          .
          <source>In: International Conference on Medical image computing and computer-assisted intervention</source>
          , pp.
          <fpage>234</fpage>
          -
          <lpage>241</lpage>
          . Springer, Cham (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Ordinal measures for iris recognition</article-title>
          .
          <source>IEEE Transactions on pattern analysis and machine intelligence</source>
          <volume>31</volume>
          (
          <issue>12</issue>
          ),
          <fpage>2211</fpage>
          -
          <lpage>2226</lpage>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Fienup</surname>
            ,
            <given-names>J. R.</given-names>
          </string-name>
          :
          <article-title>Invariant error metrics for image reconstruction</article-title>
          .
          <source>Applied optics</source>
          <volume>36</volume>
          (
          <issue>32</issue>
          ),
          <fpage>8352</fpage>
          -
          <lpage>8357</lpage>
          (
          <year>1997</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Paszke</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          et al.:
          <article-title>Automatic differentiation in PyTorch</article-title>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Kingma</surname>
            ,
            <given-names>D. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ba</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Adam: A method for stochastic optimization</article-title>
          .
          <source>arXiv preprint arXiv:1412.6980</source>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>