<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Workshop of IT-professionals on Artificial Intelligence, October</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Fingerprint Descriptor Model Utilizing Euclidean Minutiae Features</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yurii Pohuliaiev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kirill Smelyakov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>14 Nauky Ave, Kharkiv, 61166</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>1</volume>
      <fpage>5</fpage>
      <lpage>17</lpage>
      <abstract>
<p>This paper introduces a new way to describe fingerprint minutiae using basic geometric properties to improve biometric identification. The method concentrates on points where fingerprint ridges end or split, addressing limitations of older methods such as SIFT and HOG while avoiding the computational load of deep learning. The descriptor uses distances and angles around the center of mass of the minutiae, making it resistant to changes in position, rotation, and scale. The goals include checking how stable the minutiae remain, creating a geometry-based model, and testing it on standard data. By combining mathematical derivations with practical tuning, the model detects shifts 92.85% of the time, which is helpful for aligning fingerprints, but it needs additional work on negative matches. This simple, non-machine-learning method can be readily extended to different biometric applications.</p>
      </abstract>
      <kwd-group>
<kwd>fingerprint</kwd>
        <kwd>minutiae</kwd>
        <kwd>descriptor</kwd>
        <kwd>biometrics</kwd>
        <kwd>Euclidean features</kwd>
<kwd>affine transformations</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <title>1.1. Motivation</title>
        <p>
          Fingerprint features, such as ridge endings and bifurcations, are identity markers because they are unique and stable [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Current systems suffer from dependence on image quality and alignment, especially the touch-free systems that became popular after COVID [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Older methods convert fingerprint details into vector forms, losing the spatial arrangement of the details. Machine learning can reach high accuracy (e.g., 97% on benchmark datasets) but demands computation and can be biased [3, 4].
        </p>
        <p>
          This paper introduces a fingerprint descriptor that uses Euclidean geometry to locate alignment shifts quickly. By using distances and angles between point pairs, referenced to the center of mass of the minutiae, the method remains invariant under shifts, rotations, and scaling, avoiding reliance on machine learning [5]. This solution supports alignment in resource-limited settings, such as phones, offering a flexible alternative to neural networks [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. It finds shifts as a step that prepares fingerprints for matching, rather than performing full identification, and it facilitates matching fingerprints captured by different scanners and under different conditions.
        </p>
      </sec>
      <sec id="sec-1-2">
        <title>1.2. State of the Art</title>
        <p>Fingerprint recognition involves analyzing minutiae, applying local descriptors, and using deep learning. Local descriptor methods, such as Orientation Local Binary Patterns (OLBP), extract ridge and valley details successfully but often miss topological minutiae details, producing spurious features in unclear images [6].</p>
        <p>Deep learning reaches accuracies of 97% on datasets such as FVC2000 using convolutional neural networks (CNNs) [3, 7]. These methods demand training data and computation, which challenges real-time use, and they raise bias concerns [4, 8].</p>
        <p>
          Contactless fingerprint recognition has grown in relevance because of hygiene concerns [
          <xref ref-type="bibr" rid="ref2">2, 9</xref>
          ] but struggles with image quality and spatial orientation in uncontrolled conditions. Incomplete fingerprints require alignment methods to handle distortions [9]. The descriptor described here uses invariant Euclidean metrics to address these problems, aligning images based on shifts without neural networks. By preserving the topological features of minutiae, unlike deep learning, this method supports alignment under realistic biometric conditions.
        </p>
      </sec>
      <sec id="sec-1-3">
        <title>1.3. Objectives and Tasks</title>
        <p>The goal is to create a fingerprint descriptor based on Euclidean geometry for shift-based matching, without machine learning, tackling alignment issues while remaining efficient. This involves the following steps:
1. Study minutiae traits (location, direction, type) for stability in images that are transformed or distorted.
2. Develop a descriptor model that uses relative coordinates centered on the center of mass of the minutiae, along with tables of Euclidean measurements (distances and angles).
3. Create a method for selecting a key group of minutiae near the center of mass, to reduce edge artifacts and improve reliability.
4. Test the model on the FVC2000 (DB1_B) dataset [10], tuning settings (such as square side length and distance/angle thresholds) to measure how well shifts are detected.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Fingerprint Euclidean Descriptor Model</title>
      <sec id="sec-2-1">
        <title>2.1. Summary</title>
        <p>The descriptor is a tensor of Euclidean minutiae attributes, processed via coordinate alignment, metric calculation, feature extraction, and matching, per Figure 1. Minutiae are extracted using open-source algorithms [11] for compatibility with fingerprint processing pipelines. This keeps computational cost low, making the approach suited to mobile biometric systems.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Mathematical Model</title>
        <p>A fingerprint image is represented as a pixel matrix in 24-bit RGB format:
P = Z^3 ∩ [0, 255]^3,  I ∈ P^(H×W)
(1)
where P is the set of pixel values, I is the image, and H and W are its height and width.</p>
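<p>As a minimal sketch of this representation (the array dimensions below are hypothetical, not from the paper), the image in Eq. (1) maps to an H×W×3 array of 8-bit values:</p>

```python
import numpy as np

# A 24-bit RGB fingerprint image per Eq. (1): each pixel is a triple of
# integers in [0, 255], stored as an H x W x 3 array of uint8.
H, W = 300, 300  # hypothetical example dimensions
image = np.zeros((H, W, 3), dtype=np.uint8)

# A simple grayscale reduction, a common step before binarization.
gray = image.mean(axis=2).astype(np.uint8)
```

<p>Real pipelines would typically load the image with OpenCV and apply contrast enhancement before the grayscale step.</p>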
        <p>Minutiae extraction can be represented as a four-stage process:
• Binarization: converting the image to a binary mask B ∈ {0, 1}^(H×W) using a functional transformation B = f_b(I).
• Skeletonization: constructing a discrete skeleton S = f_s(B) to highlight ridge structures.
• Detection: identifying terminations M1 = {p ∈ S | deg(p) = 1} and bifurcations M2 = {p ∈ S | deg(p) ≥ 3}.
• Filtering: removing artifacts, M1' = {p ∈ M1 | d(p) ≤ ε}, M2' = {p ∈ M2 | d(p) ≤ ε}, where d(p) is the distance to the nearest ridge and ε is the error threshold.</p>
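<p>The detection stage can be illustrated with the classic crossing-number test on a binary skeleton. This is a simplified sketch, not the open-source extractor the paper relies on [11]; the function names are illustrative:</p>

```python
import numpy as np

def crossing_number(skel, y, x):
    """Crossing number at skeleton pixel (x, y): half the number of 0/1
    transitions around its 8-neighbourhood, walked in circular order."""
    nb = [skel[y - 1, x - 1], skel[y - 1, x], skel[y - 1, x + 1],
          skel[y, x + 1], skel[y + 1, x + 1], skel[y + 1, x],
          skel[y + 1, x - 1], skel[y, x - 1]]
    return sum(abs(int(nb[i]) - int(nb[(i + 1) % 8])) for i in range(8)) // 2

def detect_minutiae(skel):
    """Return (terminations, bifurcations) as lists of (x, y) points:
    a crossing number of 1 marks a ridge ending, 3 or more a bifurcation."""
    terminations, bifurcations = [], []
    h, w = skel.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if skel[y, x] != 1:
                continue
            cn = crossing_number(skel, y, x)
            if cn == 1:
                terminations.append((x, y))
            elif cn >= 3:
                bifurcations.append((x, y))
    return terminations, bifurcations
```

<p>For a short horizontal ridge segment, for example, the two end pixels are reported as terminations and no bifurcations are found.</p>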
        <sec id="sec-2-2-1">
          <title>As a result, we get the minutiae set:</title>
          <p>M = {(x, y, θ, t) | x ∈ [0, W], y ∈ [0, H], θ ∈ [0, 2π), t ∈ {0, 1}}
(2)
where x and y are the coordinates, θ is the orientation angle, and t denotes the minutiae type (0 for terminations, 1 for bifurcations).</p>
          <p>To ensure invariance to affine transformations, minutiae coordinates are transformed relative to the center of mass:
M' = T(M) = {(x − x̄, y − ȳ, θ, t) | (x, y, θ, t) ∈ M},  x̄ = (1/|M|) Σ x,  ȳ = (1/|M|) Σ y
(3)</p>
          <p>
            The center of mass is stable under affine transformations, with finite expectations and variances (E[x̄] = μ_x, E[ȳ] = μ_y, Var[x̄] = σ_x²). This stability follows from the consistent distribution of minutiae across papillary patterns, which remain unique to each finger [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ]. A subset of minutiae is then selected within a square region centered on the center of mass to reduce edge artifacts:
M'' = Φ(M') = {m ∈ M' : |x_m| ≤ a/2 ∧ |y_m| ≤ a/2}
(4)
where a is the square side length, optimized empirically, and |x_m| and |y_m| are the absolute offsets from the square's center.
          </p>
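<p>The centering and square-selection steps above can be sketched as follows (the function names are illustrative, not from the paper's implementation):</p>

```python
import numpy as np

def center_minutiae(minutiae):
    """Translate (x, y, theta, t) minutiae so their center of mass sits
    at the origin, as in the centering transform above."""
    xs = np.array([m[0] for m in minutiae], dtype=float)
    ys = np.array([m[1] for m in minutiae], dtype=float)
    cx, cy = xs.mean(), ys.mean()
    return [(x - cx, y - cy, th, t) for (x, y, th, t) in minutiae]

def select_square(minutiae, side):
    """Keep only minutiae inside a square of the given side length
    centred on the origin, as in the selection rule above."""
    half = side / 2.0
    return [m for m in minutiae if abs(m[0]) <= half and abs(m[1]) <= half]
```

<p>Because the coordinates are expressed relative to the center of mass, a pure translation of the input leaves the centered coordinates unchanged.</p>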
          <p>The descriptor comprises two matrices:
d_ij = √(Δx_ij² + Δy_ij²),  α_ij = atan2(Δy_ij, Δx_ij)
(5)
D = [d_ij], A = [α_ij],  i, j = 1..|M''|
(6)
where D contains pairwise Euclidean distances and A contains angles relative to the x-axis.</p>
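<p>The two matrices can be computed with vectorized NumPy operations; this sketch assumes the (x, y, θ, t) tuple layout defined above:</p>

```python
import numpy as np

def descriptor(minutiae):
    """Pairwise distance matrix D and angle matrix A for a list of
    (x, y, theta, t) minutiae."""
    pts = np.array([(m[0], m[1]) for m in minutiae], dtype=float)
    dx = pts[:, 0][None, :] - pts[:, 0][:, None]  # dx[i, j] = x_j - x_i
    dy = pts[:, 1][None, :] - pts[:, 1][:, None]
    D = np.hypot(dx, dy)      # pairwise Euclidean distances
    A = np.arctan2(dy, dx)    # angles relative to the x-axis
    return D, A
```

<p>For two points (0, 0) and (3, 4), D[0, 1] is 5 and A[0, 1] is atan2(4, 3).</p>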
          <p>Matching is performed using thresholds τ1 (distance) and τ2 (angle). For each pair of entries, the metric difference Δ (of distances, compared against τ1, and of angles, compared against τ2) is scored piecewise:
s(Δ, τ) = Δ/τ if Δ ≤ τ, and ∞ if Δ &gt; τ
(7)
S_ij = s(Δ_d, τ1) + s(Δ_α, τ2),  S = [S_ij], m1 ∈ M1', m2 ∈ M2'
(8)</p>
          <p>The matching score is:
score(F1, F2) = k / min(|M1'|, |M2'|)
(9)
where k is the order of the maximum minor of the score matrix S. This score quantifies the proportion of matched minutiae, enabling shift detection.</p>
          <p>In a practical scenario with two impressions of the same finger, the descriptor should identify the translation offset by matching minutiae pairs whose distance and angle differences fall under the thresholds, validated by their orientation and type, which ensures robustness against scanner variations.</p>
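<p>A heavily simplified matching sketch follows. It only illustrates the threshold test on pairwise distances and angles; the paper's actual score uses the maximum minor of the pair-score matrix, which is not reproduced here, and all names and default thresholds are illustrative:</p>

```python
import numpy as np

def pairwise(minutiae):
    """Pairwise distance and angle matrices for (x, y, theta, t) minutiae."""
    pts = np.array([(m[0], m[1]) for m in minutiae], dtype=float)
    dx = pts[:, 0][None, :] - pts[:, 0][:, None]
    dy = pts[:, 1][None, :] - pts[:, 1][:, None]
    return np.hypot(dx, dy), np.arctan2(dy, dx)

def match_score(desc1, desc2, tau_d=9.0, tau_a=np.radians(48.0)):
    """Fraction of rows i for which some pair (i, j) agrees in both
    distance and angle within the thresholds; a simplified stand-in for
    the minor-based score in the paper."""
    D1, A1 = desc1
    D2, A2 = desc2
    n = min(D1.shape[0], D2.shape[0])
    matched = 0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if (abs(D1[i, j] - D2[i, j]) <= tau_d
                    and abs(A1[i, j] - A2[i, j]) <= tau_a):
                matched += 1  # count each row at most once
                break
    return matched / n
```

<p>Because the descriptor is built from pairwise metrics, translating every minutia by the same offset leaves the score unchanged, which is exactly the shift invariance the model relies on.</p>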
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Software Implementation</title>
        <p>The descriptor is implemented in C# and Python with OpenCV, processing the FVC2000 (DB1_B) dataset. The workflow, visualized in Figures 2 to 7, includes image preprocessing, minutiae extraction, descriptor computation, and matching. The system is optimized for efficiency, making it suitable for real-time applications on mobile devices.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experiments Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. Hardware</title>
        <p>Experiments used an Intel i5-12400F CPU, 16 GB RAM, a GTX 1650 GPU, and Windows 11 for processing and replication.</p>
      </sec>
      <sec id="sec-3-1b">
        <title>3.2. Data</title>
        <p>
          The FVC2000 DB1_B dataset (100 fingers, 8 impressions per finger) [10] went through preprocessing to improve contrast and reduce noise, addressing common image issues [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.3. Plan and Metric</title>
        <p>The experiments tuned four parameters: square side length (a), distance threshold (τ1), angle threshold (τ2), and matching score threshold. Shift detection success is defined as:
match(F1, F2) = 1 if ∃[i, j] : V[i, j] = 1, else 0
(10)
where V combines experimental and verified matches. A combined score balances performance:
score' = 0.7 · R + 0.3 · ΔT
(11)
where R is the detection rate and ΔT is the efficiency term. This weighting emphasizes detection rate and efficiency, and was tested for statistical validity.</p>
      </sec>
    </sec>
    <sec id="sec-4a">
      <title>4. Results and Discussion</title>
      <sec id="sec-3-3">
        <title>4.1. Parameter Tuning</title>
        <p>• Experiment 1 (a): Testing the square side length a from 20 to 200 pixels, a = 160 performed best, with a 90.00% detection rate and a 66.43% score (Table 1). Larger values improved accuracy but reduced efficiency.
• Experiment 2 (τ1): A distance threshold of τ1 = 9 gave a 65.42% score, balancing accuracy and cost.
• Experiment 3 (τ2): An angle threshold of τ2 = 48° scored 65.64%.
• Experiment 4: A matching score threshold of 30% gave a 67.35% score.
The system was also tested on noisy and partial fingerprints, following forensic scenarios [9]. Early data shows that accuracy holds under noise, but more work is needed for distortions, setting up future improvements.</p>
      </sec>
      <sec id="sec-3-4">
        <title>4.2. Discussion</title>
        <p>
          The model achieved a 92.85% shift detection rate, comparable to deep learning [3], while using less computation. It processed only 1.6% of the candidate data of a brute-force approach (Table 2), suiting low-resource uses. Although the brute-force approach is more time-efficient, it proposes far more combinations that are not actually relevant as shift candidates. The model also needs post-processing to remove false negatives, like other methods [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
The descriptor resists affine changes, aiding alignment across scanners, but struggles with rotations. The lightweight design suits real-time uses such as mobile authentication or forensic fingerprint analysis [9]. Later studies could combine these metrics with learning-based filtering or extend the approach to palm prints [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. These ideas will be pursued in further research, including additional datasets and analysis.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusions</title>
      <p>[Table 2: proposed vs. brute-force approaches compared by number of pairs and time (ms)]</p>
      <p>The descriptor uses Euclidean characteristics to reach a 92.85% shift detection rate while resisting affine changes. It aligns fingerprints well and works as a lightweight option, but it requires post-processing, which restricts its use for identification. Next steps:
• Improve rotation invariance by estimating fingerprint orientation angles.
• Add hybrid machine-learning filtering.</p>
      <p>• Add other modalities, like vein patterns.</p>
      <p>These extensions will be studied, building on the present results to address further biometric problems.</p>
    </sec>
    <sec id="sec-5">
      <title>Contributions of Authors</title>
      <p>Problem formulation – Kirill Smelyakov; methods, theory, implementation, analysis, article – Yurii
Pohuliaiev.</p>
    </sec>
    <sec id="sec-6">
      <title>Conflict of Interest</title>
      <sec id="sec-6-1">
        <title>No conflict of interest.</title>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Financing</title>
      <sec id="sec-7-1">
        <title>Self-funded.</title>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>Data Availability</title>
      <sec id="sec-8-1">
        <title>Data and source code available upon request [11].</title>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>Use of Artificial Intelligence</title>
      <sec id="sec-9-1">
        <title>No AI methods used.</title>
      </sec>
    </sec>
    <sec id="sec-10">
      <title>Acknowledgments</title>
      <sec id="sec-10-1">
        <title>All authors approved the manuscript.</title>
      </sec>
    </sec>
    <sec id="sec-11">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-12">
      <title>References</title>
      <p>[3] S. Garg, et al., CNN with inversion and several augmentation approaches on the FVC2000_DB4 dataset, Systems and Soft Computing 6 (2024). doi:10.1016/j.sasc.2024.200106.</p>
      <p>[4] X. Guo, et al., Deep contrastive learning demonstrating commonalities among the fingerprints, Science Advances 10 (2024). doi:10.1126/sciadv.adi0329.</p>
      <p>[5] M. R. Chen, H., M. Zhang, Recent progress in visualization and analysis of fingerprint level 3 features, ChemistryOpen 11 (2022). doi:10.1002/open.202200091.</p>
      <p>[6] A. Kumar, Orientation Local Binary Patterns (OLBP) attaining competitive outcomes on the FVC2002, FVC2004, and FVC2006 datasets, SN Computer Science 1 (2020) 67. doi:10.1007/s42979-020-0068-y.</p>
      <p>[7] M.-S. Pan, et al., Optimization fingerprint reconstruction using deep learning algorithm, in: 2022 17th International Microsystems, Packaging, Assembly and Circuits Technology Conference (IMPACT), 2022, pp. 1-3. doi:10.1109/impact56280.2022.9966693.</p>
      <p>[8] Y. Guo, Creation of counterfeit fingerprints with DCGAN and verification through a Siamese network, in: International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), 2023. doi:10.1109/ICBAIE59714.2023.10281210.</p>
      <p>[9] J. Martins, et al., Use of incomplete fingerprints in forensic applications, Sensors 24 (2024). doi:10.3390/s24020664.</p>
      <p>[10] D. Maio, et al., FVC2000: Fingerprint verification competition, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (2002) 402-412. doi:10.1109/34.990140.</p>
      <p>[11] Y. Pohuliaiev, Euclidean descriptor software implementation, https://drive.google.com/file/d/1bRD2yqc7Zg6NSBpGYvtAVv-hiJrekJJh/view?usp=sharing, 2025. Google Drive repository.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hemalatha</surname>
          </string-name>
          ,
          <article-title>Relevance of contemporary fingerprint-based biometric methods in authentication</article-title>
          ,
          <source>in: International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE)</source>
          ,
          <year>2020</year>
          . doi:10.1109/ic-ETITE47903.2020.342.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>K.</given-names>
            <surname>Sreehari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Anzar</surname>
          </string-name>
          ,
          <article-title>Recent advancements and inventions in contactless fingerprint recognition</article-title>
          ,
          <source>Computers and Electrical Engineering</source>
          <volume>122</volume>
          (
          <year>2025</year>
          ). doi:10.1016/j.compeleceng.2024.109894.
        </mixed-citation>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>