<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Dense Ground Truth for Indoor Localization Competitions: Foot-mounted IMU-Enhanced Evaluation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Antonio R. Jiménez</string-name>
          <email>antonio.jimenez@csic.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joaquín Torres-Sospedra</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luisa Ruiz-Ruiz</string-name>
          <email>luisa.ruiz@csic.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Centre for Automation and Robotics (CSIC-UPM)</institution>
          ,
          <addr-line>Ctra. Campo Real km. 0,2, 28500, Arganda del Rey (Madrid)</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science, Universitat de València</institution>
          ,
          <addr-line>Avda. Universitat s/n, 46100 Burjassot, Valencia</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Positioning and Indoor Navigation</institution>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>VALGRAI, Valencian Graduate School and Research Network of Artificial Intelligence, Campus de Vera</institution>
          ,
          <addr-line>46022 Valencia</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Indoor localization competitions, such as IPIN Track 3, are crucial for benchmarking smartphone-based localization solutions. However, their current evaluation relies on a limited number of sparse, manually placed, and individually georeferenced ground truth (GT) points. This deployment process is costly and labor-intensive, and the inherent GT sparsity restricts evaluation granularity. It frequently obscures performance nuances and significant position estimation errors that occur between sparse points, consequently limiting comprehensive scoring. This paper proposes a novel method to generate a dense, high-fidelity GT trajectory for competition evaluation without requiring additional manual GT deployment. Our approach fuses a dense, relative trajectory, derived from a foot-mounted Inertial Measurement Unit (IMU) using Zero-Velocity Updates (ZUPT), with the existing sparse, highly accurate surveyed GT points. This fusion is achieved through a robust segment-wise rigid alignment method, precisely translating and rotating individual trajectory segments, followed by a final global trajectory smoothing. The resulting dense GT enables a more granular and continuous evaluation of competitor trajectories at their native IMU output frequency (e.g., 100 Hz). This offers a more comprehensive and fair assessment, providing enhanced diagnostic capabilities. We outline the methodology, discuss key implementation considerations, and propose an evaluation strategy for the generated GT, highlighting its potential to significantly enhance future indoor localization benchmarks.</p>
      </abstract>
      <kwd-group>
        <kwd>IPIN competition</kwd>
        <kwd>Track 3</kwd>
        <kwd>Foot-mounted IMU</kwd>
        <kwd>Ground-truth generation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Accurate indoor localization is a critical capability for a wide range of applications, from emergency
services and autonomous robotics to augmented reality and asset tracking. Unlike Global Navigation
Satellite Systems (GNSS), which are ubiquitous outdoors, indoor positioning faces unique challenges
due to signal propagation issues (e.g., multipath, attenuation) and the absence of reliable infrastructure.
Competitions like the one organized by the International Conference on Indoor Positioning and Indoor
Navigation (IPIN) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] play a key role by providing standardized benchmarks and encouraging innovation
in diverse environments.
      </p>
      <p>Despite their fairness and rigor, current evaluation methodologies, particularly in IPIN Track 3
“Smartphone (offsite-online)”, suffer from a major limitation: the reliance on a sparse set of manually
surveyed Ground Truth (GT) points. These highly accurate references are typically spaced 20–50
meters apart due to the cost and effort involved in deploying them across large, multi-floor buildings.
Participants submit high-frequency position estimates (e.g., at 2 Hz), which are then evaluated against
these sparse GT points.</p>
      <p>This sparse evaluation introduces several key challenges:
• Masked Error Accumulation: Localization drift and errors between GT points may go unnoticed
or be insufficiently penalized, obscuring actual system performance.
• Misleading Fidelity Metrics: Simplified algorithms (e.g., using straight-line interpolation or
ignoring lateral movement) can appear accurate at checkpoints, misrepresenting true trajectory
quality.
• Exploitable GT Placement Biases: The predictable placement of GT points (e.g., near doors or
intersections) may lead to overfitting, undermining the fairness of the benchmark.</p>
      <p>
        To overcome these issues, the development of dense, high-fidelity reference trajectories is essential.
Foot-mounted Inertial Measurement Units (IMUs), leveraging Inertial Navigation System (INS) principles
and Zero-Velocity Updates (ZUPT), can provide dense, relative positioning with high short-term accuracy
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. However, their performance degrades over time due to drift, requiring fusion with absolute
references. While complex fusion techniques such as Kalman filters or graph-based optimization are
available, the strong consistency of foot-mounted IMU trajectories allows for more lightweight and
robust alternatives.
      </p>
      <p>
        This paper proposes a novel methodology for generating a dense, high-fidelity ground truth
trajectory for indoor localization evaluation. It fuses a continuous, drifting trajectory from a
foot-mounted IMU with existing sparse GT points using a segment-wise rigid alignment method: a
combination of localized translation and rotation corrections followed by global trajectory smoothing
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The resulting trajectory allows for continuous, fine-grained evaluation at native IMU frequencies
(typically over 100 Hz), offering enhanced accuracy and diagnostic power.
      </p>
      <p>The main contributions of this work are:
1. A critical analysis of the limitations of sparse GT-based evaluation in current indoor localization
competitions, particularly IPIN Track 3.
2. A robust method to generate dense ground truth by fusing IMU-based relative trajectories with
sparse surveyed GT points.
3. A segment-wise rigid alignment strategy that preserves local morphology and corrects drift.
4. An internal validation framework based on held-out GT points to assess the quality of the
generated dense GT.</p>
      <p>The remainder of this paper is structured as follows: Section 2 describes the enhanced data collection
procedure. Section 3 details the proposed fusion method. Section 4 presents experimental results,
validation, and implications. Section 5 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Proposed New Data Collection Methodology</title>
      <p>
        The 2025 edition of the IPIN Track 3 competition introduces enhancements to the standard data collection
procedure, which gathers smartphone sensor data (WiFi, BLE, etc.) via the GetSensorData App [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ].
As a key novelty, the actor is now equipped with a foot-mounted inertial sensor. This setup enables
synchronized acquisition of smartphone and IMU measurements under a shared time base. The IMU
data allow reconstruction of the actor’s trajectory using INS-ZUPT algorithms. Together with sparse
ground truth marks (POSI marks) recorded in the mobile logfile when crossing surveyed GT points,
this setup provides the required information to fuse both datasets and generate a dense GT reference.
      </p>
      <sec id="sec-2-1">
        <title>2.1. The New IPIN Track 3 Data Collection Procedure</title>
        <p>
          Between 2015 and 2024, IPIN Track 3 data collection involved deploying physical GT marks on the
floor, measuring their distances to nearby structural features (e.g., walls, columns), and extracting their
coordinates from georeferenced maps. An actor equipped with one or more smartphones (running
GetSensorData [
          <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
          ]) would follow predefined routes, pressing a button to log the exact timestamp
when stepping over each GT mark.
        </p>
        <p>Figure 1 shows the upgraded setup used in the 2025 edition.</p>
        <p>The main innovations introduced this year are:</p>
        <p>
          • Foot-Mounted IMU: A custom-made MEMS IMU (model IMUE-CSIC) developed at the Center
for Automation and Robotics (CAR) CSIC-UPM [
          <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
          ], attached to the actor’s foot. It provides
104 Hz accelerometer and gyroscope readings and transmits data in real-time via BLE to the
smartphone.
• Updated GetSensorData App: The new Android release (June 2025, https://gitlab.com/
getsensordatatools) supports modern phones (e.g., SG24) and logs new data types (e.g., step
count, uncalibrated magnetometer, GNSS raw). Crucially, it can now record synchronized streams
from the IMUE-CSIC, ensuring all data share a common time reference for fusion.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Collected Foot-Mounted IMU Data and Estimated Trajectory with Drift</title>
        <p>
          Raw data from the foot-mounted IMU is processed using an Inertial Navigation System (INS) with an
Extended Kalman Filter (EKF) [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. The EKF state vector includes position, velocity, and orientation
errors. Zero-Velocity Updates (ZUPT) are applied during foot-flat phases to constrain drift.
        </p>
        <p>This INS-EKF-ZUPT approach yields a dense trajectory P(t) = (x(t), y(t), z(t)) at 104 Hz, along with
orientation estimates and state covariances. Figure 3 illustrates the drift observed in a training trial
(shown previously in Fig. 2).</p>
        <p>Even in short trajectories, notable horizontal and vertical drift can be observed. The effect is magnified
in longer, multi-floor trials, such as the one in Figure 4, recorded during testing.</p>
        <p>These results underscore the need for fusion with absolute GT points to correct drift and enable the
IMU data to support rigorous competition evaluation.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Dense Ground Truth Generation</title>
      <p>The core of dense Ground Truth (GT) generation lies in effectively fusing the sparse, accurate GT
points with the dense, but drifting, IMU trajectory. Various approaches exist for this task, broadly
categorised into segment-wise corrections and global optimization techniques. Global optimization
methods, such as batch smoothing or graph-based optimization, typically aim to find a globally optimal
trajectory by minimizing a cost function that considers all sensor measurements simultaneously. While
theoretically powerful and able to distribute errors more smoothly across the entire path, these methods
often introduce significant computational complexity and may struggle to preserve sharp,
piecewise-linear features of the trajectory without highly sophisticated motion models or specific constraints.
Furthermore, some implementations can be computationally prohibitive for very dense IMU data due
to the large number of variables involved in the optimization process. Given these considerations and
the specific requirement to maintain the precise local geometry of abrupt turns and straight segments,
we propose a robust segment-wise correction methodology.</p>
      <p>This method provides a robust approach to correct IMU trajectory drift using sparse Ground Truth
(GT) points, meticulously preserving the local shape and sharp turns inherent in the original IMU data.
The process integrates three main stages: GT point conversion and normalization, segment-wise rigid
alignment, and final trajectory smoothing.</p>
      <sec id="sec-3-1">
        <title>3.1. GT Point Conversion and Normalization</title>
        <p>Sparse GT points, initially recorded with latitude, longitude, timestamp (t), and a FloorID, are
transformed into a local Cartesian (X, Y, Z) coordinate system. A planar projection approximation is used
for the (X, Y) coordinates, where a reference point (Lat0, Lon0), typically the first GT point, serves as the
origin:</p>
        <p>X = R_Earth ⋅ (Lon − Lon0) ⋅ cos(Lat0)</p>
        <p>Y = R_Earth ⋅ (Lat − Lat0)</p>
        <p>where R_Earth is Earth’s radius and angles are in radians. The Z-coordinate is derived from the FloorID,
assuming a constant height per floor: Z = (FloorID − min(FloorID)) ⋅ HeightPerFloor. Finally, converted
GT points are normalized such that their first point aligns with the IMU trajectory’s origin.</p>
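        <p>The conversion above can be sketched as follows. The Earth-radius constant and the default height per floor are illustrative assumptions, not values stated by the authors.</p>

```python
import math

R_EARTH = 6_371_000.0  # mean Earth radius in meters (assumed value)

def gt_to_local(lat_deg, lon_deg, floor_id, lat0_deg, lon0_deg, min_floor,
                height_per_floor=3.0):
    """Planar projection of one GT point into a local (X, Y, Z) frame.

    (lat0_deg, lon0_deg) is the reference origin, typically the first GT point.
    height_per_floor is an illustrative assumption (meters per floor).
    """
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    lat0, lon0 = math.radians(lat0_deg), math.radians(lon0_deg)
    x = R_EARTH * (lon - lon0) * math.cos(lat0)   # X = R * dLon * cos(Lat0)
    y = R_EARTH * (lat - lat0)                    # Y = R * dLat
    z = (floor_id - min_floor) * height_per_floor # Z from FloorID
    return x, y, z

# The reference point itself maps to the local origin; floor 2 maps to 6 m.
x0, y0, z0 = gt_to_local(61.4498, 23.8595, 0, 61.4498, 23.8595, 0)
x2, y2, z2 = gt_to_local(61.4498, 23.8595, 2, 61.4498, 23.8595, 0)
```

        <p>The planar approximation is adequate at building scale, where curvature effects over tens of meters are negligible compared to the GT surveying accuracy.</p>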
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Segment-Wise Alignment with Rigid Transformation and Residual Adjustment</title>
        <p>This core stage aligns the IMU trajectory, P_IMU(t), with the GT points, GT(t_i), while preserving its
local geometry. The IMU trajectory is divided into segments defined by consecutive GT points (GT_i at
time t_i and GT_{i+1} at time t_{i+1}). Each segment is corrected in three steps:</p>
        <p>1. Initial Translation: The segment is first translated by t1 = GT_i − P_IMU(t_i), so that its start
coincides with GT_i. The translated segment is denoted P′(t).</p>
        <p>2. Rigid Rotation (XY-plane): A 2D rotation R(θ) is applied in the XY-plane, pivoting around GT_i
(which is now P′(t_i)). The rotation angle θ aligns the vector P′(t_i) → P′(t_{i+1}) with GT_i → GT_{i+1}.
The rotated segment is denoted P″(t).</p>
        <p>3. Linear Residual Adjustment: A final residual translation vector t2 = GT_{i+1} − P″(t_{i+1}) is
computed. This residual is linearly interpolated across the segment and applied to the XY components of
P″(t): P(t) = P″(t) + t2 ⋅ (t − t_i)/(t_{i+1} − t_i).</p>
        <p>For static periods before the first GT and after the last GT, a constant translation offset, derived
from the nearest GT point, is applied.</p>
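        <p>The three correction steps for a single segment can be sketched compactly as below. This is a simplified illustration under stated assumptions (the residual is interpolated over all three components for brevity), not the authors’ exact implementation.</p>

```python
import numpy as np

def align_segment(seg, gt_start, gt_end):
    """Rigidly align one IMU segment seg (N, 3) so that its endpoints
    match the surveyed GT points gt_start and gt_end (each shape (3,)).

    1) translate the segment start onto gt_start;
    2) rotate in the XY-plane about gt_start so the segment's end
       direction matches the GT direction;
    3) linearly interpolate the remaining endpoint residual over the
       segment.
    """
    seg = np.asarray(seg, dtype=float).copy()
    gt_start, gt_end = np.asarray(gt_start, float), np.asarray(gt_end, float)
    # Step 1: initial translation t1 = GT_i - P_IMU(t_i)
    seg += gt_start - seg[0]
    # Step 2: rigid rotation in the XY-plane, pivoting around gt_start
    v_imu = seg[-1, :2] - seg[0, :2]
    v_gt = gt_end[:2] - gt_start[:2]
    theta = np.arctan2(v_gt[1], v_gt[0]) - np.arctan2(v_imu[1], v_imu[0])
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    seg[:, :2] = (seg[:, :2] - seg[0, :2]) @ rot.T + seg[0, :2]
    # Step 3: linear residual adjustment t2 = GT_{i+1} - P''(t_{i+1})
    t2 = gt_end - seg[-1]
    alpha = np.linspace(0.0, 1.0, len(seg))[:, None]
    return seg + alpha * t2

# Example: a drifted straight 10 m segment; its endpoints snap onto the GTs
# while the interior keeps the segment's straight shape.
seg = np.column_stack([np.linspace(0, 10, 11), np.zeros(11), np.zeros(11)])
fixed = align_segment(seg, np.array([0.0, 0.0, 0.0]), np.array([0.0, 9.0, 0.0]))
```

        <p>Because each step is a rigid motion plus a slowly varying linear correction, sharp turns and straight stretches inside the segment are preserved rather than warped.</p>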
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Final Trajectory Smoothing</title>
        <p>
          To mitigate the typical ZUPT position corrections at foot stances, as well as any other high-frequency noise, while
preserving essential trajectory features (e.g., straight lines, sharp turns), a Savitzky-Golay filter [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] is
applied to the corrected trajectory. This filter is applied independently to each dimension (X, Y, Z).
Optimal results were achieved with a low polynomial order (1) and a window length of 211 points (
about 2 seconds for an IMU sampling at 104 Hz).
        </p>
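        <p>The smoothing step can be reproduced with SciPy’s savgol_filter using the parameters reported above (polynomial order 1, 211-sample window). The authors’ scripts are implemented in Matlab, so this Python sketch is only illustrative.</p>

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_trajectory(traj, window=211, polyorder=1):
    """Apply a Savitzky-Golay filter independently to each dimension
    (X, Y, Z) of the corrected trajectory.

    window=211 samples is about 2 s at the IMU's 104 Hz sampling rate.
    """
    traj = np.asarray(traj, dtype=float)
    return np.column_stack(
        [savgol_filter(traj[:, d], window, polyorder) for d in range(traj.shape[1])]
    )

# A noisy straight line stays straight after smoothing, with reduced jitter.
t = np.linspace(0, 10, 1040)                      # 10 s at 104 Hz
rng = np.random.default_rng(0)
clean = np.column_stack([t, 2 * t, np.zeros_like(t)])
noisy = clean + rng.normal(0, 0.05, (1040, 3))
smooth = smooth_trajectory(noisy)
```

        <p>With polynomial order 1, the filter fits a local line within each window, which attenuates the step-wise ZUPT corrections without rounding off genuine straight-line motion.</p>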
        <p>The outcome of this methodology is a high-fidelity Ground Truth estimate, effectively combining the
local robustness of INS-ZUPT with the global accuracy of the sparse GT points, yielding a smooth and
geometrically faithful path representation, as will be seen in the next section.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Quality of Experimental Results and Validation</title>
      <p>Our methodology for generating a dense, high-fidelity Ground Truth (GT) will be rigorously validated
using data collected from the multi-floor Tampere University building for the IPIN 2025 Track 3
competition. The validation will be conducted through two primary approaches: a) preliminary visual
observation of the corrected trajectories and b) a quantitative assessment using a subsample of GT
points for fusion and the remaining as an independent test set.</p>
      <sec id="sec-4-1">
        <title>4.1. Visual Observation Validation</title>
        <p>This subsection presents visual evidence of the achieved improvements. We will utilize the same IMU
trajectories, initially presented with drift in Section 2.2 (Figures 3 and 4), as illustrative examples.</p>
        <p>After applying the proposed fusion algorithm described in Section 3, the significant improvement
in trajectory accuracy and coherence becomes evident. Figures 5 and 6 visually demonstrate this
enhancement. It is clear that the corrected IMU trajectory (represented in blue, now serving as the
dense GT) not only precisely aligns with the sparse GT points but also effectively preserves the intricate
details and local geometry of the original drifted trajectory (shown in red).</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Quantitative Validation through GT Subsampling</title>
        <p>To rigorously validate the accuracy of our generated dense Ground Truth, we propose a cross-validation
inspired approach. Specifically, we will intentionally withhold a distinct subset of the surveyed GT
points from the fusion process. After generating the dense GT trajectory using only the remaining GT
points, we will evaluate the accuracy of our fused trajectory at the precise locations of the withheld GT
points.</p>
        <p>For this validation, 10% of the total GT points were reserved for evaluation in each fold, with the
remaining 90% utilized in the fusion process. A total of ten folds were generated for each trajectory,
ensuring a comprehensive assessment across different subsets of data.</p>
        <p>Table 1 presents the key error metrics (mean Euclidean distance error, median, 3rd quartile, and
maximum error) obtained at these withheld GT points, demonstrating the precision of our fused
trajectory compared to the actual surveyed points.</p>
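        <p>The withheld-point error metrics reported in Table 1 correspond to statistics like those computed below. Function and variable names here are illustrative, not taken from the authors’ scripts.</p>

```python
import numpy as np

def evaluation_metrics(dense_gt, dense_times, withheld_pts, withheld_times):
    """Euclidean-error statistics of the fused dense GT at withheld GT points.

    dense_gt:       (N, 3) fused trajectory samples
    dense_times:    (N,) timestamps of the fused trajectory (sorted)
    withheld_pts:   (M, 3) surveyed GT points held out of the fusion
    withheld_times: (M,) their POSI-mark timestamps
    """
    # Look up the fused position at each withheld timestamp.
    idx = np.searchsorted(dense_times, withheld_times)
    idx = np.clip(idx, 0, len(dense_times) - 1)
    err = np.linalg.norm(dense_gt[idx] - withheld_pts, axis=1)
    return {
        "mean": float(np.mean(err)),
        "median": float(np.median(err)),
        "q3": float(np.percentile(err, 75)),
        "max": float(np.max(err)),
    }

# Toy example: fused trajectory along X, three withheld points slightly off it.
times = np.arange(0.0, 10.0, 0.5)
traj = np.column_stack([times, np.zeros_like(times), np.zeros_like(times)])
held = np.array([[2.0, 0.3, 0.0], [5.0, 0.0, 0.0], [8.0, 0.4, 0.0]])
m = evaluation_metrics(traj, times, held, np.array([2.0, 5.0, 8.0]))
```

        <p>In the actual validation, these statistics are accumulated over the ten folds per trajectory; a time-interpolated lookup (rather than nearest-sample) would be a natural refinement at lower trajectory rates.</p>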
        <p>These quantitative error values provide strong evidence of the proposed method’s accuracy.
Specifically, the maximum observed horizontal error of 0.74 meters, occurring when an intermediate GT
point is withheld, confirms that the interpolation between consecutive GTs (even across larger gaps)
remains well below 1 meter. Furthermore, the 0.0% Floor ID Error unequivocally demonstrates the
method’s robustness in maintaining vertical consistency across multiple floors. Given these results, it is
reasonable to expect an even lower error when all available GT points are utilized for fusion at their
typical density, likely yielding average errors around 0.5 meters. Critically, the error is expected to
diminish as the generated dense GT points approach any of the sparse surveyed GT points. Therefore,
we confidently conclude that our generated dense GT is indeed suitable as a high-accuracy, continuous
reference for rigorous localization system evaluation.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Impact on Competitor Evaluation: Future Work and Discussion</title>
        <p>In the current IPIN 2025 Track 3 competition, our newly proposed dense GT generation methodology
will not be used for computing the official scores that determine the competition winners. However,
we will compute alternative scores using this dense GT in parallel, specifically to analyze its potential
influence on competitor rankings.</p>
        <p>Through this future analysis, we aim to systematically compare the evaluation results obtained with
the traditional sparse GT against those derived from our proposed dense GT. We expect this comparison
to reveal several key differences:
• A different distribution of errors across competitor trajectories, which will expose sustained drifts
or localized accuracy issues not evident with sparse GT evaluation.
• Tangible changes in final scoring metrics (e.g., 3rd quartile error, mean error), arguing that
the dense GT provides a more representative and robust assessment of real-world continuous
localization accuracy.
• Enhanced diagnostic capabilities for competitor algorithms, allowing for clearer identification of
their strengths and weaknesses across the entire trajectory, rather than just at sparse checkpoints.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>This paper has presented a novel and robust methodology for generating a dense, high-fidelity Ground
Truth (GT) trajectory, directly addressing the limitations of sparse GT evaluation in indoor localization
competitions, particularly within the IPIN Track 3 framework. Our approach efectively fuses a dense,
relative trajectory derived from a foot-mounted Inertial Measurement Unit (IMU) using Zero-Velocity
Updates (ZUPT) with existing sparse, highly accurate surveyed GT points. The core of our solution lies
in a segment-wise rigid alignment method that uniquely preserves the detailed local morphology and
sharp turns of the original IMU path, followed by a global trajectory smoothing. This methodology yields
a dense GT that significantly enhances the granularity and fairness of localization system evaluation,
enabling continuous assessment of competitor trajectories at their native output frequencies. Visual
and quantitative experimental results confirm the high accuracy and fidelity of the generated dense
GT, proving its suitability as a robust reference for benchmarking and laying the groundwork for more
sophisticated and diagnostically rich evaluations in future indoor localization challenges.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>We thank the local IPIN organizers in Tampere: Lucie, Jari and Roman for their full support during the
measurement campaign; Miguel from Université Gustave Eiffel; and the Competition Chairs at CNR:
Francesco, Filippo, and Davide, for their assistance with the georeferencing of GT points.</p>
      <p>This work was supported by the INDRI Project (PID2021-122642OB-C43 and
PID2021-122642OB-C42, funded by AEI/10.13039/501100011033 and FEDER, EU) and the DISTRIMUSE Project (Horizon
Europe, KDT-JU-2023-2-RIA, project no. 101139769). J. Torres-Sospedra also acknowledges funding
from Generalitat Valenciana (CIDEXG/2023/17, Conselleria d’Educació, Universitats i Ocupació).</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used ChatGPT 4o and Gemini 2.5 Flash in order to:
Grammar and spelling check. Figures are generated using Matlab which is the programming language
for script implementation. Intellectual ideas are original from the authors. The authors reviewed and
edited the content as needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Potorti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Crivello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Vladimirov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhuge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Perez-Navarro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Jiménez</surname>
          </string-name>
          ,
          <article-title>Offsite evaluation of localization systems: Criteria, systems, and results from IPIN 2021 and 2022 competitions</article-title>
          ,
          <source>IEEE Journal of Indoor and Seamless Positioning and Navigation</source>
          <volume>2</volume>
          (
          <year>2024</year>
          )
          <fpage>92</fpage>
          -
          <lpage>129</lpage>
          . doi: 10.1109/JISPIN.2024.3355840.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Jiménez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Seco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Prieto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Guevara</surname>
          </string-name>
          ,
          <article-title>Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU</article-title>
          ,
          <source>in: 2010 7th Workshop on Positioning, Navigation and Communication</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>135</fpage>
          -
          <lpage>143</lpage>
          . doi: 10.1109/WPNC.2010.5649300.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Gorry</surname>
          </string-name>
          ,
          <article-title>General least-squares smoothing and differentiation by the convolution (Savitzky-Golay) method</article-title>
          ,
          <source>Analytical Chemistry</source>
          <volume>62</volume>
          (
          <year>1990</year>
          )
          <fpage>570</fpage>
          -
          <lpage>573</lpage>
          . URL: https://doi.org/10.1021/ac00205a007. doi: 10.1021/ac00205a007.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Jiménez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Seco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Torres-Sospedra</surname>
          </string-name>
          ,
          <article-title>Tools for smartphone multi-sensor data registration and GT mapping for positioning applications</article-title>
          , in: 2019
          <source>International Conference on Indoor Positioning and Indoor Navigation (IPIN)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi: 10.1109/IPIN.2019.8911784.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Gutierrez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Jiménez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Seco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. J.</given-names>
            <surname>Álvarez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Aguilera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Torres-Sospedra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Melchor</surname>
          </string-name>
          ,
          <article-title>GetSensorData: An extensible android-based application for multi-sensor data registration</article-title>
          ,
          <source>SoftwareX</source>
          <volume>19</volume>
          (
          <year>2022</year>
          )
          <fpage>101186</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S2352711022001121. doi: 10.1016/j.softx.2022.101186.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ruiz-Ruiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Seco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jiménez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Garcia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>García</surname>
          </string-name>
          ,
          <article-title>Evaluation of gait parameter estimation accuracy: a comparison between commercial IMU and optical capture motion system</article-title>
          ,
          <source>in: 2022 IEEE International Symposium on Medical Measurements and Applications (MeMeA)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>2</lpage>
          . doi: 10.1109/MeMeA54994.2022.9856475.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ruiz-Ruiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>García-Domínguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Jiménez</surname>
          </string-name>
          ,
          <article-title>A novel foot-forward segmentation algorithm for improving IMU-based gait analysis</article-title>
          ,
          <source>IEEE Transactions on Instrumentation and Measurement</source>
          <volume>73</volume>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . doi: 10.1109/TIM.2024.3449951.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>