<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Journal of Human</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.3390/app9153171</article-id>
      <title-group>
        <article-title>Head-mounted displays for total knee arthroplasty</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ánxela Pérez Costa</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anna De Liddo</string-name>
          <email>anna.deliddo@open.ac.uk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nieves Pedreira Souto</string-name>
          <email>nieves.pedreira@udc.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Florian Michaud</string-name>
          <email>florian.michaud@udc.es</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Coruña</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Spain</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Workshop</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science and Information Technologies, Faculty of Computer Science, CITIC-Research Center of Information and Communication Technologies, Universidade da Coruña</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Campus Industrial de Ferrol, Universidade da Coruña</institution>
          ,
          <addr-line>Ferrol</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Information and Communication Technologies, Universidade da Coruña, Biomedical Research Institute of A Coruña (INIBIC)</institution>
          ,
          <addr-line>A Coruña</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Knowledge Media Institute (KMi), The Open University</institution>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Laboratory of Mechanical Engineering, Centro de Investigación en Tecnologías Navales e Industriales (CITENI)</institution>
          ,
          <addr-line>Campus Industrial de Ferrol</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>4282</volume>
      <fpage>131</fpage>
      <lpage>143</lpage>
      <abstract>
        <p>The integration of Augmented Reality (AR) in Total Knee Arthroplasty (TKA) has the potential to enhance surgical precision and improve surgeon performance by providing real-time, three-dimensional visual information directly within the surgeon's field of view. This study evaluates the performance of various interaction methods for AR Head-Mounted Displays (HMDs), focusing on usability, efficiency, and ergonomics within the constraints of the surgical environment. Five interaction methods (head movement, holographic touch, hand gestures, gaze fixation, and a gaze-gesture combination) were assessed based on task completion time, error rates, subjective workload, and user comfort.</p>
      </abstract>
      <kwd-group>
        <kwd>usability evaluations</kwd>
        <kwd>Augmented reality (AR)</kwd>
        <kwd>head-mounted displays (HMD)</kwd>
        <kwd>total knee arthroplasty (TKA)</kwd>
        <kwd>interaction methods</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
applications. For example, Williams et al. [7] compared gesture-based and speech-based interactions,
highlighting trade-offs in terms of efficiency and user preference. Similarly, Schön et al. [8] investigated
the ‘gorilla arm’ fatigue phenomenon in mid-air interactions, while Kim et al. [9] studied how users
interact with both virtual and physical objects in 3D matching tasks. These studies commonly follow a
structured evaluation framework: (1) a pre-evaluation phase, including user training and background
data collection, (2) a task execution phase with controlled variables, and (3) post-task questionnaires
and feedback collection [8, 10, 11, 12].
      </p>
      <p>In the medical domain, research on AR interaction methods has primarily focused on the recognition
performance of specific interaction systems rather than a comparative analysis of different interaction
modalities. Bautista et al. [13] compared native gestures of the Meta AR glasses with the MYO armband
for gesture recognition in an orthopedic surgery application. However, their evaluation was limited
to non-overlapping interfaces, leaving unexplored the potential for false gesture recognition during
surgery. Similarly, another study [14] examined gesture-based interactions using the Oculus HMD
and the MYO armband in an image-guided surgery scenario, but the evaluation was restricted to a
pre-operative context, outside the constraints of real-time surgery.</p>
      <p>These studies highlight the importance of evaluating interaction methods, but they also reveal a
significant gap: little research systematically compares interaction methods within actual surgical
environments, whose constraints can restrict the set of feasible interactions. To fill this gap, our work analyzes these
environments to define feasible interaction methods for implementation and proposes both objective
and subjective evaluations of various interaction techniques used with AR HMDs under recreated
surgical conditions. Drawing from existing research frameworks, we adapt evaluation protocols to
address the specific needs and constraints of orthopedic surgery, with the goal of identifying the most
effective and user-friendly interaction methods to enhance surgical support and assist surgeons in the
operating room.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Material and methods</title>
      <sec id="sec-2-1">
        <title>2.1. Surgical environment</title>
        <p>The surgical environment is a highly controlled, sterile, and dynamic space where precision, safety,
and efficiency are paramount. It is typically occupied by a multidisciplinary team, including surgeons,
nurses, and anesthesiologists, all operating under strict protocols [15]. The environment is often
constrained in terms of physical space and subject to continuous movement, instrument usage, and
communication. Background noise from equipment, alarms, and team interactions is common, which
can interfere with technologies like voice recognition. Additionally, the presence of blood or bodily
fluids on the surgeon’s gloves or tools can impair the reliability of gesture recognition or touch-based
interactions, especially when systems rely on visual tracking or capacitive sensors. Surgeons must
also maintain sterility at all times, limiting the ability to interact directly with hardware interfaces.
Lighting conditions, visibility, and the need for uninterrupted focus further complicate the integration
of advanced technologies, making human-computer interaction a critical consideration in designing AR
HMDs for surgical use.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Interaction selection</title>
        <p>AR HMDs offer a range of interaction methods, each with distinct advantages and limitations in surgical
environments. To identify the most suitable interaction techniques for our device and context, we first
compiled a list of potential methods. These were then filtered based on their compatibility with the
device used in our evaluation —Microsoft HoloLens 2— with a particular focus on methods that did
not require additional hardware, ensuring practicality and ease of integration in the operating room.
Among the compatible methods, we assessed their feasibility within the surgical environment described
in section 2.1.</p>
        <p>These interactions can be implemented in a single-modal form, where communication between the
user and the system relies on one input channel—such as voice, gestures, or gaze used independently.
In contrast, bimodal, or more broadly multimodal, interactions involve the simultaneous use of two
or more input modalities. This enables richer, more intuitive, and efficient communication with AR
systems [16, 17]. However, studies have shown that while multimodal interactions can reduce errors
and cognitive load, they may also increase the time required to complete tasks, and users may report
higher perceived effort [18]. These findings suggest that bimodal or multimodal interactions influence
AR performance in a task-dependent manner: they often enhance accuracy and user acceptance but
may involve trade-offs in speed and perceived workload [16].</p>
        <p>Among the interactions allowed by our device, several interaction techniques were excluded from
this study based on their suitability for the surgical environment:
• Interactions involving physical contact were ruled out due to the challenges they pose for
maintaining sterility in surgical settings. Likewise, techniques requiring the surgeon to make
significant movements away from the operating area were dismissed, as such actions could
interrupt the continuous visualization of data and disrupt the surgical workflow.
• Body movement —specifically, device movement in space— is incompatible with the surgical
workflow due to the constrained operating environment. Surgeons must remain stationary during
procedures to maintain precision and avoid disrupting the sterile field, and the limited space,
crowded with colleagues and surgical tools, leaves no room for unnecessary movements. Any
such displacement could compromise both the surgical procedure and the safety of the patient.
• Spatial Mapping &amp; Anchoring does not detect user intent, as its primary function is limited to
providing 3D visualization and ensuring spatial alignment of holograms within the physical
environment. It is not designed for interaction or decision-making, but rather to enhance the
accuracy of holographic placement relative to real-world objects, thereby supporting contextual
awareness during surgery.
• In the case of natural language input versus structured voice commands, natural language was
excluded due to the high risk of unintended activations during conversations with other surgical
team members. Structured voice commands, in contrast, provide a more deliberate and reliable
mode of interaction and could be considered in future works.
• Image-based interactions were also discarded. Since our system already employs ArUco markers
for tool recognition [19], adding more visual elements could increase cognitive load and risk
confusion—issues previously observed in similar evaluations [19, 20]. Moreover, the presence of
blood or bodily fluids during surgery may obscure visual markers, further reducing reliability; a
minimal marker-detection sketch follows this list.</p>
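        <p>As an illustration of the marker-based tool recognition mentioned above, the following minimal Python sketch detects ArUco markers in a single camera frame with OpenCV (version 4.7 or later is assumed); the actual pipeline of [19] may differ in dictionary choice, calibration, and pose estimation.</p>
        <preformat>
# Minimal ArUco detection sketch (illustrative; not the system described in [19]).
import cv2

# Assumption: a 4x4 marker dictionary and default detector parameters.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.png")  # hypothetical input frame
corners, ids, _rejected = detector.detectMarkers(frame)
if ids is not None:
    print("Detected tool markers:", ids.flatten().tolist())
        </preformat>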
        <p>Although tangible interactions —such as physical buttons— were excluded due to sterility concerns,
their virtual counterparts, namely holographic buttons operated via hand gesture recognition, were
considered. These preserve the intuitive familiarity of tactile input while maintaining a sterile,
contact-free interaction [21].</p>
        <p>To identify efective bimodal interaction strategies, we constructed a matrix cross-referencing all
compatible methods. Combinations that could interfere with surgical activities were discarded, and from
the remaining options, we selected those that aligned most closely with natural human behaviour, as
supported by prior studies. Based on this process, five interaction methods were selected and evaluated
in this preliminary study:
1. Head Orientation Selection (HOS): This technique replicates the functionality of the interface
evaluated in [20], with the goal of improving it. Horizontal head movements (left or right) are
mapped to navigation.
2. Holographic Touch: This method simulates the pressing of a physical button using virtual
interface elements. The user extends their hand toward a hologram, triggering the action through
proximity and motion—effectively mimicking a touch interaction without contact.
3. Gestures: A simple and low-effort gesture—raising the index finger to the left or right—is used
to indicate a selection. This gesture was chosen for its clarity, ease of execution, and minimal
physical or cognitive load, requiring no specific positioning relative to the interface.
4. Gaze Fixation: Users select an interface element by looking directly at it. Selection is confirmed
by maintaining gaze for a short period, providing a hands-free and intuitive interaction mode.
5. Gaze + Gestures (Bimodal): This combined method uses gaze for targeting an option, followed
by a confirming gesture (raising the index finger), enhancing accuracy and reducing the risk of
accidental selections; a dwell-and-confirm sketch of methods 4 and 5 follows this list.</p>
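        <p>To make the last two methods concrete, the following minimal Python sketch shows the dwell-and-confirm logic: gaze alone selects after a sustained fixation, while the bimodal variant uses gaze only for targeting and a gesture for confirmation. The dwell threshold and class names are illustrative assumptions, not the values used in the study.</p>
        <preformat>
import time

class GazeSelector:
    """Dwell-based gaze selection with optional gesture confirmation."""

    def __init__(self, dwell_seconds=1.0, require_gesture=False):
        self.dwell = dwell_seconds               # assumed dwell threshold
        self.require_gesture = require_gesture   # True = bimodal gaze + gesture
        self._target = None
        self._since = None

    def update(self, gazed_target, gesture_detected=False, now=None):
        """Return the selected target, or None if nothing is selected yet."""
        now = time.monotonic() if now is None else now
        if gazed_target != self._target:         # gaze moved: restart the timer
            self._target, self._since = gazed_target, now
            return None
        if self._target is None:
            return None
        if self.require_gesture:                 # bimodal: gesture confirms the gazed option
            return self._target if gesture_detected else None
        if now - self._since >= self.dwell:      # single-modal: dwell confirms
            return self._target
        return None
        </preformat>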
        <p>Preliminary tests were conducted to validate the compatibility of the interactions mentioned above
within a simulated surgical environment. Future work will involve more extensive testing under various
conditions, such as different glove colours, blood on gloves, tools in hand, and other relevant factors.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Experimental data collection</title>
        <sec id="sec-2-3-1">
          <title>2.3.1. Participants</title>
          <p>In the preliminary tests, ten voluntary participants were recruited to evaluate the basic compatibility
and functionality of the proposed interactions within a simulated environment. The sample included
individuals aged between 23 and 46 years, with heights ranging from 163 cm to 188 cm and arm lengths
from 50 cm to 60 cm. Six participants had corrected vision impairments and wore their own prescription
glasses during the evaluation. In terms of hand dominance, two participants were left-handed, while
the remaining eight were right-handed. These initial tests provided valuable insights, but future work
will expand the sample size to include a broader range of participants, particularly orthopedic surgeons.
By incorporating professionals with expertise in the surgical field, the next phase of testing aims to
evaluate the interactions’ effectiveness and usability in a more realistic context, ensuring the technology
is suitable for actual clinical use.</p>
        </sec>
        <sec id="sec-2-3-2">
          <title>2.3.2. Experiment and survey</title>
          <p>At the start of the evaluation, participants were briefed on the procedure and asked to provide
demographic, biometric, and technology usage information. The device was then fitted and calibrated to
ensure optimal alignment and comfort. Each interaction method was introduced and demonstrated,
after which participants were given time to familiarize themselves with a practice interface until they
felt confident using each method.</p>
          <p>Evaluations were conducted in a controlled environment designed to closely simulate surgical
conditions. Participants wore surgical gowns and gloves to replicate real-world factors that could
impact interaction performance. Each participant followed a structured protocol for testing the five
interaction methods. For each method, they completed a predefined task using the assigned interaction
technique, followed by a brief questionnaire to capture their perceptions and experiences. This cycle
was repeated for all five interaction methods. At the end of the session, participants completed a final
questionnaire to provide their overall impressions and formally conclude the evaluation.</p>
        </sec>
        <sec id="sec-2-3-3">
          <title>2.3.3. Task design</title>
          <p>Users were asked to perform a menu-driven navigation task designed to simulate the configuration of a
device during surgery —such as aligning the system with the preoperative plan or handling any step
requiring user input. This task reflects a phase in the surgical procedure where the surgeon’s hands are
typically free, and the primary focus is on decision-making. However, in future work, we aim to assess
whether these interaction methods could interfere with the surgical workflow during other phases.</p>
          <p>The task design was inspired by an existing commercial solution [22], with the graphical interface
remaining unchanged except for the icon representing the current interaction method. To minimize
learning effects and reduce bias, the menu text was replaced with ten pairs of generic placeholders
(“Option A” and “Option B”), with the order of options randomized across trials. Participants were
randomly assigned to consistently select either Option A or Option B throughout the evaluation.
Each participant completed all ten tasks using each of the five interaction methods, resulting in a
comprehensive and balanced assessment.</p>
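          <p>The following short Python sketch reproduces the trial generation described above: ten pairs of generic placeholders whose on-screen order is randomized per trial, with each participant randomly assigned a fixed target option. Function and variable names are illustrative.</p>
          <preformat>
import random

def make_trials(n_trials=10, seed=None):
    """Generate n_trials placeholder pairs with randomized order."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        pair = ["Option A", "Option B"]
        rng.shuffle(pair)               # randomize on-screen order per trial
        trials.append(tuple(pair))
    return trials

# Each participant is randomly assigned one target to select throughout.
assigned_target = random.choice(["Option A", "Option B"])
print(assigned_target, make_trials(seed=42))
          </preformat>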
        </sec>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Data analysis</title>
        <sec id="sec-2-4-1">
          <title>2.4.1. Objective data</title>
          <p>The data collected to evaluate the different interaction methods comprised both objective and subjective
measures. Objective data were obtained from measurable performance metrics during task execution,
while subjective data were gathered through questionnaires completed by participants after each test.
Performance: Task performance was evaluated using two key metrics: task completion time and error
rate. Completion time was recorded from the moment the participant activated the start button, with
timestamps logged at each placeholder transition. This enabled a detailed analysis of the interaction
process and helped identify any issues linked to specific interaction methods or interface screens.
Timing concluded once the participant had completed all ten selections. Errors were assessed using a
4-point Likert scale: 0 – correct execution, 1 – near miss, 2 – incorrect execution, and 3 – assistance
required [23].</p>
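          <p>A minimal logging sketch in Python, assuming one timestamp per placeholder transition and the 4-point error scale above; field and method names are hypothetical.</p>
          <preformat>
import time

class TaskLogger:
    """Records completion time and per-selection error codes for one task."""

    def __init__(self):
        self.start = None
        self.transitions = []   # elapsed time at each placeholder transition
        self.errors = []        # 0 correct, 1 near miss, 2 incorrect, 3 assistance

    def start_task(self):
        self.start = time.monotonic()

    def log_selection(self, error_code=0):
        self.transitions.append(time.monotonic() - self.start)
        self.errors.append(error_code)

    def completion_time(self):
        """Total time: timing ends at the tenth (last) selection."""
        return self.transitions[-1] if self.transitions else None
          </preformat>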
        </sec>
        <sec id="sec-2-4-2">
          <title>2.4.2. Subjective data</title>
          <p>Perceived Workload: After completing each task, participants assessed their cognitive workload using
the raw version of the NASA-TLX questionnaire. They rated their mental efort on a 0–20 scale,
providing a quantitative measure of the cognitive demands associated with each interaction method
[10, 24, 9].</p>
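          <p>For clarity, the raw (unweighted) NASA-TLX score used here is simply the mean of the six subscale ratings, as in this minimal sketch:</p>
          <preformat>
def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    """Raw NASA-TLX: plain mean of the six 0-20 subscale ratings."""
    scores = [mental, physical, temporal, performance, effort, frustration]
    return sum(scores) / len(scores)

# Example: a mid-range profile yields an overall workload of 10.0.
print(raw_tlx(12, 8, 14, 6, 10, 10))
          </preformat>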
          <p>Preferences Questionnaire: Immediately following each task, participants completed a brief two-item
questionnaire, rating both the overall effectiveness and perceived comfort of the interaction method on
a 0–10 scale [8].</p>
          <p>General Preferences: Upon completing all tasks, participants ranked their top three preferred
interaction methods. They were also invited to share open-ended feedback, highlighting any challenges they
encountered and offering general impressions of the different interaction techniques.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>Statistical analyses were performed to identify significant differences between the interaction methods.
Task completion times followed a normal distribution for all methods except HOS, as determined by
the Shapiro-Wilk test. Consequently, the Friedman test was used to analyze task completion time due
to the mixed distribution of results. Error rates exhibited a non-normal distribution and were also
analyzed using the Friedman test. The significance level was set at 5% (p &lt; 0.05), and p-values below
this threshold were considered statistically significant. For the preference questionnaire, which is not a
standardized instrument, internal consistency was assessed using Cronbach’s α to evaluate its reliability.</p>
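      <p>The analysis pipeline above can be reproduced with standard tools; the following Python sketch (SciPy/NumPy, with placeholder data in place of the study's measurements) shows the Shapiro-Wilk normality checks, the Friedman test across the five related samples, and Cronbach's α for the two-item questionnaire.</p>
      <preformat>
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
times = rng.normal(30.0, 5.0, size=(10, 5))  # placeholder: participants x methods

# 1. Shapiro-Wilk normality test per interaction method.
for j in range(times.shape[1]):
    _w, p = stats.shapiro(times[:, j])
    print(f"method {j}: Shapiro-Wilk p = {p:.3f}")

# 2. Friedman test across the five related samples (alpha = 0.05).
chi2, p = stats.friedmanchisquare(*[times[:, j] for j in range(times.shape[1])])
print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.4f}")

# 3. Cronbach's alpha for a participants-x-items rating matrix.
def cronbach_alpha(items):
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

ratings = rng.integers(0, 11, size=(10, 2)).astype(float)  # placeholder 0-10 ratings
print("Cronbach's alpha:", cronbach_alpha(ratings))
      </preformat>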
      <sec id="sec-3-1">
        <title>3.1. Performance</title>
        <p>The results of the task completion times reveal significant differences among the interaction methods (p
&lt; 0.001). Gaze fixation was the fastest method, followed by gaze + gestures (bimodal), which provided
a balance of speed and usability. On the other hand, gestures and head orientation selection (HOS)
took longer to complete, with holographic touch being the slowest method. While gaze fixation offered
the quickest task completion, it had some errors, which suggests that, while efficient, its reliability
may need to be improved. Gestures were the most reliable, with no errors or near errors, though they
required slightly more time compared to gaze-based methods.</p>
        <p>In terms of errors, gestures and head orientation selection (HOS) were error-free, demonstrating their
reliability. Gaze fixation and gaze + gestures (bimodal) had some errors and near errors, suggesting that
while these methods were effective in terms of task completion time, they introduced some challenges
in accuracy. Holographic touch showed moderate reliability with a few errors and near errors. Overall,
gestures provided the best balance of reliability and usability, whereas holographic touch and gaze
+ gestures might require further refinement to reduce errors and improve efficiency, especially in
environments where precision is crucial, such as surgery.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Perceived workload</title>
        <p>Table 1 presents the results of the NASA-TLX perceived workload analysis for five different interaction
methods, assessing mental demand, physical demand, temporal demand, performance, effort, frustration,
and the overall mean workload score. The interaction methods evaluated include Head Orientation
Selection (HOS), Holographic Touch, Gestures, Gaze Fixation, and Gaze + Gestures (Bimodal). Among
the methods, Gaze + Gestures (Bimodal) exhibited the highest overall workload (mean = 26.25), driven
by higher scores in mental demand, temporal demand, and frustration. In contrast, Head Orientation
Selection (HOS) had the lowest overall workload (mean = 16.08), with the highest physical demand
but relatively lower scores in other categories. Gestures ranked similarly with a low overall workload
(mean = 16.5), though it showed relatively moderate mental demand and frustration. Gaze Fixation
and Holographic Touch exhibited moderate workloads, with Gaze Fixation having a notably higher
temporal demand and Holographic Touch reflecting moderate frustration scores. These results provide
valuable insight into the perceived workload and usability of each method, highlighting differences in
how each interaction type places cognitive and physical demands on users.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Preferences questionnaire</title>
        <p>In the preference questionnaire (Cronbach’s α = 0.903), participants rated ‘Gaze Fixation’ the highest,
with a mean score of 8.3, which also corresponded to the method’s highest comfort score (mean =
8). The second highest-scoring method was ‘Head Orientation Selection’ (mean score = 8.2), though
it received a significantly lower comfort score (mean = 6.7). The remaining methods received the
following scores: ‘Gestures’ (mean score = 7.7; comfort score = 7.3), ‘Gaze + Gestures’ (mean score = 8;
comfort score = 7.56), and ‘Holographic Touch’ (mean score = 7; comfort score = 6.2).</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. General preferences</title>
        <p>Regarding participants’ rankings of interaction methods based on preference and comfort, gaze fixation
and gestures emerged as the top-performing techniques. Each was selected by 90% of participants as
one of their top three preferred methods, with gaze fixation also appearing in the top three comfort
rankings for 100% of participants. This indicates that these two methods were consistently rated
highly across users, both in terms of subjective preference and perceived ease of use. Notably, gaze
fixation was the most frequently selected as both the first-choice preference (30%) and for comfort (40%),
reinforcing its perceived efficiency and ease of use. Gestures, while slightly less favored as a first choice
(30%), had the highest third-choice preference (50%) and mirrored this pattern in the comfort rankings,
indicating consistent, if moderate, user satisfaction. Head Orientation Selection was selected by 70% of
participants in both categories, suggesting a solid, middle-ground experience. Conversely, gaze + gestures
received lower overall preference (40%) and comfort (30%) ratings, possibly due to its higher error rates
and perceived complexity. Holographic touch was the least favored in both preference and comfort
(10% each), highlighting clear user dissatisfaction, likely stemming from its usability limitations in the
experimental setup.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion and conclusions</title>
      <p>Although differences in error rates across interaction methods were not statistically significant, errors
remain a critical concern in surgical environments, where even minor mistakes can have serious
implications. As such, performance evaluations considered both task completion time and error count,
despite statistical significance emerging only for completion time. Among the tested methods, gaze
fixation was the fastest and received the highest preference ratings; however, it also produced the most
errors, indicating a need for greater precision or enhanced error mitigation strategies. Gaze + gestures
offered a balanced performance, combining moderate speed with favourable user evaluations, but its
relatively high error rate also calls for caution. In contrast, the gestures method, while slightly slower,
stood out as the only technique with zero errors, demonstrating strong reliability and potential for
high-risk environments.</p>
      <p>Head orientation selection (HOS) was similarly error-free and generally well-received. However,
results on comfort were contradictory—despite being rated as the least comfortable on average, it was
selected as the most comfortable method by some participants. This discrepancy suggests variability in
user preferences, possibly related to individual differences in perception or physical strain. Refinements
in tilt sensitivity could improve consistency, though any adjustments must be carefully evaluated to
avoid compromising detection accuracy. Holographic touch, although natively supported by the device,
consistently underperformed across measures. Users reported frustration due to difficulties in pressing
static interface elements, likely caused by a lack of depth cues and spatial flexibility.</p>
      <p>In terms of perceived workload, NASA-TLX results indicated that gaze-based methods were associated
with heightened temporal demand. Users reported feeling rushed, as visual confirmation of selection
increased the pressure to act swiftly. This aligns with the notion that while gaze interactions are
fast, they may induce cognitive strain. Introducing alternative confirmation cues may help mitigate
this issue. Physical demand was notably high in the head tilt method, with some users expressing
discomfort due to repetitive or exaggerated head movements, while others appreciated the lower angle
detection threshold. This variation suggests that ergonomic adjustments could improve usability. Lastly,
holographic touch elicited high frustration levels, primarily due to inconsistent recognition, reinforcing
the importance of responsive and adaptable interface design in constrained environments like surgery.</p>
      <p>Future work should aim to refine these preliminary observations through more ecologically valid
simulations that account for the physical constraints of surgery —particularly scenarios where the
surgeon’s hands are occupied with instruments or the patient’s body. Incorporating such conditions
could provide deeper insight into the practical usability and limitations of each interaction method
under real-world operating room demands.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>Ánxela Pérez Costa and Florian Michaud would like to acknowledge the support of the Galician
Government and the Ferrol Industrial Campus through the predoctoral research contract 2023/CP/209
and the postdoctoral research contract 2022/CP/048, respectively.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pokhrel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Alsadoon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. W.</given-names>
            <surname>Prasad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Paul</surname>
          </string-name>
          ,
          <article-title>A novel augmented reality (ar) scheme for knee replacement surgery by considering cutting error accuracy</article-title>
          ,
          <source>International Journal of Medical Robotics and Computer Assisted Surgery</source>
          <volume>15</volume>
          (
          <year>2019</year>
          ). doi:10.1002/rcs.1958.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S. P.</given-names>
            <surname>Canton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. N.</given-names>
            <surname>Austin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Steuer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Kass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Clayton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Cunningham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Scott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>LaBaze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. G.</given-names>
            <surname>Andrews</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Biehl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C. V.</given-names>
            <surname>Hogan</surname>
          </string-name>
          ,
          <article-title>Feasibility and usability of augmented reality technology in the orthopaedic operating room</article-title>
          ,
          <source>Current Reviews in Musculoskeletal Medicine</source>
          <volume>17</volume>
          (
          <year>2024</year>
          )
          <fpage>117</fpage>
          -
          <lpage>128</lpage>
          . doi:10.1007/s12178-024-09888-w.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Buchner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Buntins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kerres</surname>
          </string-name>
          ,
          <article-title>The impact of augmented reality on cognitive load and performance: A systematic review</article-title>
          ,
          <year>2022</year>
          . doi:10.1111/jcal.12617.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Cetin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Uppenkamp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weyhe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Muender</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Reinschluessel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Salzmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Uslar</surname>
          </string-name>
          ,
          <article-title>Measuring bound attention during complex liver surgery planning: Feasibility study</article-title>
          ,
          <source>JMIR Formative Research</source>
          <volume>9</volume>
          (
          <year>2025</year>
          ). doi:10.2196/62740.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Fujimoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Blumenkopf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. L.</given-names>
            <surname>Kontson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Benz</surname>
          </string-name>
          ,
          <article-title>Usability assessments for augmented reality head-mounted displays in open surgery and interventional procedures: A systematic review</article-title>
          ,
          <year>2023</year>
          . doi:10.3390/mti7050049.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Birlo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Edwards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Clarkson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Stoyanov</surname>
          </string-name>
          ,
          <article-title>Utility of optical see-through head mounted displays in augmented reality-assisted surgery: A systematic review</article-title>
          ,
          <source>Medical Image Analysis</source>
          <volume>77</volume>
          (
          <year>2022</year>
          ). URL: https://pubmed.ncbi.nlm.nih.gov/35168103/. doi:10.1016/J.MEDIA.2022.102361.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Ortega</surname>
          </string-name>
          ,
          <article-title>Understanding gesture and speech multimodal interactions for manipulation tasks in augmented reality using unconstrained elicitation</article-title>
          ,
          <source>Proceedings of the ACM on Human-Computer Interaction</source>
          <volume>4</volume>
          (
          <year>2020</year>
          ). doi:10.1145/3427330.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>