<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Design Science Approach to Developing Synchronized Tools for Mental Workload Assessment in Software Engineers⋆</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sameera Gamage</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pantea Keikhosrokiani</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Oulu</institution>
          ,
          <addr-line>P.O. Box 123, Oulu, 90014, Northern Ostrobothnia</addr-line>
          ,
          <country country="FI">Finland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
<p>Mental workload (MWL) critically affects productivity and well-being in high-cognitive fields such as software engineering. Traditional MWL assessments based solely on subjective or physiological measures often lack reliability and real-world relevance. This research presents a custom-built MWL data collection framework for software engineers, comprising a Task Simulation Desktop Application (TSDA) and a Self-Reporting Mobile Application (SRMA). TSDA simulates real-world software tasks, while SRMA collects NASA-TLX ratings synchronized with EEG signals. Following the Design Science Research (DSR) methodology, the tools underwent iterative development and pilot testing, leading to improvements in usability, data accuracy, and system reliability. The dual-source system allows for synchronized EEG and subjective workload evaluation, laying the foundation for future machine learning-based real-time workload prediction and adaptive cognitive load management. By supporting improved data collection, the developed tools offer scalable, structured, and domain-specific support for improving software engineers' performance and mental well-being.</p>
      </abstract>
      <kwd-group>
        <kwd>Mental Workload Assessment</kwd>
        <kwd>Software Engineering</kwd>
        <kwd>EEG-based Measurement</kwd>
        <kwd>Design Science Research</kwd>
        <kwd>Task Simulation Tools</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Mental workload (MWL) significantly influences task performance, cognitive efficiency, and well-being,
particularly in high-demand fields such as software engineering [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Accurate MWL assessment is
essential for enhancing productivity, reducing cognitive strain, and improving decision-making [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
Traditional MWL assessment methods like NASA-TLX and physiological measures such as EEG have
limitations due to subjective bias and sensitivity to external factors.
      </p>
      <p>
        Software engineering tasks (e.g., debugging, problem solving, multitasking) impose substantial
cognitive loads, yet current assessment tools fail to capture real-time workload variations specific to
this domain. Existing methods like primary and secondary task performance are influenced by skill
levels or introduce additional cognitive burdens, while physiological signals require careful contextual
interpretation [
        <xref ref-type="bibr" rid="ref3">3, 4, 5</xref>
        ].
      </p>
      <p>This research presents a custom-built MWL assessment framework integrating:
1. Task Simulation Desktop Application (TSDA) – replicates software engineering tasks in a
controlled setting.
2. Self-Reporting Mobile Application (SRMA) – collects NASA-TLX assessments alongside EEG
data.</p>
      <p>Using synchronized EEG and subjective reports, this dual-source system enables real-time MWL
validation tailored to software engineers. The research follows the Design Science Research (DSR)
methodology with iterative development and usability testing.</p>
      <p>The primary objectives of this research are the following.
1. Identify functional requirements for EEG-based MWL assessment tools in software engineering.
2. Analyze the relationship between EEG-derived metrics and NASA-TLX self-reports.
3. Develop secure and user-centered applications for synchronized MWL data collection.
4. Evaluate the effectiveness of the tools through empirical testing and user feedback.</p>
      <p>This research generates both procedural knowledge, in the form of a more practical framework for
synchronized MWL data collection, and empirical knowledge through validation of the developed tools
using real-world simulation data from software engineers. The development process also contributes to
the knowledge of designing by applying and adapting the DSR methodology to the development of
tools in the context of cognitive load measurement.</p>
      <p>The knowledge produced by this research is expected to support future research and applications in
real-time mental workload prediction, adaptive task scheduling, and cognitive-aware tool development
and utilization in software development environments. Specifically, validated tools and collected datasets
can serve as a foundation for machine learning-based cognitive load classifiers and interventions to
improve developer productivity and well-being.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>While multitasking to meet critical deadlines, software engineers (SEs) face elevated cognitive
load. High cognitive load affects productivity, code quality, and mental well-being, leading to burnout
and increased software defects. Tang et al. (2024) analyzed developer behaviors in validating
AI-generated code using eye tracking and found high mental demand in IDE-based workflows [6]. Nakasai
et al. (2024) measured the MWL of software developers using physiological indicators, providing
objective information on cognitive strain [7]. Astuti et al. (2024) explored the impact of techno-stress
on millennial work-life balance in digital work, focusing on its mediating role in employee well-being
and job satisfaction [8]. Schott et al. (2024) examined usability, cognitive load, and presence in Virtual
Reality (VR) environments, emphasizing how copresence and social interaction influence cognitive
demands in mixed reality applications [9]. Dourado et al. (2024) evaluated MWL in Industry 5.0 wearable
Augmented Reality (AR) systems, demonstrating high cognitive strain during software-related tasks
[10]. These studies emphasize the need for workload optimization strategies that improve developer
efficiency and well-being while reducing defects and cognitive overload in software engineering. It is
therefore important to identify the tasks that increase the mental workload of SEs.</p>
      <sec id="sec-2-1">
        <title>2.1. Mental Workload (MWL): Concepts and Definitions</title>
        <p>
          Mental workload (MWL) refers to the cognitive effort required to complete a task and is understood
through resource-based and subjective perspectives [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. The Multi-Resource Model (MRM) and the
Yerkes-Dodson law provide foundational models for understanding how MWL influences performance [
          <xref ref-type="bibr" rid="ref2">4, 2</xref>
          ].
Optimizing MWL is critical in high-demand domains such as software engineering, HCI, and aviation, where
poor workload management can impact productivity and safety [5]. Understanding MWL supports the
design of effective tools for workload evaluation in dynamic environments.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Traditional Techniques for Mental Workload Measurement</title>
        <p>
          MWL measurement techniques fall into three categories: primary task performance, secondary task
performance, and subjective rating scales. Each has specific strengths and limitations. Primary task
performance assesses accuracy, speed, or decision-making under load, but results may be influenced by
individual differences or ceiling/floor effects. Combined use with physiological and subjective metrics
can improve the precision of assessment [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Secondary task performance introduces a parallel task to
evaluate the reduction in performance under load. Although effective in detecting moderate workload
changes, it can increase total cognitive load and interfere with task execution [6]. Finally, subjective
rating scales, especially NASA-TLX, are widely used because of their
simplicity and applicability, despite susceptibility to bias. Table 1 compares popular subjective MWL
scales.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Design Science Research Method</title>
        <p>Design Science Research Methodology (DSRM) enables structured artifact development through iterative
cycles of relevance, rigor, and evaluation [16, 17]. Figure 1 outlines the six DSRM stages used in this
research.</p>
        <p>This research follows a combined Design Science and Action Research (DSAR) approach [18],
supporting the iterative development of the TSDA and SRMA tools through mixed-method
requirement gathering and empirical feedback. The method ensures that the research remains both
scientifically rigorous and practically applicable in real-world software engineering settings.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>Given the gaps illustrated in Section 2, the development of TSDA and SRMA followed a DSR methodology
to address the lack of synchronized domain-specific MWL assessment tools. Figure 2 presents the inputs
and outputs of each cycle of the DSR process in the artifact development process.</p>
      <sec id="sec-3-1">
        <title>3.1. Relevance Cycle</title>
        <p>This cycle focused on identifying problems related to MWL, workforce efficiency, and well-being in
software engineering. The literature highlighted the need for optimized workload distribution
and improved performance through accurate detection of MWL. Traditional methods were found
inadequate due to the dynamic nature of software engineering tasks. Consequently, this phase defined
the core components, such as task types, data collection approaches, the physiological signals involved,
and the contributing cognitive processes.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Design Cycle</title>
        <p>Key system requirements were defined, including the collection of performance metrics and controlled
task simulations to ensure unbiased data. The company environments were unsuitable due to operational
interference. Preliminary interviews with software engineers identified real-world task themes such as
deadlines, multitasking, and R&amp;D workflows. These insights informed the task simulation design and
the system functionality.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Rigor Cycle</title>
        <p>TSDA and SRMA were built on validated cognitive load models and usability principles. Existing research
on EEG-based estimation and NASA-TLX was integrated for development guidance. Continuous EEG
data required SEs to perform non-overlapping tasks to ensure traceability. Empirical testing and iterative
refinement ensured scientific validity and practical utility. EEG fluctuations were planned to be mapped
to individual tasks to support accurate MWL interpretation.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Evaluation Cycle</title>
        <p>This phase involved systematic testing and refinement of TSDA and SRMA. Effectiveness was
measured through correlations between EEG signals and NASA-TLX scores, as well as task performance
metrics (e.g., accuracy, time). Usability testing improved interface design and task integration. Visual
comparisons of data sets validated tool interoperability. Participant feedback informed refinements,
enhancing system stability and data reliability. Table 2 summarizes the stages and their corresponding
DSRM cycles.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Research Implementation</title>
      <sec id="sec-4-1">
        <title>4.1. Artifact Development phase</title>
        <p>TSDA and SRMA were developed using Agile methodology in iterative sprints. The interviews guided
task realism, UI design, and system requirements. The features of both applications are detailed in Table
3. TSDA replicates real-world SE tasks; SRMA captures NASA-TLX self-reports.</p>
        <p>The non-functional requirements of the developed tools focused on performance, usability, security,
and compatibility. The TSDA ensures fast response times, smooth task simulations, and an intuitive UI
with clear instructions. The SRMA features a user-friendly interface, optimized for Android with quick
interaction and low battery consumption. Both tools prioritize secure data handling through Firebase
integration, encryption, and access control. Furthermore, the applications are designed to be compatible
across platforms, Windows, macOS, and various screen sizes, ensuring broad accessibility and user
reliability.</p>
        <p>Table 3 (summary): the Task Simulation Desktop Application (TSDA) provides task simulation and
data management with clear task instructions and secure cloud upload; the Self-Reporting Mobile
Application (SRMA) integrates the NASA-TLX form with six visual analog scale dimensions (Mental,
Physical, and Temporal Demand, Performance, Effort, Frustration), slider input, upload success/failure
messages, treatment of self-reporting as a secondary task capturing cognitive load, and data
anonymization for privacy.</p>
        <p>Table 4 outlines current limitations in real-world simulation and self-report accuracy.</p>
        <p>Table 4 (summary of limitations): the simulated tasks may not perfectly replicate real-world
experiences in the participants' chosen career paths; SRMA requires users to understand the meaning of
each workload dimension; users may perform with lower stress in controlled environments, since they
are not performing actual tasks; student participants may not fully reflect real-world cognitive demands;
the tools may not capture the full user experience; data transmission requires secure handling; and
mobile UI space is limited.</p>
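        <p>The secure cloud upload and data anonymization noted in the feature summary above can be illustrated with a short sketch. This is our own illustration under stated assumptions (the salting scheme, study salt, and field names are hypothetical, not the authors' implementation): participant IDs are replaced by a salted SHA-256 pseudonym before records leave the device, so the cloud store never sees the raw identifier.</p>
        <preformat>
```python
# Sketch of an anonymization step before upload (illustrative, not the
# actual TSDA/SRMA code). The salt and ID format are hypothetical.
import hashlib

STUDY_SALT = "mwl-study-2026"  # hypothetical per-study salt

def anonymize_id(participant_id):
    """Return a stable, non-reversible pseudonym for a participant ID."""
    digest = hashlib.sha256((STUDY_SALT + participant_id).encode("utf-8")).hexdigest()
    return digest[:16]  # shortened pseudonym, sufficient for a study cohort

# The record sent to the cloud store carries only the pseudonym.
record = {"participant": anonymize_id("SE-042"), "task": "code_review", "tlx_raw": 56.7}
```
        </preformat>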
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Implementation of the artifact development</title>
        <p>TSDA includes subtasks that simulate a complete sprint cycle, from planning to deployment and learning.
SRMA includes the NASA-TLX form to capture the perceived workload. The following steps reflect a
standard sprint workflow. It begins with sprint planning to define goals, review and prioritize user
stories, and estimate effort. User story refinement follows to clarify requirements, decompose stories
into tasks, and set acceptance criteria. During development, features are coded using best practices,
version control is maintained, and pull requests are submitted. Code review involves peer reviews,
addressing feedback, and improving code quality. Testing includes unit, integration, and regression
tests to ensure reliability. During documentation, technical documentation and usage guidelines are
updated. The sprint review demonstrates features, gathers feedback, and identifies improvements.
Finally, continuous learning promotes exploring new tools and practices and sharing knowledge with the
team.</p>
        <p>A similar set of tasks simulated a typical high-workload scenario in software development, focusing
on implementing a user log-in feature for a website in TSDA.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Testing Validation and Refinements</title>
        <p>Usability Testing, Qualitative Feedback, and Refinement: An iterative feedback process was used
for refinement. Think-Aloud and post-test questionnaires helped identify usability issues. Adjustments
included improving UI transitions, increasing task realism, and ensuring data consistency. The key
feedback and actions are shown in Table 5.</p>
        <p>Pilot Testing and System Performance Evaluation: a three-phase pilot study confirmed data
synchronization, ease of workload entry, and log accuracy. This ensured that task timestamps were
aligned with MWL ratings. The post-test refinements addressed stability, clarity, and response times.</p>
        <p>Final Validation and Data Collection Workflow: to ensure that TSDA and SRMA collect and
synchronize MWL data effectively, a structured validation process was conducted before large-scale
deployment. This phase focused on verifying the accuracy of the data synchronization and logging
processes. The validation process ensured that the recorded workload values were correctly stored in
Firebase and cross-checked with task execution logs to prevent inconsistencies. Further analysis was
performed to validate data synchronization and system reliability. The system logs were reviewed to
confirm that no workload submission attempts were lost, corrupted, or unsynchronized. By ensuring
that the workload entries aligned precisely with task timestamps, the validation process reinforced data
integrity and ensured that the developed tools provide a reliable foundation for future MWL analysis
and machine learning-based workload predictions.</p>
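        <p>The cross-check between task logs and self-report timestamps described above can be sketched as follows. This is a minimal illustration under stated assumptions: the field names (task_id, end_ts, ts) and the 30-second tolerance window are our own, not values from the study.</p>
        <preformat>
```python
# Sketch of the timestamp-alignment validation (illustrative field names).
from datetime import datetime, timedelta

TOLERANCE = timedelta(seconds=30)  # hypothetical alignment window

def validate_sync(task_log, reports):
    """Return (task_id, reason) pairs for missing or misaligned self-reports."""
    issues = []
    by_task = {r["task_id"]: r for r in reports}
    for task in task_log:
        report = by_task.get(task["task_id"])
        if report is None:
            issues.append((task["task_id"], "missing report"))
        elif abs(report["ts"] - task["end_ts"]) > TOLERANCE:
            issues.append((task["task_id"], "timestamp misaligned"))
    return issues

# Toy session: two tasks, only one self-report submitted.
t0 = datetime(2025, 5, 1, 10, 0, 0)
tasks = [{"task_id": "planning", "end_ts": t0},
         {"task_id": "review", "end_ts": t0 + timedelta(minutes=20)}]
reports = [{"task_id": "planning", "ts": t0 + timedelta(seconds=12)}]
print(validate_sync(tasks, reports))  # the "review" task is flagged
```
        </preformat>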
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>In this research, software development processes were followed to develop specific and relevant
applications for collecting data for MWL analysis of employees in an organization. Software engineers
were selected as the target profession because little research has been conducted on SE MWL analysis. Two
applications were developed to collect data while engineers performed their usual tasks. Primary and
secondary tasks were considered to collect MWL data. Task performance data, namely
1. task performance accuracy and task completion time (collected from TSDA), and
2. self-reported MWL as secondary data (collected from SRMA),
were collected while recording cognitive load behavior using the Emotiv wireless EEG
sensing device.</p>
      <sec id="sec-5-1">
        <title>5.1. Task simulation desktop application</title>
        <p>TSDA was designed to simulate real-world tasks faced by software engineers in a controlled way. It
was developed during the Design and Development phase of the DSR methodology, based on the needs
identified in the Relevance Cycle and refined through feedback in the Evaluation Cycle.</p>
        <p>Key functionalities are as follows.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Self reporting mobile application</title>
        <p>SRMA was developed to collect subjective MWL data using NASA-TLX ratings. It works alongside TSDA
to capture self-reported feedback after each sub-task, allowing synchronized EEG analysis. The SRMA
includes three key screens (Figure 3b), each designed iteratively and aligned with the DSR methodology.
Functionality and DSR Alignment are described in the following.</p>
        <p>1. Landing Page: Users enter their unique ID (the same as in TSDA), ensuring data linkage across both
tools, and tap the Software Engineer button. Developed during the Design Cycle based on
stakeholder interviews, emphasizing simple onboarding and real-world applicability.
2. Feedback Page: Participants rate six NASA-TLX dimensions using sliders. Once done, the data are
uploaded to Firebase. Designed as a secondary task in the Rigor Cycle, it ensures low cognitive
interruption and real-time feedback collection.
3. Success Page: After submission, users receive a confirmation message and are returned to the
home page. Error handling is also included. This page was refined during the Evaluation Cycle
to improve reliability and system stability.</p>
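        <p>The Feedback and Success page behavior described above can be sketched as a small submit routine. This is our own illustration, not the SRMA source: the Firebase write is abstracted behind an uploader callable, and the retry count, messages, and payload shape are hypothetical.</p>
        <preformat>
```python
# Sketch of the SRMA submit flow with error handling (illustrative only).
def submit_tlx(user_id, ratings, uploader, retries=2):
    """Upload a NASA-TLX rating; return the message shown to the user."""
    payload = {"user": user_id, "ratings": ratings}
    for attempt in range(retries + 1):
        try:
            uploader(payload)           # stand-in for the Firebase write
            return "Upload successful"  # shown on the Success page
        except ConnectionError:
            continue                    # transient failure: retry
    return "Upload failed - please try again"

# Toy uploader that records payloads instead of contacting a backend.
sent = []
message = submit_tlx("SE-042", {"mental": 80}, sent.append)
print(message)  # Upload successful
```
        </preformat>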
        <p>The complete source code and the interfaces of all developed data collection tools are available on
GitHub for further reference.1</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Comparative Analysis of MWL Tools</title>
        <p>Based on the latest literature and research findings, Table 6 compares traditional and modern MWL
assessment tools in terms of accuracy, usability, real-world applicability, and suitability for software
engineering tasks. The final row shows the integrated system of this research (TSDA + SRMA), which
uniquely combines subjective (NASA-TLX) and objective (EEG) data within realistic task simulations to
address these gaps.
1 The complete implementation can be found at:
• TSDA Development: https://github.com/Sameera-G/TreplicatorEEG
• SRMA Development: https://github.com/Sameera-G/nasa_tlx_eeg_research
Figure 3: (a) page for activities related to refinement of user stories; (b) pages of SRMA to mark and
upload self-reported MWL values.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>During the development process of the two applications, TSDA and SRMA, we followed a structured
and iterative approach using the DSR method. Both tools were developed based on identified needs
in MWL research in software engineering and refined using participant feedback. Figure 4 shows the
tools developed in use during the experiment.</p>
      <p>The TSDA was developed to simulate real software engineering tasks such as prioritizing user stories,
code review, and testing, reflecting real sprint workflows. It records task accuracy and completion time,
which are automatically uploaded to Firebase, helping to evaluate mental load at each stage. The SRMA
app supports TSDA by collecting NASA-TLX ratings after each task. Together, the tools sync EEG
signals, performance data, and self-reported workload. We found that tasks with more errors and longer
durations in TSDA often had higher ratings in SRMA, showing strong consistency between objective
and subjective data.</p>
      <p>The entire data pipeline, including TSDA, SRMA, and EEG, performed reliably. Firebase recorded all
entries without data loss and the task logs matched the EEG and self-report timestamps, confirming the
stability of the system.</p>
      <sec id="sec-6-1">
        <title>6.1. Comparison with Previous Methods</title>
        <p>Traditional MWL assessment tools such as NASA-TLX and SWAT rely only on self-reports. While
they are easy to use, they often suffer from user bias and memory-related errors. Other physiological
methods like heart rate variability and skin response are more objective but are not specific to mental
effort and can be affected by unrelated stressors.</p>
        <p>EEG is more specific to cognitive load, but past studies often used it in isolation. Our tools combine
EEG with NASA-TLX in a synchronized way and link both to realistic task activities. This addresses a
gap in earlier research. For example, recent work by Nakasai et al. (2024) used nasal skin temperature but
did not connect it to software task structure. Our tools are specifically built for software engineers and
provide task-by-task feedback using both physiological and subjective indicators. User feedback helped
us improve both TSDA and SRMA. Participants found the interfaces easy to use, but they suggested
adding more realistic tasks and smoother transitions. Based on this, we added visual dashboards,
enhanced task complexity, and confirmed task data upload after each session. We also added error
handling and success confirmation in SRMA to ensure a smooth user experience.</p>
        <p>Real-time validation between EEG and NASA-TLX helped identify inconsistencies. For instance, if
a participant reported low workload after a difficult task that showed high EEG activity, the system
flagged it for further review. This adds reliability to the research and reduces the influence of random
responses.</p>
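        <p>The inconsistency check described above can be sketched as a simple rule. This is a minimal illustration under stated assumptions: the normalized EEG workload index, the thresholds, and the flag messages are all our own, not the system's actual logic.</p>
        <preformat>
```python
# Sketch of EEG vs. self-report consistency flagging (hypothetical thresholds).
def flag_inconsistent(eeg_index, tlx_score, eeg_hi=0.7, eeg_lo=0.3, tlx_lo=30, tlx_hi=70):
    """eeg_index in [0, 1]; tlx_score in [0, 100]; return a flag or None."""
    if eeg_index >= eeg_hi and tlx_lo >= tlx_score:
        return "review: high EEG, low self-report"
    if eeg_lo >= eeg_index and tlx_score >= tlx_hi:
        return "review: low EEG, high self-report"
    return None  # readings agree; no review needed

print(flag_inconsistent(0.85, 20))  # review: high EEG, low self-report
```
        </preformat>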
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Real-World Application and Limitations</title>
        <p>The tools have strong real-world potential for software teams to detect when developers are under high
cognitive load and adjust task assignments accordingly. This can help reduce burnout and improve
productivity. However, limitations still exist. TSDA simulates tasks in a controlled lab setting, which
may not include real-world distractions like team communication, meetings, or interruptions. Also,
some participants may still misunderstand NASA-TLX scales despite instructions. These limitations
will be addressed in future iterations.</p>
      </sec>
      <sec id="sec-6-3">
        <title>6.3. Future Research and Integration with Machine Learning</title>
        <p>Although this phase focused on tool development, future work will use machine learning to analyze
patterns in EEG and SRMA data. Deep learning models can be trained to recognize high workload
states in real time and suggest break times or workload rebalancing automatically. This could lead
to intelligent systems that adapt to individual cognitive states and improve mental well-being in the
software development process.</p>
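        <p>As a sketch of this future direction only: EEG feature epochs could be labeled by a NASA-TLX threshold and fed to a classifier. Everything below is our own toy illustration (the two band-power features, the threshold, the nearest-centroid model, and the data are hypothetical); a real system would use richer features and a learned deep model as the text suggests.</p>
        <preformat>
```python
# Toy workload classifier: label epochs by TLX threshold, fit nearest centroids.
import math

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def fit(epochs, tlx_scores, threshold=50):
    """Label each epoch 'high' if its TLX score exceeds the threshold."""
    high = [e for e, s in zip(epochs, tlx_scores) if s > threshold]
    low = [e for e, s in zip(epochs, tlx_scores) if threshold >= s]
    return {"high": centroid(high), "low": centroid(low)}

def predict(model, epoch):
    """Assign the label of the nearest centroid."""
    dist = {label: math.dist(c, epoch) for label, c in model.items()}
    return min(dist, key=dist.get)

# Toy data: [theta_power, alpha_power] per epoch (hypothetical features).
epochs = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.8], [0.3, 0.9]]
tlx = [75, 80, 20, 25]
model = fit(epochs, tlx)
print(predict(model, [0.85, 0.25]))  # high
```
        </preformat>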
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Ethical Considerations &amp; Limitations</title>
      <p>The research was conducted in accordance with ethical principles for research involving professionals,
following the guidelines of the Human Sciences Ethics Committee of
the University of Oulu. All participants were informed about the purpose of the research and gave
their voluntary consent. The research addressed key ethical and methodological considerations.
Participant privacy was protected by avoiding the collection of personal data and ensuring
informed consent. Although the simulated tasks reflected real software engineering workflows, certain
workplace factors such as distractions and collaboration were not fully replicated. All data handling
was in accordance with GDPR and ethical research guidelines.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusion</title>
      <p>This research focused on developing and validating two custom-built tools, the TSDA and the SRMA, to
help assess MWL. Following the DSR methodology, the research aimed to provide a reliable method that
captures both physiological data from EEG signals and subjective self-reported ratings using NASA-TLX.
The overall goal was to create a synchronized system for real-time MWL measurement that captures the
cognitive demands of software development tasks.</p>
      <p>The main objectives were to define the technical requirements, study the link between EEG signals
and self-reports, build synchronized applications, and evaluate them through user testing. These goals
were achieved through iterative development and both qualitative and quantitative testing.</p>
      <p>TSDA simulated real-world software tasks, tracking accuracy and completion time, while SRMA
collected NASA-TLX ratings after each task. The tools were connected through Firebase for synchronized
data collection. The tests showed a strong consistency between the EEG and self-reported data, which
confirms the reliability of the dual source system. User feedback led to improvements in usability, task
realism, and data flow stability. This research provides a validated system for measuring MWL using
physiological and subjective data in a real-world context. Future work will focus on using machine
learning to automate workload classification and support smart workload management, aiming to boost
developer well-being and performance.</p>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgments</title>
      <p>The authors extend their gratitude to all respondents, supervisors, and experts who participated in
the usability tests and feedback sessions. The authors sincerely thank Anastasiia Voitenko for her
contribution to the research, participation in the testing process, and kind permission to include her
photograph in the publication.</p>
    </sec>
    <sec id="sec-10">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used CHATGPT-4 to check grammar and spelling.
After using this tool, the authors reviewed and edited the content as needed and take full responsibility
for the content of publication.
[4] I. Teoh Yi Zhe, P. Keikhosrokiani, Knowledge workers mental workload prediction using optimised
elanfis, Applied Intelligence 51 (2021) 2406–2430. doi: 10.1007/s10489-020-01928-5.
[5] P. Keikhosrokiani, M. Isomursu, O. Korhonen, T. T. Sean, Intelligent mental workload mobile
application in personalized digital care pathway for lifestyle chronic disease, in: M. Särestöniemi,
et al. (Eds.), Digital Health and Wireless Solutions. NCDHWS 2024, volume 2083 of
Communications in Computer and Information Science, Springer, Cham, 2024. URL: https://doi.org/10.1007/
978-3-031-59080-1_24. doi:10.1007/978-3-031-59080-1_24.
[6] N. Tang, M. Chen, Z. Ning, A. Bansal, Developer behaviors in validating and repairing llm-generated
code using ide and eye tracking, in: 2024 IEEE Symposium on Visual Languages and
HumanCentric Computing (VL/HCC), IEEE, 2024, pp. 40–46. doi:10.1109/VL/HCC60511.2024.00015.
[7] K. Nakasai, S. Komeda, M. Tsunoda, M. Kashima, Measuring mental workload of software
developers based on nasal skin temperature, IEICE TRANSACTIONS on Information and Systems
107 (2024) 1444–1448.
[8] S. Astuti, P. Nurtantio, K. K.S., Exploring the impact of technostress on millennial work-life
balance in digital work: The mediating role of work well-being, IEEE Transactions on Engineering
Management (2024). doi:10.1109/10762733.2024.
[9] D. Schott, M. Kunz, F. Heinrich, J. Mandel, Stand alone or stay together: An in-situ experiment of
mixed-reality applications in embryonic anatomy education, ACM Transactions on Virtual Reality
Software and Technology (2024). doi:10.1145/3641825.3687706.
[10] I. L. Dourado, L. C. Moreira, M. Kaufman, B. Horan, A wearable ar system for maintenance in
industry 5.0: Assessing mental workload and usability, in: 2024 29th International Conference on
Automation and Computing (ICAC), 2024, pp. 1–6. doi:10.1109/ICAC61394.2024.10718838.
[11] S. G. Hart, L. E. Staveland, Development of nasa-tlx (task load index): Results of
empirical and theoretical research, in: P. A. Hancock, N. Meshkati (Eds.), Human Mental
Workload, volume 52 of Advances in Psychology, North-Holland, 1988, pp. 139–183. URL:
https://www.sciencedirect.com/science/article/pii/S0166411508623869. doi:https://doi.org/
10.1016/S0166-4115(08)62386-9.
[12] S. Rubio, E. Díaz, J. Martín, J. M. Puente, Evaluation of subjective mental workload: A comparison
of swat, nasa-tlx, and workload profile methods, Applied Psychology 53 (2004) 61–86. doi: https:
//doi.org/10.1111/j.1464-0597.2004.00161.x.
[13] T. Kosch, J. Karolus, J. Zagermann, H. Reiterer, A. Schmidt, P. W. Woźniak, A survey on measuring
cognitive workload in human-computer interaction, ACM Comput. Surv. 55 (2023). URL: https:
//doi.org/10.1145/3582272. doi:10.1145/3582272.
[14] R. Widiastuti, V. R. B. Kurniawan, Kusmendar, E. Nurhayati, R. A. Putra, Implementation of the
cardiovascular load and rating scale mental efort to reduce the bakery worker’s workload, AIP
Conference Proceedings 2590 (2023) 040004. URL: https://doi.org/10.1063/5.0106674. doi:10.1063/
5.0106674.
[15] H. Mansikka, K. Virtanen, D. Harris, Comparison of NASA-TLX scale, modified Cooper–Harper scale
and mean inter-beat interval as measures of pilot mental workload during simulated flight tasks,
Ergonomics 62 (2019) 246–254. URL: https://doi.org/10.1080/00140139.2018.1471159.
doi:10.1080/00140139.2018.1471159. PMID: 29708054.
[16] A. R. Hevner, S. T. March, J. Park, S. Ram, Design science in information systems research, MIS
Quarterly 28 (2004) 75–105. URL: https://doi.org/10.2307/25148625. doi:10.2307/25148625.
[17] K. Peffers, T. Tuunanen, M. A. Rothenberger, S. Chatterjee, A design science research methodology
for information systems research, Journal of Management Information Systems 24 (2007) 45–77.
URL: https://doi.org/10.2753/MIS0742-1222240302. doi:10.2753/MIS0742-1222240302.
[18] V. de Castro, M. L. Martín-Peña, E. M. Martínez, M. Salgado, Combining action research with design
science as a qualitative research methodology: An application to service (operations) management
research, International Journal of Qualitative Methods 24 (2025).
URL: https://doi.org/10.1177/16094069241312018. doi:10.1177/16094069241312018.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Munoz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Araque</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. F.</given-names>
            <surname>Sánchez-Rada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. A.</given-names>
            <surname>Iglesias</surname>
          </string-name>
          ,
          <article-title>An emotion aware task automation architecture based on semantic technologies for smart offices</article-title>
          ,
          <source>Sensors</source>
          <volume>18</volume>
          (
          <year>2018</year>
          )
          <fpage>1499</fpage>
          . doi:10.3390/s18051499.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Kamari Ghanavati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Choobineh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Keshavarzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Nasihatkon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Jafari Roodbandi</surname>
          </string-name>
          ,
          <article-title>Assessment of mental workload and its association with work ability in control room operators</article-title>
          ,
          <source>La Medicina del Lavoro</source>
          <volume>110</volume>
          (
          <year>2019</year>
          )
          <fpage>389</fpage>
          -
          <lpage>397</lpage>
          . doi:10.23749/mdl.v110i5.8115.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. G.</given-names>
            <surname>Duffy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Measurement and identification of mental workload during simulated computer tasks with multimodal methods and machine learning</article-title>
          ,
          <source>Ergonomics</source>
          <volume>63</volume>
          (
          <year>2020</year>
          )
          <fpage>896</fpage>
          -
          <lpage>908</lpage>
          . doi:10.1080/00140139.2020.1759699.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>