Meaningful Feedback from Wearable Sensor Data to Train Psychomotor Skills

Gianluca Romano
DIPF Leibniz Institute for Research and Information in Education, Rostocker Str. 6, 60323 Frankfurt on the Main, Germany
romano@dipf.de

MILeS 22: Proceedings of the Second International Workshop on Multimodal Immersive Learning Systems, September 13, 2022, Toulouse, France

Abstract
Learning psychomotor skills requires feedback to improve and to gain insight into performance. However, providing feedback is not trivial. Every learner is different, and the same feedback might not work for everyone. The workshop aims to make participants aware of the problematic transition from analyzed wearable sensor data to meaningful feedback. Thus, the participants will become more familiar with wearable sensor data and directly experience how learners might want to receive feedback that they deem meaningful.

Keywords
Psychomotor, Wearable Sensor Data, Feedback

1. Introduction

Learning psychomotor skills requires feedback to improve and to gain insight into performance. However, providing feedback is not trivial. Every learner is different and the same feedback might not work for everyone, i.e., effective feedback needs to be specific [1]. Basing feedback solely on a theoretical account of how to perform an exercise might not be optimal: in practice, more proficient athletes deviate from textbook recommendations and acknowledge that there are multiple correct ways to perform an exercise. This indicates that feedback depends on the proficiency of the learner. Besides proficiency, feedback also depends on the psychomotor skill itself, the learning objective, short-, mid-, and long-term goals, and the focus of the training session. Additionally, feedback is not one-dimensional; most likely, a set of feedback items is provided rather than a single one.

Wearable sensors, such as Inertial Measurement Units (IMUs), can be used to capture a learner's performance during exercises. However, it is not clear how to bridge the gap from the model that makes sense of the data to meaningful feedback. Meaningful feedback is interpretable by the learner such that they can take action; hence, it has an inherent call-to-action component. The workshop aims to make participants aware of the problematic transition from analyzed wearable sensor data to meaningful feedback. Thus, the participants will become more familiar with wearable sensor data and experience how learners might want to receive feedback that they deem meaningful.

2. Technical Background

This section provides a technical background on the field of Human Action Recognition (HAR). HAR is related to providing meaningful feedback on psychomotor skills because psychomotor skills can be interpreted as human actions, more precisely as motion units [2], action units/atoms [3], motion primitives [4, 5], or fine- or coarse-grained actions [6]. Thus, depending on its granularity, an action constitutes a psychomotor skill. Therefore, recognizing actions is essential to gain insights into psychomotor skills and to give feedback to a learner.

IMUs are among the wearable sensors used in HAR research. In [7, 8, 9], the IMU data are used to extract meaningful features; a minimal illustrative sketch of such window-based feature extraction follows below.
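The following sketch is purely illustrative and not the pipeline of [7, 8, 9]: the window length, step size, and feature set are assumptions chosen for demonstration. It computes simple hand-crafted features over sliding windows of a tri-axial accelerometer signal in Python.

# Illustrative sketch only: window length, step size, and features are
# assumptions, not the feature sets used in the cited works.
import numpy as np

def window_features(acc: np.ndarray, window: int = 100, step: int = 50) -> np.ndarray:
    """acc: (n_samples, 3) accelerometer data; returns one feature row per window."""
    rows = []
    for start in range(0, len(acc) - window + 1, step):
        seg = acc[start:start + window]          # one window of samples
        magnitude = np.linalg.norm(seg, axis=1)  # combined acceleration magnitude
        rows.append(np.concatenate([
            seg.mean(axis=0),                    # mean per axis
            seg.std(axis=0),                     # variability per axis
            [magnitude.max(), magnitude.min()],  # peak and trough of the magnitude
        ]))
    return np.array(rows)

# Example with synthetic data: 10 seconds at a hypothetical 100 Hz sampling rate.
acc = np.random.randn(1000, 3)
features = window_features(acc)
print(features.shape)  # (19, 8): 19 windows, 8 features each

Each row of the returned matrix would serve as a feature vector for one window, which could then be fed into a classifier or compared against reference executions of an exercise.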
While in [7, 8] features are extracted manually, [9] generates features automatically with an RNN encoder-decoder model. In [7], features are generated in a meaningful way to create indicators that discriminate between different runners. In [8], features can be learned directly from the data without domain knowledge; the probabilistic aspect accounts for uncertainty in motion, which is helpful for features that describe trajectories. Other works directly use the raw data from IMUs [10, 11]. In [10], two IMUs are used to classify whether a worker has fastened the safety hook. In [11], IMUs are placed into goalkeeper gloves to extract insights from goalkeeper kinematics; these insights can be used in analyses to optimize a goalkeeper's performance. In [12], virtual IMUs are used to build a dataset, with the data for the virtual IMUs synthesized from the SMPL human body model.

IMUs are already part of our everyday life: they are built into smartphones and smartwatches. The authors of [7] attached a smartphone to the runner's leg. In [13], the authors make use of the IMU in smartwatches to recognize fine-grained hand activity such as washing hands, brushing teeth, or typing on a keyboard. Their proposed system has three main steps: (i) collecting the data with smartwatches, (ii) processing the signal with a Fast Fourier Transform (FFT), and (iii) using a variant of the VGG-16 model to perform the actual action recognition.

3. Feedback

Feedback has a positive effect on learning psychomotor skills [14, 15]. In Multimodal Learning Analytics, feedback can be categorized into different modalities, such as visual or aural [16]. However, the modalities only describe how feedback is transmitted, not its content. In fact, feedback needs to be tailored to the individual learner and be specific [1]. Also, feedback might address different stages of the learner's performance; one might argue, for instance, that a learner should not perform squats until they can comfortably hold a deep squat position. Feedback can also be given with respect to the actually performed exercise; for example, the feedback could address joint positions and angles, more precisely the bending of the knees. Feedback could also be projected into the future, aiming at what needs to be achieved to reach a goal. In this sense, a weightlifter could receive the feedback to strengthen the core for better stability in order to achieve a personal goal [17]. A minimal sketch of how such exercise-specific feedback could be phrased from an estimated joint angle is given below.
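The following sketch is again purely illustrative: the angle thresholds and feedback messages are assumptions for demonstration, not recommendations from the cited literature. It maps an estimated knee angle at the lowest point of a squat to a textual feedback item.

# Illustrative sketch only: thresholds and messages are assumptions, not values
# from the cited literature. The knee angle is assumed to be estimated from IMU
# or pose data, with 180 degrees corresponding to a fully extended leg.
def squat_depth_feedback(min_knee_angle_deg: float) -> str:
    if min_knee_angle_deg <= 70.0:
        return "Deep squat reached - focus on keeping the torso upright."
    if min_knee_angle_deg <= 100.0:
        return "Almost at parallel - try to sit slightly deeper while staying balanced."
    return "The squat is shallow - bend the knees further before standing back up."

print(squat_depth_feedback(95.0))  # example usage with a hypothetical angle estimate

In practice, such rules would have to be adapted to the learner's proficiency and goals, which is exactly the gap between analyzed sensor data and meaningful feedback that the workshop addresses.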
4. Workshop Expectation

The workshop intends for participants to learn that deriving meaningful feedback from wearable sensor data is not trivial. Participants will become familiar with wearable sensor data and reflect on what feedback they expect or hope to gain from it. Wearable sensor data can be interpreted openly, or targeted questions can be asked, for example: "Given plots of wearable sensor data and a selection of possible feedback, what feedback would you prefer?"

References

[1] J. C. Archer, State of the science in health professional education: effective feedback, Medical Education 44 (2010) 101–108. URL: https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1365-2923.2009.03546.x. doi:10.1111/j.1365-2923.2009.03546.x.
[2] J.-W. Cui, Z.-G. Li, H. Du, B.-Y. Yan, P.-D. Lu, Recognition of upper limb action intention based on IMU, Sensors 22 (2022). URL: https://www.mdpi.com/1424-8220/22/5/1954. doi:10.3390/s22051954.
[3] G. Yao, T. Lei, X. Liu, P. Jiang, Temporal modeling on multi-temporal-scale spatiotemporal atoms for action recognition, Applied Sciences 8 (2018). URL: https://www.mdpi.com/2076-3417/8/10/1835. doi:10.3390/app8101835.
[4] M. Zhang, A. A. Sawchuk, Motion primitive-based human activity recognition using a bag-of-features approach, in: Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, IHI '12, Association for Computing Machinery, New York, NY, USA, 2012, pp. 631–640. URL: https://doi.org/10.1145/2110363.2110433. doi:10.1145/2110363.2110433.
[5] H. Xue, R. Herzog, T. M. Berger, T. Bäumer, A. Weissbach, E. Rueckert, Using probabilistic movement primitives in analyzing human motion differences under transcranial current stimulation, Frontiers in Robotics and AI 8 (2021). URL: https://www.frontiersin.org/articles/10.3389/frobt.2021.721890. doi:10.3389/frobt.2021.721890.
[6] C. Avilés-Cruz, A. Ferreyra-Ramírez, A. Zúñiga-López, J. Villegas-Cortéz, Coarse-fine convolutional deep-learning strategy for human activity recognition, Sensors 19 (2019). URL: https://www.mdpi.com/1424-8220/19/7/1556. doi:10.3390/s19071556.
[7] Y. Yao, Z. Wang, P. Luo, H. Yin, Z. Liu, J. Zhang, N. Guo, Q. Guan, RunnerDNA: Interpretable indicators and model to characterize human activity pattern and individual difference (2022).
[8] H. Xue, R. Herzog, T. M. Berger, T. Bäumer, A. Weissbach, E. Rueckert, Using probabilistic movement primitives in analyzing human motion differences under transcranial current stimulation (2021). doi:10.3389/frobt.2021.721890.
[9] A. Ghods, D. J. Cook, Activity2Vec: Learning ADL embeddings from sensor data with a sequence-to-sequence model (2019).
[10] K.-S. Song, S. Kang, D.-G. Lee, Y.-H. Nho, J.-S. Seo, D.-S. Kwon, A motion similarity measurement method of two mobile devices for safety hook fastening state recognition, IEEE Access 10 (2022) 8804–8815. doi:10.1109/ACCESS.2022.3144144.
[11] G. Lisca, C. Prodaniuc, T. Grauschopf, C. Axenie, Less is more: Learning insights from a single motion sensor for accurate and explainable soccer goalkeeper kinematics, IEEE Sensors Journal 21 (2021) 20375–20387. doi:10.1109/JSEN.2021.3094929.
[12] L. Pei, S. Xia, L. Chu, F. Xiao, Q. Wu, W. Yu, R. Qiu, MARS: Mixed virtual and real wearable sensors for human activity recognition with multi-domain deep learning model (2020).
[13] G. Laput, C. Harrison, Sensing fine-grained hand activity with smartwatches, ACM, 2019, pp. 1–13. doi:10.1145/3290605.3300568.
[14] T. Mahmood, A. Darzi, The learning curve for a colonoscopy simulator in the absence of any feedback: No feedback, no learning, Surgical Endoscopy and Other Interventional Techniques 18 (2004) 1224–1230. URL: https://doi.org/10.1007/s00464-003-9143-4. doi:10.1007/s00464-003-9143-4.
[15] R. Brydges, J. Manzone, D. Shanks, R. Hatala, S. J. Hamstra, B. Zendejas, D. A. Cook, Self-regulated learning in simulation-based training: a systematic review and meta-analysis, Medical Education 49 (2015) 368–378. URL: https://onlinelibrary.wiley.com/doi/abs/10.1111/medu.12649. doi:10.1111/medu.12649.
[16] R. Calvo, S. D'Mello, J. Gratch, A. Kappas (Eds.), The Oxford Handbook of Affective Computing, Oxford University Press, 2015. URL: https://doi.org/10.1093/oxfordhb/9780199942237.001.0001. doi:10.1093/oxfordhb/9780199942237.001.0001.
[17] R. van den Tillaar, A. H. Saeterbakken, Comparison of core muscle activation between a prone bridge and 6-RM back squats, Journal of Human Kinetics 62 (2018) 43–53. URL: https://doi.org/10.1515/hukin-2017-0176. doi:10.1515/hukin-2017-0176.