Measuring Multimodal Data in an XR-Based Training Simulation Environment

Authors

  • Kukhyeon Kim
  • Jongho Kim
  • Daeun Kim
  • Hongyu Xian
  • Seoyeon Park
  • Jeeheon Ryu

Keywords

XR-based Simulation, HoloLens, Multimodal Data, Learner Performance

Abstract

This pilot study investigates the potential of multimodal data for predicting user performance in an XR-based training simulation environment. XR-based learning simulations have rapidly gained popularity in training and education fields such as medicine, nursing, and STEAM because they provide training opportunities in immersive, authentic environments. In this study, multimodal data, including eye-tracking data and behavioral data (task completion time, accuracy, head movement, and hand movement), were collected, along with participants' perceptions of the simulation gathered through a subjective survey questionnaire. We examine what kinds of multimodal data can be collected in an XR-based learning environment, and how, in order to optimize the training curriculum, improve the user experience, and enhance learning outcomes. To this end, we collected multimodal data on attention, cognitive load, and performance behavior (head movement, hand movement, and eye information) from 22 medical residents who participated in an XR-based simulation focused on strabismus diagnosis for resident training. As a result, we were able to collect fifteen multimodal data types for attention and nine data types for performance behavior. These results can serve as foundational work for predicting learners' performance states in XR-based training simulations.
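The abstract does not describe the logging pipeline itself, so the following is only a minimal sketch of how per-frame head, hand, and gaze samples from a HoloLens session might be aggregated into the kinds of attention and performance features named above (task time, head movement, hand movement, gaze dwell). The `FrameSample` schema and all function names are hypothetical assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class FrameSample:
    """One logged frame from the XR headset (hypothetical schema)."""
    t: float                                # timestamp in seconds
    head_pos: tuple[float, float, float]    # head position in meters
    hand_pos: tuple[float, float, float]    # dominant-hand position in meters
    gaze_target: str | None                 # object hit by the gaze ray, if any

def euclidean(a, b):
    """Straight-line distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def summarize(samples: list[FrameSample], aoi: str) -> dict:
    """Aggregate raw frames into simple attention/behavior features."""
    pairs = list(zip(samples, samples[1:]))
    dts = [b.t - a.t for a, b in pairs]
    return {
        # total task duration
        "task_time_s": samples[-1].t - samples[0].t,
        # cumulative head and hand travel as coarse movement measures
        "head_path_m": sum(euclidean(a.head_pos, b.head_pos) for a, b in pairs),
        "hand_path_m": sum(euclidean(a.hand_pos, b.hand_pos) for a, b in pairs),
        # gaze dwell time on one area of interest (e.g., the virtual patient's eyes)
        "aoi_dwell_s": sum(dt for (a, _), dt in zip(pairs, dts)
                           if a.gaze_target == aoi),
    }
```

Dwell time on an area of interest is only one candidate attention measure; the fifteen attention and nine performance-behavior data types reported in the study would each need their own aggregation along these lines.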

Published

2023-06-17

How to Cite

Kim, K., Kim, J., Kim, D., Xian, H., Park, S., & Ryu, J. (2023). Measuring Multimodal Data in an XR-Based Training Simulation Environment. Immersive Learning Research - Practitioner, 1(1), 8–11. Retrieved from https://publications.immersivelrn.org/index.php/practitioner/article/view/105