       Using a Design Science Research Methodology Process
        to Design a Text Entry Technique for Mobile VR


       Eduardo Gabriel Queiroz Palmeira1, Regis Kopper2, Edgard Afonso Lamounier
                             Júnior1, Alexandre Cardoso1

   1 Faculty of Electrical Engineering, Federal University of Uberlândia (UFU), 38400-902
   Uberlândia, MG, Brazil
       egqpalmeira@gmail.com, {lamounier,alexandre}@ufu.br
   2 Department of Computer Science, University of North Carolina at Greensboro, 27402-6170
   Greensboro, NC, USA
       kopper@uncg.edu



           Abstract. Mobile-based virtual reality (VR) is the most affordable way for the
           general public to experience immersive VR, and text input is a frequent process
           in mobile VR. Usually, the raycasting selection technique is used to perform
           this task. However, this technique may present some limitations. Hence, this
           short paper briefly presents the work in progress of designing and developing
           an alternative text entry technique for mobile-based VR using a design science
           research (DSR) methodology process. Our technique uses a one-handed ambiguous
           keyboard and focuses on improving user performance and typing experience for
           short-term text input. Further improvement and evaluation of the technique will
           constitute the next steps of this research.

          Keywords: Mobile VR, Text Input, Design Science Research, Human-
          Computer Interaction




1 Introduction

In recent years, in addition to the reduction in the cost of hardware that allows the
general public to experience immersive virtual reality (VR) systems based on personal
computers (PCs), the accessibility of this simulated experience has been further promoted
by the possibility of experiencing it through smartphones [1-3]. The main advantage of
mobile-based VR is its portability (autonomy and wireless operation); furthermore, it is
more affordable than the equipment required for an entry-level PC-based VR experience [1].
Nevertheless, its performance and graphical capability are limited, as is its positional
tracking [3,4]. For example, head and hand tracking in mobile VR (through the headset and
the VR controller, respectively) is performed with only 3 degrees of freedom (DoF). In
other words, for both visualization and control, only rotation movements (directions) are
tracked, instead of rotation and translation movements (directions and three-dimensional
positions), as occurs in PC-based VR with the use of head-mounted displays (HMDs) and
handheld controllers with 6 DoF.


 Copyright © 2021 for this paper by its authors.
 Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
   Text input is a frequent process in mobile VR. Usually, the raycasting selection
technique is used to perform this task. It works as follows: users must keep the virtual
pointer steady, intersecting the desired virtual key, (1) until they confirm the selection,
for instance, by pressing a button, or (2) until the system automatically confirms the
selection after a predetermined time (dwell time). In mobile VR, raycasting selection can
be performed using the VR controller or the touchpad located on the side of the active
headset1; in the latter case, the direction in which the user looks is taken as the virtual
pointer, which is represented by a dot in the middle of the screen.
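   To make the dwell-time variant concrete, the sketch below (in Python) shows the
confirmation logic described above; it is an illustrative sketch only, and the frame rate,
dwell threshold, and the intersected_key_provider callback are assumptions rather than
part of any actual raycasting implementation.

```python
# Minimal sketch of dwell-time raycasting selection (illustrative only).
# The ray-intersection test and frame loop are stand-ins for an actual VR engine.
import time

DWELL_TIME = 1.0  # seconds the pointer must stay on a key before auto-selection (assumed)

def dwell_select(intersected_key_provider):
    """Return the key selected by dwell, polling which key the ray intersects each frame."""
    current_key = None
    dwell_start = None
    while True:
        key = intersected_key_provider()  # e.g., raycast from headset/controller orientation
        if key != current_key:
            # Pointer moved to a different key (or off the keyboard): restart the timer.
            current_key = key
            dwell_start = time.monotonic() if key is not None else None
        elif key is not None and time.monotonic() - dwell_start >= DWELL_TIME:
            return key  # dwell threshold reached: the key is selected
        time.sleep(1 / 60)  # simulate a 60 Hz frame loop
```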
    Whereas previous work focused on presenting, in more detail, the work in progress
on the design and development of our alternative text entry technique for mobile-based
VR [5], this short paper focuses on its design science research (DSR) methodology
process. Additional information regarding the problem identification and the motivation
behind this research is further explored in the next section.


2 The Artifact Development Process

In contexts where it may be useful to design a novel artifact to solve a specific
problem or improve existing situations, design science fits better than traditional
science [6-8]. Due to real-world complexity, DSR does not seek an optimal solution but a
satisfactory one capable of positively impacting people's lives [7,9]. Accordingly, the
development process of our artifact was carried out using the DSR methodology described
by Peffers et al. [10] as a reference. These authors built this methodology from a
consensus of well-accepted common elements within the DSR literature. Their DSR
methodology process is divided into six iterative activities in a nominal sequence:
problem identification and motivation, objective of the solution, design and development,
demonstration, evaluation, and communication (the last of which is represented by
previous work, this short paper itself, and further publications).


2.1 Problem Identification and Motivation

In this activity, researchers should define a specific problem and justify the value of
a solution for the context. In our case, since our approach is problem-centered, some
factors must be made evident. Designing efficient and user-centered solutions for text
input in VR is a challenge; hence, many researchers have been developing different
devices and techniques for text entry in PC-based VR [11]. However, the literature
indicates a lack of alternative techniques for mobile-based VR. Using a narrow search, we
found only a couple of studies [12,13] presenting alternative techniques for mobile VR.
It is worth noting that speech-based text entry techniques have limitations regarding
text editing [14] and their use in public places: these techniques can cause privacy
issues for users, and noisy environments can impair speech recognition [12-14]. It is
also worth mentioning that speech-based techniques may not accurately recognize acronyms,
new words, or words in other languages.

1 As this work is related to text entry techniques (an interactive process), we consider
  mobile VR to be VR experienced with active headsets (e.g., Samsung Gear VR). That is
  because passive headsets, such as Google Cardboard, have some limitations [2], enabling
  only passive visualization of 360° multimedia (non-interactive).
   As mentioned before, the standard selection technique for VR is raycasting. However,
this technique may present issues concerning the difficulty of aiming at the desired
virtual key and the fatigue generated when using both hands for “shooting” fast [15]; the
first issue may be caused by the users’ hand tremors [16]. Due to the lack of physical
support for their hands, it is difficult for users to aim and keep the virtual pointer
steady, intersecting the desired (small) virtual key until the selection is confirmed
[17]. Furthermore, unlike PC-based VR, which uses two controllers, mobile VR uses only
one controller or the active headset itself for raycasting selection. This condition may
cause more tiredness in the long term, since its text input speed is even slower.
Moreover, during the immersive experience in mobile VR, users may receive and send
messages through the system’s interface itself. In addition, one hand always remains
unused in settings in which the VR hand controller is used. All things considered, these
circumstances emerge as opportunities for researchers to design novel text input
techniques controlled by the available hand, focused on short-term text entry. In this
way, users would not need to switch devices while immersed in the virtual world and
“blinded” to the real one.


2.2 Objective of the Solution

In this activity, researchers should infer the solution objectives based on the analysis
of the problem definition. Depending on the research, such objectives can either describe
how the novel artifact is expected to support solutions to problems not addressed so far
or predict how the desired solution would improve on existing solutions. In this way, our
objective is to develop an alternative text entry technique for mobile VR by using a
one-handed ambiguous keyboard. The novel technique focuses on improving user performance
and typing experience for short-term text input.
   The related works we identified that present an alternative text entry technique for
mobile VR are hands-free [12,13]. In contrast, our solution objective in terms of
interaction setup is that users will use the headset with the smartphone inside as the
visualization device, the VR controller as the navigation and main interaction device,
and the ambiguous keyboard as the text input device. Therefore, both hands will be used
during the process, making it faster and more efficient: the hand holding the VR
controller will be used for navigation within the text for copy-editing tasks, while the
hand on the ambiguous keyboard will be used for character input.


2.3 Design and Development

In this activity, researchers should determine the artifact’s functionality first and
then build it. The research contribution may be incorporated into the artifact design
itself. Our ambiguous keyboard prototype comprises a bent case, a printed circuit board
(PCB) for a 16-key (4x4) custom mechanical keyboard, Cherry MX Green tactile switches,
3D-printed round white keycaps, and a Raspberry Pi 3 with Bluetooth and camera modules.
Small blue square-shaped pieces of electrical tape were stuck over the keycaps to serve
as markers for the computer vision tracking system (see Fig. 1).




                       Fig. 1. Computer vision for visual feedback.

   This tracking system was designed to provide visual feedback by detecting, in real
time, which keys are being occluded by the user’s fingers. For this, we used computer
vision through the camera module, in association with the Python programming language
and the OpenCV library. The switches’ strong tactile bump was itself used to provide
tactile feedback of the user’s fingers hitting the keys.
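   As a minimal illustration of this kind of marker-occlusion detection, the Python/OpenCV
sketch below thresholds the blue markers and flags keys whose marker is mostly covered;
the HSV range, marker coordinates, visibility threshold, and camera index are assumptions
for illustration, not the parameters of our prototype.

```python
# Illustrative sketch: detect which keycap markers are occluded by a finger.
# HSV range, marker coordinates, and thresholds are assumed values for illustration.
import cv2
import numpy as np

# Approximate pixel regions of the 16 blue markers in the camera image (hypothetical).
MARKER_REGIONS = {(row, col): (50 + 40 * col, 50 + 40 * row, 20, 20)  # x, y, w, h
                  for row in range(4) for col in range(4)}
BLUE_LOW, BLUE_HIGH = np.array([100, 120, 70]), np.array([130, 255, 255])

def occluded_keys(frame):
    """Return the set of (row, col) keys whose blue marker is no longer visible."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    blue_mask = cv2.inRange(hsv, BLUE_LOW, BLUE_HIGH)
    occluded = set()
    for key, (x, y, w, h) in MARKER_REGIONS.items():
        visible_ratio = blue_mask[y:y + h, x:x + w].mean() / 255.0
        if visible_ratio < 0.3:  # marker mostly covered: assume a finger is over the key
            occluded.add(key)
    return occluded

cap = cv2.VideoCapture(0)  # e.g., the Raspberry Pi camera exposed as a video device
ok, frame = cap.read()
if ok:
    print(occluded_keys(frame))
cap.release()
```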
   Our 16-key ambiguous keyboard is suitable for one-handed use, and it has sub-layouts
that support not only lowercase letters but also uppercase letters, numbers, and symbols
(Fig. 2). Users can switch between sub-layouts by pressing the toggle keys. In this way,
besides being efficient, our layout is familiar, as it resembles the ambiguous keypads of
older phones. For instance, if users want to input the letter ‘b’, they can press the key
corresponding to the number two twice, with all the toggle keys turned off.
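   To make this multi-tap behavior concrete, the sketch below decodes presses on an
ambiguous keypad in Python; the key-to-letter mapping mirrors the classic phone keypad and
is only an assumption about the lowercase sub-layout, since the actual assignment is
defined by our prototype.

```python
# Illustrative multi-tap decoder for an ambiguous keypad (phone-style mapping assumed).
MULTI_TAP = {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
}

def decode_multi_tap(presses):
    """Decode a sequence of (key, press_count) pairs into text.

    For example, pressing key '2' twice yields 'b'; pressing key '5' three times yields 'l'.
    """
    text = []
    for key, count in presses:
        letters = MULTI_TAP[key]
        # Cycle through the letters assigned to the key.
        text.append(letters[(count - 1) % len(letters)])
    return ''.join(text)

# Typing the word "hi": key 4 pressed twice ('h'), then key 4 pressed three times ('i').
print(decode_multi_tap([('4', 2), ('4', 3)]))  # -> "hi"
```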


2.4 Demonstration

In this activity, researchers should demonstrate how the use of the artifact can solve
the identified problem. This can be accomplished through experimentation or simulation,
for instance. To this end, we developed a mobile VR application using Unity 3D. Its user
interface comprises a virtual representation of the ambiguous keyboard with real-time
visual feedback, text phrases to be copied by the user, and an empty text entry field
(Fig. 3).
                      Fig. 2. Sub-layouts of the text input technique.




                          Fig. 3. A screenshot of the application.

   When building an artifact, avoiding unwanted side effects is a challenge [6], but it
is essential. The virtual representation of the user’s hands is, in fact, important for a
positive user experience. Nevertheless, the uncanny valley can occur if the hand tracking
is inaccurate [18,19]; this phenomenon can impair the user experience and the sense of
presence. Thus, considering that mobile VR is stationary (3 DoF), a virtual representation
of the user’s hands was not included. However, although users do not see their virtual
hands, they can see their typing results in real time, which preserves a positive user
experience. Lastly, we used a Samsung Galaxy S8 smartphone, a Samsung Gear VR active
headset, and a Samsung VR hand controller (3 DoF) to run the application.


2.5 Evaluation

In this activity, researchers should observe and measure how well the artifact can solve
the problem. Relevant metrics and analysis techniques related to the researched context
should be used to evaluate the artifact, and the evaluation should result in adequate
empirical evidence. Before starting this activity, we decided to iterate back to activity
3 (design and development) to improve the tracking system. As a limitation, the computer
vision tracking system only tracks one finger per keyboard column; it also provides
imprecise feedback depending on the environment’s lighting and on the camera module’s
position in relation to the ambiguous keyboard. Hence, before the evaluation through
empirical comparison, we decided to develop a more accurate touch-based tracking system
(Fig. 4). Instead of using blue square-shaped markers associated with computer vision, we
placed metallic square-shaped high-impedance sensors over the keycaps to detect the touch
of the user’s fingers. This new prototype is under development and will be presented in
further detail in future publications.




                Fig. 4. The under-development touch-based tracking system.

   The evaluation of DSR artifacts is usually based on methodologies available in the
knowledge base [8]. In this way, after improving our prototype, in future work we will
evaluate our technique using methodological aspects from the study by Boletsis and
Kongsvik [15]. Participants’ task will be to copy predefined phrases from the phrase sets
created by MacKenzie and Soukoreff for evaluating text entry techniques [20]. The
empirical experiment will collect data regarding performance, namely typing speed [21,22]
and error rate [22,23], and user preferences, namely usability [24] and user experience
[25]. In addition, it will compare the following techniques: (1) the raycasting
head-directed selection technique using the headset; (2) the raycasting selection
technique using the VR hand controller (3 DoF); (3) the technique presented in this paper
(multi-tap approach); and (4) the technique presented in this paper but with a single-tap
approach associated with machine learning to predict possible corresponding words.
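   As an illustration of the performance metrics planned above, the sketch below computes
words per minute and a minimum-string-distance (MSD) error rate in the usual way described
in the text entry literature [21-23]; it is a minimal example with hypothetical phrase
data, not our analysis code.

```python
# Minimal sketch of standard text entry performance metrics (illustrative only).

def words_per_minute(transcribed, seconds):
    """WPM: (|T| - 1) / time_in_seconds * 60 / 5, with 5 characters per 'word'."""
    return ((len(transcribed) - 1) / seconds) * 60 / 5

def msd(a, b):
    """Minimum string distance (Levenshtein) between presented and transcribed text."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def msd_error_rate(presented, transcribed):
    """MSD error rate: edit distance normalized by the longer string's length."""
    return msd(presented, transcribed) / max(len(presented), len(transcribed))

# Example: a 25-character phrase transcribed with one substitution error in 30 seconds.
presented = "the quick brown fox jumps"
transcribed = "the quick brown fox jumbs"
print(words_per_minute(transcribed, 30.0))   # 9.6 WPM
print(msd_error_rate(presented, transcribed))  # 0.04
```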


3 Conclusion

This short paper presented the work in progress on the design and development of a novel
text entry technique for mobile VR, focusing on its DSR methodology. Reliance on
creativity and trial and error is characteristic of this type of research focused on
artifact development [8]. Thus, an iteration back to activity 3 (design and development),
seeking to improve the artifact’s effectiveness, and activity 5 (evaluation) will
constitute the next steps of this work. We will also conduct a broad search through a
systematic literature review to identify more related works. Finally, after further
improvement and evaluation, we will continue to communicate the knowledge resulting from
our research through academic publications.

Acknowledgments. This study was financed in part by the Coordenação de
Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code
001.


References

1. Tori, R., Hounsell, M. (eds.): Introdução a Realidade Virtual e Aumentada. 3rd edn.
   Editora SBC, Porto Alegre (2020) https://doi.org/10.5753/sbc.6654.2
2. Powell, W., Powell, V., Brown, P., Cook, M., Uddin, J.: Getting Around in Google
   Cardboard – Exploring Navigation Preferences with Low-Cost Mobile VR. In: 2016 IEEE
   2nd Workshop on Everyday Virtual Reality (WEVR). IEEE (2016) 5–8
   https://doi.org/10.1109/WEVR.2016.7859536
3. Steed, A., Julier, S.: Design and Implementation of an Immersive Virtual Reality System
   Based on a Smartphone Platform. In: 2013 IEEE Symposium on 3D User Interfaces
   (3DUI). IEEE (2013) 43–46 https://doi.org/10.1109/3DUI.2013.6550195
4. Sharma, P.: Challenges with Virtual Reality on Mobile Devices. In: ACM SIGGRAPH 2015
   Talks (SIGGRAPH ’15). ACM, New York (2015)
   https://doi.org/10.1145/2775280.2792597
5. Palmeira, E., Moraes, I., Telles, E., Martin, V., Gonçalves, V., Kopper, R., Lamounier Jr.,
   E., Cardoso, A.: One-Handed Text Entry in Mobile-Based Virtual Reality: An Ambiguous
   Keyboard Technique. In: Ahram, T., Falcão, C. (eds.): Advances in Usability, User
    Experience, Wearable and Assistive Technology. Lecture Notes in Networks and Systems,
     Vol. 275. Springer International Publishing, Cham (2021) 310–318
    https://doi.org/10.1007/978-3-030-80091-8_36
6. March, S., Smith, G.: Design and Natural Science Research on Information Technology.
    Decision Support Systems, 15(4) (1995) 251–266 https://doi.org/10.1016/0167-
    9236(94)00041-2
7. Dresch, A., Lacerda, D., Antunes Jr., J.: Design Science Research: Método de Pesquisa para
    Avanço da Ciência e Tecnologia. 1st edn. Bookman, Porto Alegre (2015)
8. Hevner, A., March, S., Park, J., Ram, S.: Design Science in Information Systems Research.
     Management Information Systems Quarterly, 28(1) (2004) 75–105
    https://doi.org/10.2307/25148625
9. Simon, H.: The Sciences of the Artificial. 3rd edn. MIT Press, Cambridge (1996)
10. Peffers, K., Tuunanen, T., Rothenberger, M., Chatterjee, S.: A Design Science Research
    Methodology for Information Systems Research. Journal of Management Information
    Systems, 24(3) (2007) 45–77 https://doi.org/10.2753/MIS0742-1222240302
11. Dube, T., Arif, A.: Text Entry in Virtual Reality: A Comprehensive Review of the
    Literature. In: Kurosu, M. (ed.): Human-Computer Interaction. Recognition and Interaction
    Technologies. Lecture Notes in Computer Science, Vol. 11567. Springer International
    Publishing, Cham (2019) 419–437 https://doi.org/10.1007/978-3-030-22643-5_33
12. Xu, W., Liang, H., Zhao, Y., Zhang, T., Yu, D., Monteiro, D.: RingText: Dwell-Free and
    Hands-Free Text Entry for Mobile Head-Mounted Displays Using Head Motions. IEEE
    Transactions on Visualization and Computer Graphics, 25(5) (2019) 1991–2001
    https://doi.org/10.1109/TVCG.2019.2898736
13. Lu, X., Yu, D., Liang, H., Feng, X., Xu, W.: DepthText: Leveraging Head Movements
    Towards the Depth Dimension for Hands-free Text Entry in Mobile Virtual Reality
    Systems. In: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE
    (2019) 1060–1061 https://doi.org/10.1109/VR.2019.8797901
14. Grubert, J., Witzani, L., Ofek, E., Pahud, M., Kranz, M., Kristensson, P.: Text Entry in
    Immersive Head-Mounted Display-Based Virtual Reality Using Standard Keyboards. In:
    2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE (2018) 159–
    166 https://doi.org/10.1109/VR.2018.8446059
15. Boletsis, C., Kongsvik, S.: Controller-Based Text-Input Techniques for Virtual Reality: An
    Empirical Comparison. International Journal of Virtual Reality, 19(3) (2019) 2–15
    https://doi.org/10.20870/IJVR.2019.19.3.2917
16. Kopper, R., Bowman, D., Silva, M., McMahan, R.: A Human Motor Behavior Model for
    Distal Pointing Tasks. International Journal of Human-Computer Studies, 68(10) (2010)
    603–615 https://doi.org/10.1016/j.ijhcs.2010.05.001
17. Tu, H., Huang, S., Yuan, J., Ren, X., Tian, F.: Crossing-Based Selection with Virtual
    Reality Head-Mounted Displays. In: 2019 CHI Conference on Human Factors in
    Computing Systems (CHI ’19). Paper 618. ACM, New York (2019) 1–14
    https://doi.org/10.1145/3290605.3300848
18. Palmeira, E., Martin, V., Moraes, I., Kopper, R., Lamounier Jr., E., Cardoso, A.: O
    Uncanny Valley das Mãos Virtuais em Aplicações de Realidade Virtual Imersiva: uma
    Revisão Sistemática da Literatura. RISTI - Revista Ibérica de Sistemas e Tecnologias de
    Informação, 2020(E31) (2020) 497–512 https://www.proquest.com/docview/2496338859
19. Berger, C., Gonzalez-Franco, M., Ofek, E., Hinckley, K.: The Uncanny Valley of Haptics.
    Science Robotics, 3(17) (2018) 1–2 https://doi.org/10.1126/scirobotics.aar7010
20. MacKenzie, I., Soukoreff, R.: Phrase Sets for Evaluating Text Entry Techniques. In: CHI
    ’03 Extended Abstracts on Human Factors in Computing Systems. ACM, New York (2003)
    754–755 https://doi.org/10.1145/765891.765971
21. Wobbrock, J.: Measures of Text Entry Performance. In: MacKenzie, I., Tanaka-Ishii, K.
    (eds.): Text Entry Systems: Mobility, Accessibility, Universality. Morgan Kaufmann
    Publishers, Burlington (2007) 47–74 https://doi.org/10.1016/B978-012373591-1/50003-6
22. Arif, A., Stuerzlinger, W.: Analysis of Text Entry Performance Metrics. In: 2009 IEEE
    Toronto International Conference Science and Technology for Humanity (TIC-STH). IEEE
    (2009) 100–105 https://doi.org/10.1109/TIC-STH.2009.5444533
23. Soukoreff, R., MacKenzie, I.: Metrics for Text Entry Research: An Evaluation of MSD and
    KSPC, and a New Unified Error Metric. In: Proceedings of the SIGCHI Conference on
    Human Factors in Computing Systems (CHI ’03). ACM, New York (2003) 113–120
    https://doi.org/10.1145/642611.642632
24. Brooke, J.: SUS: A Retrospective. Journal of Usability Studies, 8(2) (2013) 29–40
    https://dl.acm.org/doi/10.5555/2817912.2817913
25. IJsselsteijn, W., Kort, Y., Poels, K.: The Game Experience Questionnaire. Technische
    Universiteit Eindhoven, Eindhoven (2013) https://research.tue.nl/en/publications/the-game-
    experience-questionnaire