=Paper=
{{Paper
|id=Vol-2183/position2
|storemode=property
|title=Text Entry in VR and Introducing Speech and Gestures in VR Text Entry
|pdfUrl=https://ceur-ws.org/Vol-2183/position2.pdf
|volume=Vol-2183
|authors=Jiban Adhikary
}}
==Text Entry in VR and Introducing Speech and Gestures in VR Text Entry==
MobileHCI 2018 Workshop on Socio-Technical Aspects of Text Entry | Barcelona, Spain | September 3, 2018
Jiban Adhikary
Michigan Technological University
Houghton, MI 49931, USA
jiban@mtu.edu

Biography
I am a Computer Science PhD student at Michigan Technological University, Houghton, Michigan, USA. I have a Bachelor of Science degree in Computer Science and Engineering from the University of Dhaka, Bangladesh. My current research focuses on designing interactive systems for text entry in midair and virtual reality (VR). Currently I am working under the supervision of Dr. Keith Vertanen, who is a renowned researcher in the field of text entry.
Summary of Related Past Works
A large body of work exists on text entry techniques for personal computers and mobile devices. However, work on text entry in midair and VR environments remains scarce. This is mainly because it has always been challenging to design and implement text input surfaces in midair or VR, to track or sense the user's actions, and to map the interaction between the user's actions and the input surface.

We have been working for a year on designing an interface for entering text in a virtual environment. Text entry in a VR environment differs from text entry on computers or mobile devices because there is no physical keyboard or touchscreen in this environment to interact with.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). Copyright held by the owner/author(s). MobileHCI, 2018, Barcelona, Spain.
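To make the mapping challenge concrete, one simple way to connect a user's actions to a virtual input surface is to project a tracked fingertip onto the plane of a virtual keyboard and select the nearest key. The sketch below is purely illustrative: the key layout and coordinates are hypothetical, not taken from any real prototype.

```python
# Illustrative sketch: map a tracked 3D fingertip to the nearest key on a
# virtual keyboard plane. The key layout below is hypothetical.
import math

# Hypothetical key centers (x, y) on the keyboard plane, in centimeters.
KEY_CENTERS = {
    "q": (0.0, 0.0), "w": (2.0, 0.0), "e": (4.0, 0.0),
    "a": (1.0, 2.0), "s": (3.0, 2.0), "d": (5.0, 2.0),
}

def nearest_key(fingertip):
    """Project the fingertip onto the (axis-aligned) keyboard plane by
    dropping its depth component, then return the closest key."""
    x, y, _depth = fingertip  # tracker gives (x, y, depth)
    return min(KEY_CENTERS,
               key=lambda k: math.hypot(KEY_CENTERS[k][0] - x,
                                        KEY_CENTERS[k][1] - y))

print(nearest_key((2.4, 0.3, 10.2)))  # fingertip hovering nearest to "w"
```

A real system would also have to decide *when* a key is intended (e.g. on a tap or dwell), which is where the sensing challenges discussed above come in.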
Entering text in a VR environment can be achieved by speech, gestures or a virtual keyboard. Our work focuses on entering text with a virtual keyboard. We have created a prototype of a virtual keyboard that enables users to input text in the virtual environment. The prototype senses the finger movements of the user over the keyboard using a Leap Motion sensor. We have also incorporated a sentence-based decoder named VelociTap [1] into the prototype for auto-correction, and the prototype provides audio feedback for better interaction. We plan to extend the prototype into a usable interactive system and investigate its limitations and benefits by conducting user studies.

Few works have explored gesture-based text entry in VR. Yu et al. [2] investigated the feasibility of head-based text entry for HMDs. In their work they used head rotation to control a pointer over the keys of a virtual keyboard. They investigated three techniques: TapType, DwellType and GestureType. TapType resembled tap typing on smartphones: users moved a pointer with head rotation and selected a button by tapping a key. In DwellType, users dwelled over a key to select it, and in GestureType users performed word-level input using a gesture-based typing style. They achieved their best entry rate of 24.73 WPM with GestureType by improving the gesture-word recognition algorithm and incorporating head movement patterns recorded during the study.

Speech and Gestures in VR Text Entry
While working with our prototype, we implemented a single gesture (a thumbs up) as a delimiter of an interaction in the virtual environment. For example, the thumbs up gesture could be used to mark the end of a sentence. Our current prototype does not require many gestures to fulfil its main objective, but it will be interesting to explore how multiple gestures, or even speech, can be introduced into virtual reality text entry.

Figure 1: A user wearing a head mounted display and Leap Motion device.

While there have been a few works on mid-air text entry using hand gestures, text entry in VR using gestures is also rare. AirTap [7], Wilson [8] and Bowman et al. [9] used tap and pinch gestures to simulate button clicking for text entry purposes in virtual environments. Vulture [3], AirStroke [4] and Feit et al. [5] used hand gestures to enter text in midair. The ideas described in these works could be applied to text entry in VR as well.
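Tap and pinch "clicks" of this kind are commonly detected by thresholding the distance between fingertips reported by the hand tracker, with hysteresis so a single pinch does not fire repeatedly. The following is a minimal sketch under assumed tracker coordinates and illustrative thresholds; it is not code from any of the cited systems.

```python
# Minimal sketch: detect a pinch "click" by thresholding thumb-index
# fingertip distance. Coordinates are hypothetical tracker output in
# millimeters; the thresholds are illustrative, not tuned values.
import math

PINCH_ON = 20.0   # fingertips closer than this start a pinch
PINCH_OFF = 30.0  # fingertips farther than this end it (hysteresis)

class PinchDetector:
    def __init__(self):
        self.pinching = False

    def update(self, thumb_tip, index_tip):
        """Feed one tracker frame; return True on the frame a pinch begins."""
        d = math.dist(thumb_tip, index_tip)
        if not self.pinching and d < PINCH_ON:
            self.pinching = True
            return True          # pinch started: treat as a button click
        if self.pinching and d > PINCH_OFF:
            self.pinching = False
        return False

det = PinchDetector()
frames = [((0, 0, 0), (40, 0, 0)),   # open hand
          ((0, 0, 0), (15, 0, 0)),   # fingers close: click fires here
          ((0, 0, 0), (18, 0, 0))]   # still pinched: no second click
clicks = [det.update(t, i) for t, i in frames]
print(clicks)  # [False, True, False]
```

The two-threshold design avoids jitter near a single cutoff, which matters with noisy midair tracking.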
Although it seems exciting to incorporate speech and gestures into VR text entry, there are a few limitations. Making a gesture in VR necessarily requires a body part. For example, a user wearing a head mounted display might move his head to make a gesture, or, if he is wearing a hand tracker, he can make a gesture with his fingers. However, these kinds of head or finger interactions require users to move their head or upper arm frequently, which may result in pain and fatigue.

A mid-air word-gesture keyboard was proposed in Vulture [3]. The idea of a word-gesture keyboard is that a user draws the pattern formed by the letters of a word on a touch surface rather than typing the letters individually. Swype, SlideIT and ShapeWriter are examples of word-gesture keyboards. Vulture introduced the idea of implementing a word-gesture keyboard in midair instead of on a touch or stylus-based surface. It used a large high-resolution display, and users wore a glove with reflective markers.
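A word-gesture keyboard typically recognizes a word by comparing the drawn path against an ideal path through the centers of the word's keys. The toy sketch below shows one simple matching scheme (resample both paths to a fixed number of points, then take the mean point-to-point distance); the key layout and vocabulary are hypothetical, and this is not the algorithm used by Vulture or the commercial keyboards named above.

```python
# Toy word-gesture matching: resample the drawn path and each word's ideal
# key-center path to N points, then pick the word with the smallest mean
# point-to-point distance. Layout and vocabulary are hypothetical.
import math

KEYS = {"c": (2, 2), "a": (1, 2), "t": (4, 0), "r": (3, 0), "o": (8, 0)}
WORDS = ["cat", "rat", "oat"]
N = 16  # points per resampled path

def path_length(pts):
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def resample(pts, n=N):
    """Resample a polyline to n points evenly spaced along its arc length."""
    pts = list(pts)
    step = path_length(pts) / (n - 1)
    if step == 0:
        return [pts[0]] * n
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # split the segment at the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:  # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def word_path(word):
    return [KEYS[ch] for ch in word]

def mean_distance(p, q):
    return sum(math.dist(a, b) for a, b in zip(p, q)) / len(p)

def recognize(drawn):
    drawn_r = resample(drawn)
    return min(WORDS,
               key=lambda w: mean_distance(drawn_r, resample(word_path(w))))

print(recognize([(2, 2), (1, 2), (4, 0)]))  # path c -> a -> t, prints "cat"
```

Production systems add a language model and scale/position normalization on top of this kind of shape matching, but the core idea is the same.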
AirStroke [4] and Graffiti [10] are two examples of stroke-based text entry techniques. Stroke-based techniques are mainly used in stylus-based interfaces, where users make a distinctive stroke with the stylus to enter each character.

In comparison to the gesture-based works in midair and VR, text entry using speech has remained unexplored, and to my knowledge there has not been a single work related to this idea. McGlashan et al. [6] investigated technical and design issues in manipulating virtual reality with a speech interface and proposed a prototype that allows users to control specialized functions using speech.

In conclusion, gesture- and speech-based text entry is still in its infancy. Fortunately, the advent of effective sensing devices and motion trackers (e.g. the Leap Motion sensor, VICON trackers and HMDs) is attracting researchers and paving the way for the design and implementation of new interactive systems. Hopefully, in the next few years this line of research will flourish and we will have better speech- and gesture-based text entry systems in VR.

References
1. Vertanen, Keith, Haythem Memmi, Justin Emge, Shyam Reyal, and Per Ola Kristensson. "VelociTap: Investigating fast mobile text entry using sentence-based decoding of touchscreen keyboard input." In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15), pp. 659-668. ACM, New York, NY, USA, 2015.
2. Yu, Chun, Yizheng Gu, Zhican Yang, Xin Yi, Hengliang Luo, and Yuanchun Shi. "Tap, dwell or gesture?: Exploring head-based text entry techniques for HMDs." In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 4479-4488. ACM, 2017.
3. Markussen, Anders, Mikkel Rønne Jakobsen, and Kasper Hornbæk. "Vulture: a mid-air word-gesture keyboard." In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems. ACM, 2014.
4. Ni, Tao, Doug Bowman, and Chris North. "AirStroke: bringing unistroke text entry to freehand gesture interfaces." In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2011.
5. Feit, Anna Maria, Srinath Sridhar, Christian Theobalt, and Antti Oulasvirta. "Investigating multi-finger gestures for mid-air text entry." Korea (2015).
6. McGlashan, Scott, and Tomas Axling. "A speech interface to virtual environments." In Proceedings of the International Workshop on Speech and Computers, 1996.
7. Vogel, Daniel, and Ravin Balakrishnan. "Distant freehand pointing and clicking on very large, high resolution displays." In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, pp. 33-42. ACM, 2005.
8. Wilson, Andrew D. "Robust computer vision-based detection of pinching for one and two-handed gesture input." In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, pp. 255-258. ACM, 2006.
9. Bowman, Doug A., Chadwick A. Wingrave, J. M. Campbell, V. Q. Ly, and C. J. Rhoton. "Novel uses of Pinch Gloves™ for virtual environment interaction techniques." Virtual Reality 6, no. 3 (2002): 122-129.
10. Castellucci, Steven J., and I. Scott MacKenzie. "Graffiti vs. unistrokes: an empirical comparison." In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 305-308. ACM, 2008.