Browsing Internet Content in Multiple Dimensions
Vision of a Web Browsing Tool for Immersive Virtual Reality Environments

Tiger Cross, Imperial College London, tiger.cross17@imperial.ac.uk
Riccardo Bovo, Imperial College London, r.bovo19@imperial.ac.uk
Thomas Heinis, Imperial College London, t.heinis@imperial.ac.uk
ABSTRACT
An immersive virtual reality environment (IVRE) offers a design space radically different from traditional desktop environments. Within this new design space, it is possible to reimagine a content search experience that breaks away from the linearity of current web browsers and the single relevance metric underlying the search engine.

To the best of our knowledge, no current commercial or research implementation allows users to interact with web search results ordered in more than one dimension [11]. On the research front, much work has been done on ordering query results based on semantic relations, and different types of user interaction have been explored, such as head rotation amplification. Current commercial browser applications in virtual reality (VR) port over the behaviour of PCs and mobile devices, with the same linear search and scroll behaviour for retrieving content.

We propose a novel browser application in an IVRE which allows a user to execute a query entered via either voice or a virtual keyboard. The resulting arrangement of information benefits from multiple, interactable relevance-related metrics. This application combines the engaging environment of IVREs with common search behaviour patterns as the next stage in the evolution of web browsing software.

© 2021 Copyright for this paper by its author(s). Published in the Workshop Proceedings of the EDBT/ICDT 2021 Joint Conference (March 23–26, 2021, Nicosia, Cyprus) on CEUR-WS.org. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1 INTRODUCTION
Immersive virtual reality environments are a technology that has been evolving since 1962; the core concept, however, has remained the same: the user views a 3-dimensional rendered environment through a headset and interacts with it using controllers. At the time of writing, there are compact and affordable headsets, such as the Oculus Quest devices from Facebook, that can run virtual reality (VR) applications without being plugged into a powerful PC. These headsets usually come with controllers which the user holds in their hands and can use like a mouse on a PC, or as a pair of virtual hands.

The key motivation for this application is that existing web browsers for IVREs do not exploit the additional degrees of freedom available to the user to provide an immersive browsing experience. Currently, web browsers port over what is available on desktop PCs, with separate windows capable of displaying at most three search results each, forcing the user to scroll excessively. The goal of using multiple dimensions, i.e. having results surround the user rather than be presented on a panel in front of them, is to let users find desired results faster by making use of their spatial and short-term memory, since the core objective when searching the web is to find and learn information. By multiple dimensions in this context we mean allowing the user to scroll and grab results in 3D, organising information across more than one axis, in contrast to the vertical navigation of desktop browsers.

Our vision is to implement a web browser application that uses an existing search API to perform a search engine's work. Users' search tactics in information retrieval systems (i.e., browsers) have been categorised by previous literature into two distinct categories: goal-directed searching behaviour and exploratory search behaviour [2]. We note that no clear line can be drawn between the two. Users who exhibit goal-directed searching behaviour, whom we will refer to as "Searchers", know what they are looking for and wish to find it quickly and easily (e.g. What is the current temperature in London?). "Explorers", on the other hand, are willing to browse results and may not have a clear idea of what they are looking for (e.g. What is the meaning of life?). Any information retrieval system should cater to both strategies.

Figure 1: Third-person mock-up of Multi-Dimensional Immersive Browsing experience

The following section describes the state-of-the-art implementations in IVREs which are either designed specifically for querying and browsing data or propose features that would make browsing more usable in an IVRE. We then discuss the motivation for the software, followed by the design options in this context, and finally the concrete details of the idea.

2 CURRENT STATE-OF-THE-ART

2.1 Research Prototypes
VR provides a rich canvas for data exploration, and some research papers have delved further into these abilities. None, however, seem to have done so for general web content.

2.1.1 Data browsing tools. Two applications within data browsing tools are relevant for our work: the first focuses on representing semantic connections in 3D and the second aims to query images in VR.

The goal of Subject Explorer 3D [7] is to propose a prototype for information exploration and retrieval with a core focus on semantic concepts. The prototype aims to "allow users to iteratively switch effortlessly between both types of search tactics" mentioned before: "Searchers" and "Explorers".
This study presents a prototype developed for non-immersive desktop VR. The prototype visualises semantically related entities connected in branches forming a tree, which the user can explore by pointing and clicking on nodes to "fly through" them. This was demonstrated using data from the Library of Congress Subject Headings, and it was assumed that the model could be transferred to other data workloads. The article discusses future work possibilities of usability testing and comparison with text-only information retrieval systems.

Virtual Reality Lifelog Explorer [6] presents an IVRE solution to the querying and browsing of a series of images of a person's viewpoint collected over time. Most of the queries are over a range, searching for images containing an item or set of items, or over a specified period. The user interaction methods discussed involve point-and-click navigation and touching buttons on a panel to select the filters (categorical and temporal) to apply. The resulting images are ranked by the number of matching conceptual filters and then temporally (if the same number of filters match for multiple images). Images are presented in a list diagonally on a horizontal plane, moving further away to the user's right, with higher-ranked items closer to them.

2.1.2 Prospective IVRE browser features. Research has discussed the usefulness of applying "Egocentric Distance-Based Item Sizing" (EDIS) variations in combination with an amphitheatre layout of data icons [4]. This means altering the effect of perspective such that closer items do not seem as large as they normally would. The same work also discusses the effects of adding 3D landmarks into the virtual space, intending to allow users to alter the landmarks themselves to act as memory aids for where certain elements are. The experiment, a version of the card-flipping game known as Concentration, could be translated to a search context, in terms of a user remembering the layout of search results in an amphitheatre of text or icons. Recalling the spatial location of an object of interest (i.e., a query result) enables users to access content they previously glanced at much more easily.

The user study shows that EDIS had only minor effects on the user's ability to recall elements spatially and that the 3D landmarks were a better approach, particularly when users were allowed to set them themselves. In a commercial browsing application, this seems analogous to bookmarks or previously visited resources, but visualised in a 3D setting.
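For concreteness, the listing below is a minimal sketch of one plausible reading of EDIS (our own illustration, not code from [4]); the compression and reference_distance parameters are assumptions. The rendered scale of an item is interpolated between true perspective and a constant apparent size, so closer items do not look as large as they normally would.

    def edis_scale(distance: float, reference_distance: float = 2.0,
                   compression: float = 0.5) -> float:
        """Scale factor for an item at `distance` metres from the viewer.

        compression = 0.0 reproduces normal perspective (no correction);
        compression = 1.0 makes every item subtend the same visual angle.
        Values in between weaken the usual "closer looks bigger" effect.
        """
        perspective = 1.0                                # leave size untouched
        constant_angle = distance / reference_distance   # grow with distance
        return (1 - compression) * perspective + compression * constant_angle

    # With full compression, an item 4 m away is rendered at twice the scale
    # of one at 2 m, so both occupy the same portion of the user's view.
    print(edis_scale(4.0, compression=1.0))  # 2.0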
                                                                         also worth noting that the keyboard is a virtual one that the user
2.2 Commercial Applications
Many current implementations of web browsers for VR systems still involve a window in which the user must scroll vertically through results in a single dimension. Some attempt to mask this by immersing the user in a virtual world when browsing, or by allowing multiple windows to span the dimensions.

2.2.1 Oculus Browser. Facebook's implementation of an IVRE browser (https://developer.oculus.com/documentation/oculus-browser/) uses the standard model of result ordering, with point-and-click navigation, which is somewhat akin to desktop usage with the Google search engine. Switching between different types of content involves clicking on a different vertical tab (i.e., "All", "Images", "Maps" etc.). Navigation is limited to the raycasting [9] interaction, and there is no feature to manipulate the visual display size or distance, with the viewing range limited to just under 180 degrees.

An interesting feature is the ability to have multiple windows which make use of the surrounding virtual space, allowing users to switch their gaze between multiple result pages. Even so, Figure 2 shows only two web results visible in an entire window.

Figure 2: Oculus browser window showing results

2.2.2 Firefox Reality. The initial experience in the VR browser developed by Mozilla (https://mixedreality.mozilla.org/firefox-reality) is suggestion-driven, with the home page showing popular content. One feature allows the user to set a virtual world as a background behind the browsing windows, similar to the native "virtual room" provided in the Oculus space. This reminds the user that they are in virtual reality and adds to the immersive experience.

The linear nature of result ordering is still apparent here; however, this browser allows for an additional query input mechanism: voice search. The search results are nevertheless not specific to the VR environment, being the same ones obtained from an identical query on a PC.

One other feature of note in Firefox Reality is that it allows the user to configure the size of the window, adjusting it to their preference. This too allows for greater depth and immersion in the virtual reality environment.

2.2.3 SoftSpace. Softspace labels itself as a productivity application rather than a browser (https://www.soft.space/). Yet its browsing features still act as a wrapper around a standard search page, allowing the user to create multiple windows (i.e., tabs), placing and resizing them as they wish.

Movement is implemented as pulling oneself through the environment, which is more interactive than both point-and-click teleportation and navigating with the joystick. The resizing of windows by pulling both edges is very intuitive in IVREs. It is also worth noting that the keyboard is a virtual one that the user must be close to in order to type.

Regarding data layout, that is left up to the user, although they are still limited to browsing in a window and the linear search experience discussed previously.

3 VISION

3.1 Motivation and Overview
Our vision aims to move away from the one-dimensional, linear experience of modern search engines by developing a browsing application that presents results in a way that is easy for users to scan and that makes use of the additional degrees of freedom available. The software implementing our vision should make users feel more immersed in their virtual browsing environment by bringing them closer to, and more in control of, the data items. This should make the user more productive overall, in terms of memory and data organisation.
The application implementing our vision should allow a user to move past the query inputting phase quickly and efficiently, to then be greeted by many internet results around them. They should be able to rotate to view different result formats, or pull a different result format in front of them through a specific controller or gesture interaction. The user must be able to grab and resize individual results that they wish to focus on, and equally save or discard results as they desire.

3.2 Design Space
The following parts describe the different feature options available in various scenarios for the browsing experience.

3.2.1 Environment Interaction. The standard method of navigation in most IVREs today is a virtual beam protruding from the controller which the user may control as a cursor; we refer to this as raycasting [9]. This puts distance between the user and the content they are interacting with, which is unfavourable for creating an immersive experience. It can also be difficult to use in situations such as typing, as it requires precision.

Alternatively, we could allow the user to grab and scroll to move either themselves or the data. Allowing locomotion risks inducing motion sickness and may make it more difficult for users to focus on results. Manipulating the data instead conveys the idea that the user is in control of the data and where it is placed.

3.2.2 Query Input. At a high level, there are two options here: voice input and keyboard input.

Research implementations of voice input have gained positive feedback and have been shown to allow an input rate of 23.6 words per minute [8].

Keyboard input can be separated into two further options: a touchscreen-like keyboard projected in front of the user, which they may type on using raycasting, or a desktop-style keyboard which the user can virtually type on. Studies have found that in IVREs the latter transfers more of the user's muscle memory: typing on a touchscreen-like keyboard retains only 45% of the user's typing speed, whereas a virtual desktop keyboard maintains 60% [5].

3.2.3 Data Arrangement. We considered the following structures to arrange the data:
• Amphitheatre layout - Depicted in Figure 1, similar to the one mentioned in Section 2.1.2. The key advantage is that it allows the user to observe a clear ordering of results in multiple dimensions, in a way that is easy to scan.
• Virtual wall - This would appear from a third-person point of view as a cylinder surrounding the user. However, it may make the user feel claustrophobic, considering that we want to make them feel close to the content they are interested in.
• Hierarchical tree - This layout would involve creating semantic arcs between results and allowing the user to navigate results by pulling themself through many small tree-like structures. A viable alternative would be to connect web pages with the same domain (e.g., spaceneedle.com/about and spaceneedle.com/plan-your-visit).

3.2.4 Results Ordering. In an IVRE we have three physical dimensions, each with an axis on which results could be ordered. To ease understanding, we simplify the design to use two axes in the 3D space.

The primary axis encompasses two dimensions, directed upwards and away from the user's horizontal plane of sight. Naturally, we would order more applicable results (i.e. higher relevance) to appear closer to the user. The secondary axis would then run across the user, from left to right, and we could either order results in this manner or outwards from the centre.

The search APIs use numerous metrics to determine relevance, for example the number of users clicking on a source and the number of searches that match tags on a resource. The use of these metrics for result ordering is out of scope, as we are not intending to design a new search engine, but a browsing experience.

3.3 Design Choices and Implementation
Following careful consideration of the options discussed in the previous section, we now present the features that we will include in the application.

3.3.1 Environment Interaction. We aim to allow the user to interact with the data using virtual hands or controllers, enabling them to pull virtual delimiters for rows and columns to scroll, and to pick up individual data items they are interested in. The delimiters will be interactable with an open hand / trigger button, and the data items can be selected with a grabbing gesture / button.

As well as being able to pull results closer and scroll with their virtual hands, we want the user to be able to intuitively pick up results, resize them to view the content, dock important content, and discard undesired material virtually. The literature has shown this type of interaction to be beneficial, as it requires less preliminary learning and dexterity when navigating the environment [1].
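As a minimal illustration of this interaction scheme (our own sketch; the names Gesture, HandEvent, and the event values are hypothetical and not taken from any VR SDK), the mapping from hand events to scroll and pick-up actions could be dispatched as follows:

    # Hypothetical input-dispatch sketch for the interactions described above.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Gesture(Enum):
        OPEN_HAND = auto()   # or trigger button: engage a row/column delimiter
        GRAB = auto()        # or grab button: pick up a single result item

    @dataclass
    class HandEvent:
        gesture: Gesture
        target: str          # "delimiter" or "item", from a raycast/collision test
        delta: float = 0.0   # hand movement since the last frame, in metres

    def dispatch(event: HandEvent) -> str:
        """Translate a hand event into a browsing action (illustrative only)."""
        if event.gesture is Gesture.OPEN_HAND and event.target == "delimiter":
            return f"scroll results by {event.delta:+.2f} m"
        if event.gesture is Gesture.GRAB and event.target == "item":
            return "pick up item for resizing, docking or discarding"
        return "no action"

    print(dispatch(HandEvent(Gesture.OPEN_HAND, "delimiter", delta=0.15)))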
3.3.2 Query Input. The primary method of query input the tool exposes is voice search. The user will be able to speak into a virtual microphone to execute their query; this will be done via a speech-to-text engine which sends the resulting text to a search API. Since we aim to make this part of the application as fast and efficient as possible for the user, this is the most suitable option [8].

Voice search is not always 100% reliable, due to factors such as different accents and microphone quality. Thus, as a contingency, the secondary method of query input will be a virtual desktop keyboard.
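The intended flow, speech to text to search results, is summarised in the outline below. This is a sketch of the design rather than an implementation; recognize_speech and search_web are placeholders for whichever speech-to-text engine and search API are ultimately used.

    # Outline of the query pipeline: microphone audio -> text -> search results.
    # recognize_speech() and search_web() are placeholders for the real engines.

    def recognize_speech(audio: bytes) -> str:
        """Send captured microphone audio to a speech-to-text engine."""
        raise NotImplementedError  # e.g. a cloud speech service

    def search_web(query: str) -> list[dict]:
        """Send the transcribed query to a search API, returning ranked results."""
        raise NotImplementedError  # e.g. a commercial web search API

    def execute_voice_query(audio: bytes) -> list[dict]:
        query = recognize_speech(audio)
        if not query:               # transcription failed: fall back to the
            return []               # virtual desktop keyboard described above
        return search_web(query)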
3.3.3 Data Arrangement. Out of the potential methods for presenting results, the most appealing one that suits our vision is the amphitheatre, for the reasons discussed in Section 3.2.3. This layout has been shown to work well for users in the spatial memory setting [4], allowing them to recall the location of results faster.

The layout also solves the problem of filtering different types of content (such as images and shopping content), as we can present content barriers in the 360° space: for example, the user may be able to look 90° to the right and see image results, whereas directly ahead of them are web results. We intend to introduce UI elements indicating which type of result lies in which direction.

Existing extended reality (XR) applications have attempted similar styles of data organisation, such as Personal Cockpit [3], which targets head-worn augmented reality (AR) displays, presenting mobile-like applications in a hemisphere in front of the user. Since this has worked effectively at organising information in front of the user in that setting, we believe it can be transferred to VR in a web search context.
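To illustrate, the following sketch (our own; the sector angles, radius, and spacing constants are illustrative assumptions) places result items on an amphitheatre arc around the user, with content types separated into the angular sectors described above:

    import math

    # Angular sectors per content type: web ahead, images 90 degrees right.
    SECTORS = {"web": 0.0, "images": 90.0, "shopping": -90.0}

    def amphitheatre_position(result_type: str, row: int, col: int,
                              radius: float = 2.0, row_height: float = 0.4,
                              col_spacing_deg: float = 12.0):
        """3D position (x, y, z) of the item at (row, col) in its sector.

        Rows step upwards and slightly away from the user (the primary axis);
        columns fan out left/right within the sector (the secondary axis).
        """
        azimuth = math.radians(SECTORS[result_type] + col * col_spacing_deg)
        r = radius + 0.2 * row              # higher rows recede a little
        x = r * math.sin(azimuth)           # left/right of the user
        y = 1.5 + row * row_height          # eye height plus row offset
        z = r * math.cos(azimuth)           # distance ahead of the user
        return (x, y, z)

    # The most relevant web result sits directly ahead at eye height:
    print(amphitheatre_position("web", row=0, col=0))  # (0.0, 1.5, 2.0)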
3.3.4 Results Ordering. Since most search engines and APIs have optimised search results for a single metric, relevance, this is the metric by which we will order results on the primary axis.

We now propose the metrics we may use for ordering results along the secondary axes.
For web results, a user usually wishes to learn the sought-after information as quickly as possible; hence we will order results based on the average time to read the main content of a webpage, or by the most recently updated sources. Images will be ordered by either size or date of appearance. There are many potential modifiers for sorting shopping results, the most obvious being price.
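As a minimal sketch of this two-axis ordering (our own illustration; the field names relevance, read_time, size, and price are assumptions, not fields of any specific search API), results could be ranked by relevance along the primary axis and by the chosen secondary metric along the other:

    # Grid assignment sketch: rows by relevance, columns by a secondary metric.
    SECONDARY_METRICS = {
        "web": lambda r: r["read_time"],     # fastest reads first
        "images": lambda r: -r["size"],      # largest images first
        "shopping": lambda r: r["price"],    # cheapest offers first
    }

    def arrange(results: list[dict], result_type: str, columns: int = 5):
        """Return (row, col, item) grid coordinates for each result.

        The primary axis (rows) keeps the API's relevance ranking; within
        each row, the secondary axis (columns) orders by the chosen metric.
        """
        ranked = sorted(results, key=lambda r: -r["relevance"])
        grid = []
        for i in range(0, len(ranked), columns):
            row = sorted(ranked[i:i + columns],
                         key=SECONDARY_METRICS[result_type])
            grid.extend((i // columns, c, item) for c, item in enumerate(row))
        return grid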
3.4 Potential Additional Features
The Oculus Quest headsets also allow explicit tracking of the user's physical hands for use as controllers. We will use this to add to the level of immersion.

Some existing headsets provide eye-tracking capabilities, which could be used to create landmarks where the user's vision has dwelled or, instead, to collect data that feeds into the relevance metric.

Lastly, taking inspiration from a recent study [10], this application would enable a mode for users sitting down which amplifies rotation, allowing them to experience the entire 360° space when their field of movement is restricted.
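A minimal sketch of such amplified rotation follows (our own illustration of the general idea behind [10]; the gain value is an assumption): the virtual yaw is the physical yaw multiplied by a gain, so a seated user with a limited physical range can still reach the full 360° of virtual space.

    def amplified_yaw(physical_yaw_deg: float, gain: float = 2.0) -> float:
        """Map physical head/torso yaw to virtual yaw.

        With gain = 2.0, turning 90 degrees in the chair rotates the view by
        180 degrees, so a seated user can reach the whole 360-degree space.
        """
        return (physical_yaw_deg * gain) % 360.0

    print(amplified_yaw(90.0))  # 180.0: half the virtual circle from a quarter turn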

3.5 Prototyping and Implementation
An initial prototype scene demonstrating the interaction and data layout was made on an Oculus Quest 2 device using the BYLDR software. Figure 3 depicts the 0° and 100° points of view of a user executing the search string "space needle".

Figure 3: Point-of-view images from the prototype scene. (a) Web results, microphone and keyboard in front of the user. (b) Image results to the right of the user.

Following this, a high-fidelity prototype shall be made using Microsoft's prototyping software for VR devices, Maquette (https://www.maquette.ms/). This will then be used as a reference to develop the application in Unity, using Microsoft's speech-to-text and search APIs for transcribing search queries and retrieving web results respectively.
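As a concrete sketch of this back end (assuming the Azure Cognitive Services Speech SDK and the Bing Web Search v7 REST endpoint; key names and the exact response fields should be checked against the current Microsoft documentation), the transcription and retrieval steps might look like:

    # Sketch of the planned Microsoft-based back end (assumptions noted above).
    import requests
    import azure.cognitiveservices.speech as speechsdk

    def transcribe(speech_key: str, region: str) -> str:
        """Capture one utterance from the default microphone and return text."""
        config = speechsdk.SpeechConfig(subscription=speech_key, region=region)
        recognizer = speechsdk.SpeechRecognizer(speech_config=config)
        return recognizer.recognize_once().text

    def bing_search(search_key: str, query: str) -> list[dict]:
        """Query the Bing Web Search v7 API and return the web page results."""
        response = requests.get(
            "https://api.bing.microsoft.com/v7.0/search",
            headers={"Ocp-Apim-Subscription-Key": search_key},
            params={"q": query},
            timeout=10,
        )
        response.raise_for_status()
        return response.json().get("webPages", {}).get("value", [])

    # results = bing_search(SEARCH_KEY, transcribe(SPEECH_KEY, "uksouth"))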
3.6 Evaluation Plan
It is important to note that we are not comparing this application across different media, against a desktop search for example. Instead, we will ask users to find a described piece of information, searching for it using our software as well as using Oculus Browser and Firefox Reality. Several warm-up tasks will be given to all users to ensure that they are at a similar level of skill when navigating the environment.

We will evaluate the user experience using a framework developed for use in immersive virtual environments [12], which presents a detailed questionnaire containing 10 measurement scales, including usability, immersion, and cybersickness. We may also choose to observe our own additional metrics that are specific to the search scenario, such as the time for a user to successfully find the target information.
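For the search-specific metrics, a simple per-task timer would suffice; the sketch below is our own, with hypothetical task identifiers, and records the time to successful retrieval:

    import time

    class TaskTimer:
        """Record how long a participant takes to find the target information."""
        def __init__(self):
            self.start_times, self.durations = {}, {}

        def start(self, task_id: str):
            self.start_times[task_id] = time.monotonic()

        def found(self, task_id: str):
            self.durations[task_id] = time.monotonic() - self.start_times[task_id]

    timer = TaskTimer()
    timer.start("find-space-needle-height")   # hypothetical warm-up task
    # ... participant searches in the prototype ...
    timer.found("find-space-needle-height")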
4 CONCLUSION
We have presented an idea for an IVRE browser application that is radically different from the common one-dimensional experience seen on desktop PCs and in other VR browsers. The application is designed around the user's capabilities with a head-mounted VR device and makes use of as much as possible of the additional space available through this medium. We have also seen how other research ideas have contributed to the feature development of this application, and have explored the browser applications for IVREs commercially available to date.
REFERENCES
[1] Ferran Argelaguet and Carlos Andujar. 2013. A survey of 3D object selection techniques for virtual environments. Computers & Graphics 37, 3 (2013), 121–136. https://www.sciencedirect.com/science/article/pii/S0097849312001793
[2] Soussan Djamasbi and Adrienne Hall-Phillips. 2014. Visual Search. In Eye Tracking in User Experience Design, Jennifer Romano Bergstrom and Andrew Jonathan Schall (Eds.). Morgan Kaufmann, Boston, 27–45. http://www.sciencedirect.com/science/article/pii/B9780124081383000029
[3] Barrett M. Ens, Rory Finnegan, and Pourang P. Irani. 2014. The Personal Cockpit: A Spatial Interface for Effective Task Switching on Head-Worn Displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). Association for Computing Machinery, New York, NY, USA, 3171–3180. https://doi.org/10.1145/2556288.2557058
[4] BoYu Gao, Byungmoon Kim, Jee-In Kim, and HyungSeok Kim. 2018. Amphitheater Layout with Egocentric Distance-Based Item Sizing and Landmarks for Browsing in Virtual Reality. International Journal of Human–Computer Interaction 35, 10 (2018).
[5] J. Grubert, L. Witzani, E. Ofek, M. Pahud, M. Kranz, and P. O. Kristensson. 2018. Text Entry in Immersive Head-Mounted Display-Based Virtual Reality Using Standard Keyboards. In 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). 159–166.
[6] Cathal Gurrin, Klaus Schoeffmann, Hideo Joho, Andreas Leibetseder, Liting Zhou, Aaron Duane, Duc-Tien Dang-Nguyen, Michael Riegler, Luca Piras, Minh-Triet Tran, et al. 2019. Comparing Approaches to Interactive Lifelog Search at the Lifelog Search Challenge. ITE Transactions on Media Technology and Applications 7, 2 (2019).
[7] Charles-Antoine Julien, Catherine Guastavino, France Bouthillier, and John E. Leide. 2013. Subject Explorer 3D: A Virtual Reality Collection Browsing and Searching Tool. Proceedings of the Annual Conference of CAIS (2013).
[8] S. Pick, A. S. Puika, and T. W. Kuhlen. 2016. SWIFTER: Design and Evaluation of a Speech-based Text Input Metaphor for Immersive Virtual Environments. In IEEE Symposium on 3D User Interfaces.
[9] Krzysztof Pietroszek. 2018. Raycasting in Virtual Reality.
[10] Eric D. Ragan, Siroberto Scerbo, Felipe Bacim, and Doug A. Bowman. 2017. Amplified Head Rotation in Virtual Reality and the Effects on 3D Search, Training Transfer, and Spatial Orientation. IEEE Transactions on Visualization and Computer Graphics 23, 8 (2017).
[11] Jason Rambach, Gergana Lilligreen, Alexander Schäfer, Ramya Bankanal, Alexander Wiebel, and Didier Stricker. 2020. A survey on applications of augmented, mixed and virtual reality for nature and environment. arXiv:cs.HC/2008.12024
[12] Katy Tcha-Tokey, Olivier Christmann, Emilie Loup-Escande, and Simon Richir. 2016. Proposition and Validation of a Questionnaire to Measure the User Experience in Immersive Virtual Environments. The International Journal of Virtual Reality 16 (2016), 33–48.