<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Exploring hybrid reality environments for overview+detail tasks in immersive data visualisation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniel Ablett</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Swoyen Suwal</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrew Cunningham</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bruce H. Thomas</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Australian Research Centre for Interactive and Virtual Environments, University of South Australia</institution>
          ,
          <addr-line>Adelaide</addr-line>
          ,
          <country country="AU">Australia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>There have been many advances in the field of Augmented Reality (AR), but research on the combined use of AR and physical displays (known as Hybrid Reality Environments) remains limited. We explore hybrid reality environments that combine Augmented Reality with high-density displays for large graph visualisation. We present the design of such a system and early observations from a pilot study involving navigation within it.</p>
      </abstract>
      <kwd-group>
        <kwd>Hybrid Reality Environments</kwd>
        <kwd>Augmented Reality</kwd>
        <kwd>Immersive Analytics</kwd>
        <kwd>Visualisation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>There currently exists no ultimate display technology that has high density, 3D stereoscopy, a 360-degree field of view, and natural interaction. Conversely, the demand for visualising complex data accurately and in high fidelity, as espoused by the emerging field of Immersive Analytics, is growing. In this work, we begin to explore a form of hybrid reality environment consisting of the combination of multiple display technologies, across conventional displays and Augmented Reality (AR), to support presenting large graph visualisations.</p>
      <p>Traditional physical displays can provide a two-dimensional (2D) view into the three-dimensional (3D) world. However, such projection loses stereoscopy and, as such, important information and cues may be lost in the resulting view. This is apparent in big data visualisation, where the high number of data points makes the visualisation too dense to usefully interpret. Regular monitors have a small viewport which limits the information that can be displayed on them. To overcome this limitation, large displays like a CAVE system can be used to maximise the information displayed. However, they are still limited to projecting information onto (or beyond) the display wall.</p>
      <p>AR and Virtual Reality (VR) can be used to overcome some of these limitations, as information is presented in stereoscopy and can appear directly in the user’s sphere of interaction. This enables users to view a 3D visualisation from different perspectives by walking around it and fully utilising the spatial capabilities that VR/AR provides [1]. Using a 3D display such as the Hololens, for example, can improve the information presented by disambiguating the clutter found in 2D displays [2], making the information more readable. The drawback with these types of displays is their limited resolution, field of view, and low processing power (in the case of AR). While most Immersive Analytics research focuses on either head-mounted displays or CAVE-style displays [1] separately, there is value in leveraging the affordances of both technologies to address the shortcomings of any single technology.</p>
      <p>In this work, we are interested in using head-mounted AR devices and high-resolution displays to support Immersive Analytics of large graph visualisations. Immersive Analytics is “the use of engaging, embodied analysis tools to support data understanding and decision making” [3]. We describe a system that visualises large graph data using an array of large 4K displays in a CAVE-like arrangement to provide a detail view of the graph, with a 3D overview of the graph provided by a Microsoft Hololens. We then present some early pilot observations that will guide future development.</p>
      <sec id="sec-1-1">
        <title>2. Background</title>
        <p>APMAR’22: The 14th Asia-Pacific Workshop on Mixed and Augmented Reality, Dec. 02-03, 2022, Yokohama, Japan. Corresponding author: daniel.ablett@mymail.unisa.edu.au (D. Ablett); andrew.cunningham@unisa.edu.au (A. Cunningham); bruce.thomas@unisa.edu.au (B. H. Thomas); https://andrewc.me/ (A. Cunningham); ORCID 0000-0003-2536-3011 (A. Cunningham), 0000-0002-9148-085X (B. H. Thomas).</p>
        <p>Data generated by technologies has grown progressively throughout the years. This has resulted in a huge amount of digital structured and unstructured data, also known as “Big Data”. It is difficult to make sense of these large sets of data without any medium of conveying information. Data visualisation plays a key role in making humans understand the complexity of, and the links between, the data by externalising it through computer visualisation and human-computer interactions [4].</p>
        <p>Information Visualisation (InfoVis) is a specific area of data visualisation concerned with the visualisation of abstract data. Such abstract data can include graphs or network data (such as social networks [5]), which compose a significant proportion of big data. Graph visualisation involves projecting network data, often as nodes and links between nodes, using a particular layout [6]. It is recognised in InfoVis that users benefit from an overview of the data to orientate themselves and identify points of interest. Overview+Detail is a set of InfoVis techniques that use multiple views, where one view shows an overview and another shows a detail view, linked through interaction and visual cues [7].</p>
        <sec id="sec-1-1-1">
          <title>2.1. Immersive visualisation</title>
          <p>Throughout the last three decades, InfoVis has explored concepts of 3D visualisation on 2D displays [7], including 3D graph visualisation [8]. However, more recent research has been exploring the affordances of modern Virtual Reality (VR) and Augmented Reality (AR) technology for InfoVis tasks. Immersive Analytics is the area of research exploring the use of immersive technologies such as VR and AR to support data understanding and decision making [9, 10]. The aim of Immersive Analytics is to solve the problem of interpreting big data visualisation with the use of natural (or embodied) interaction techniques. ImAxes [11] is one such Immersive Analytics tool for VR that demonstrates this characteristic of directly manipulating data by grabbing and moving it.</p>
          <p>Graph visualisation has been explored in Immersive Analytics. Drogemuller et al. [12] developed a system for visualising large network data using VR. They subsequently evaluated techniques for navigating these networks [5]. One of the key techniques explored by the authors was a form of Overview+Context known as Worlds-in-Miniature (WIM) [13], where a miniature version of the world (or in that particular case, a miniature graph) is presented to provide context to the user.</p>
        </sec>
        <sec id="sec-1-1-2">
          <title>2.2. Hybrid reality environments</title>
          <p>Febretti et al. [14] define hybrid reality environments as having the following characteristics: C1) a large high-resolution display “approaching the sphere of influence and perception of a human”; C2) stereoscopic support to visualise 3D data; C3) natural interactions; C4) space to support collocated collaboration; and C5) a software architecture to integrate the displays and interaction. Febretti et al. presented the CAVE2 system as addressing these characteristics. CAVE2 is a Cave Automatic Virtual Environment (CAVE) with stereoscopic high-density displays and tracked head and wand interaction. With the CAVE2 system, information could be presented more effectively to improve spatial understanding.</p>
          <p>It is worth noting that while the CAVE2 demonstrates Febretti et al.’s hybrid reality characteristics using a single display technology, the definition does not preclude the use of multiple display technologies to address the characteristics. SecondSight [15] demonstrates a mobile phone coupled with an AR HMD to visualise data, in what the authors refer to as a hybrid interface. However, it should be noted that SecondSight does not meet characteristic C1 (a display approaching the sphere of perception of a human) of Febretti et al.’s definition of hybrid reality. One study [16] found that the interaction techniques between devices in a hybrid reality environment can be inconsistent, and discussed having to implement different interaction methods for the different devices in the environment. They built a framework providing a unified interaction scheme for the different displays in the hybrid reality environment, which is widely used for CAVE2 systems.</p>
        </sec>
      </sec>
      <sec id="sec-1-2">
        <title>3. Hybrid reality visualisation system</title>
        <p>It can be recognised that various display modalities (such as traditional displays or the Microsoft Hololens) have different benefits and shortcomings suited to particular tasks. We sought to overcome the limitation of a single modality by coupling the 2D environment of physical displays and the 3D environment of the HoloLens, taking advantage of the capabilities of both systems while overcoming their individual limitations. This encourages collaborative use of the system and reduces the potential difficulty in analysing and interpreting the big data visualisation.</p>
        <p>We developed a hybrid reality system for visualising large graph data (see Figure 1). This system was comprised of two display modalities: 1) four high-resolution 4K displays arranged in an arc (referred to as the CAVE for brevity’s sake), and 2) a Microsoft Hololens worn by the user. This combination of display technologies addresses C1–C4 of Febretti et al.’s hybrid reality. Our integrated system is shown in Figure 2.</p>
        <p>The last characteristic (C5) of a hybrid reality system requires that the displays and interaction are part of a single synchronised environment, and that the displays project aspects of that shared environment. From an implementation perspective, this requires a networking solution to synchronise the displays, and design considerations as to which aspects of the environment should appear in each display modality.</p>
        <p>In the rest of this section, we describe the design of this system: first the general architecture that supports hybrid reality, followed by the specific design of the graph visualisation.</p>
        <sec id="sec-1-2-1">
          <title>3.1. System Architecture</title>
          <p>Our system is developed in Unity 2019. The system architecture is composed of a positional tracking layer, a networking layer, a display configuration layer, and an interaction layer. For positional tracking, we use the OptiTrack Flex motion camera system. All displays within the environment, including the CAVE displays and the Hololens, are tracked using OptiTrack.</p>
          <p>For networking and display configuration, we used the High-End Visualisation System (HEVS), a Unity framework developed by the University of New South Wales’ EPICentre for running synchronous applications [17]. HEVS is a high-performance networking solution designed as a framework for synchronising 3D environments with traditional displays. HEVS uses a JSON configuration file to define the position and relative orientation of the displays.</p>
          <p>To enable hybrid reality environments, we expanded the HEVS framework to support the HoloLens. As the Hololens is a moving display, this required adapting the static display configuration of HEVS to support moving frames of reference. This modification forms the foundation of our interaction layer. The Hololens integration affords two key aspects of interaction: 1) the system is able to track the user and their viewing direction in the physical environment, projecting the CAVE display from their point of view; and 2) the natural hand interaction present in the Hololens can be used to interact with the virtual environment, including the CAVE displays.</p>
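          <p>The adaptation above can be sketched as follows. This is a minimal illustration written for this article, assuming a HEVS-like display configuration; the class and method names are hypothetical and not part of the HEVS API. A static display keeps the pose given in the configuration file, while a moving display such as the Hololens has its pose refreshed every frame from the tracker:</p>

```python
import numpy as np

# Hypothetical sketch: a display configuration that supports both static
# frames of reference and moving ones (e.g. a head-mounted display).
# None of these names come from the actual HEVS API.

class Display:
    def __init__(self, position, yaw_deg, moving=False):
        self.position = np.asarray(position, dtype=float)
        self.yaw = np.radians(yaw_deg)   # orientation about the vertical axis
        self.moving = moving

    def update_pose(self, position, yaw_deg):
        """Called every frame from the tracker; ignored for static displays."""
        if self.moving:
            self.position = np.asarray(position, dtype=float)
            self.yaw = np.radians(yaw_deg)

    def world_to_display(self, point):
        """Transform a world-space point into this display's frame."""
        d = np.asarray(point, dtype=float) - self.position
        c, s = np.cos(-self.yaw), np.sin(-self.yaw)
        rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
        return rot @ d
```

          <p>A static CAVE panel would be constructed once from the JSON configuration, while the Hololens entry would be constructed with <italic>moving</italic> enabled and driven by the OptiTrack pose each frame.</p>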
          <p>This architecture enables a hybrid reality environment
composed of multiple display modalities. A user is able
to use the Hololens to visualise low-resolution but 3D
information, while the CAVE can visualise high-density
information but in 2D projection, all in a synchronised
and shared virtual environment. Manipulating objects
in the shared virtual environment is reflected across all
display modalities.</p>
          <p>Selective displays: One of our key insights when
developing this system was that, while the displays should
be in a shared synchronised environment to be
considered hybrid reality, the displays do not need to, nor should
they, project all aspects of the environment. The displays
should project aspects of the environment that they are
most effective at displaying. For example, the Hololens
has a relatively low resolution and a small field-of-view
but can do stereoscopic projection; as such, it is better
suited to showing small 3D aspects of the environment.
To enable this, we added support to tag objects within the
virtual environment to appear in displays with specific
capabilities.</p>
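          <p>A minimal sketch of this tagging scheme follows. This is our own illustration, not the system’s code, and the capability names are assumptions:</p>

```python
# Objects are tagged with the display capabilities they require; each
# display renders only the objects whose requirements it can satisfy.

STEREO, HIGH_DENSITY = "stereoscopic", "high-density"

class SceneObject:
    def __init__(self, name, requires):
        self.name = name
        self.requires = set(requires)   # capabilities needed to show this object

def visible_on(display_capabilities, scene):
    """Return the objects a display should render, given its capabilities."""
    caps = set(display_capabilities)
    return [o.name for o in scene if o.requires <= caps]

scene = [SceneObject("wim_overview", {STEREO}),        # 3D WIM: HoloLens only
         SceneObject("node_labels", {HIGH_DENSITY})]   # dense text: CAVE only

hololens = visible_on({STEREO}, scene)        # -> ["wim_overview"]
cave = visible_on({HIGH_DENSITY}, scene)      # -> ["node_labels"]
```
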
        </sec>
        <sec id="sec-1-2-2">
          <title>3.2. Graph visualisation in hybrid reality</title>
        </sec>
      </sec>
      <sec id="sec-1-3">
        <p>To explore our hybrid reality system, we applied it to the visual analytics task of graph visualisation. We created a virtual environment with a spherical graph layout, with the user placed in the centre of the sphere. The CAVE displays sit in an arc within the sphere, thus projecting some of the graph layout onto the CAVE displays. An overview visualisation, in the form of a Worlds-in-Miniature (WIM), sits in the centre of the environment. Graph nodes were presented in high detail with textual labels on the CAVE display, due to its high pixel density, while only an abstract overview was presented in the WIM.</p>
        <p>We chose a spherical graph layout as they have previously been demonstrated to be more efficient than 2D layouts for certain tasks in immersive environments [18]. To create the layout, we first apply a 3D force-directed layout [19] to the graph. Then, for each node n in the layout, we project its position onto the surface of a sphere:
p(n) = c + r (n − c) / |n − c|   (1)</p>
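        <p>Equation (1) can be sketched in a few lines; this is a reimplementation for illustration, with variable names of our own choosing:</p>

```python
import numpy as np

# Sketch of Equation 1: each node of the force-directed layout is pushed
# onto the surface of a sphere of radius r centred at the layout centre c.

def project_to_sphere(nodes, c, r=4.0):
    d = nodes - c                               # n - c for every node
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    return c + r * d / dist                     # p(n) = c + r (n - c) / |n - c|

layout = np.array([[1.0, 0.0, 0.0], [2.0, 2.0, 1.0]])
centre = np.zeros(3)
on_sphere = project_to_sphere(layout, centre)   # every point now lies 4 m from centre
```
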
      </sec>
      <sec id="sec-1-4">
        <p>where c is the centre of the force-directed graph layout and r is the radius of the spherical projection. We set r to 4 m, which places the circumference of the sphere projection outside the arc of our CAVE displays.</p>
        <p>Overview (Worlds-in-Miniature) design: To provide an overview visualisation, we include a Worlds-in-Miniature (WIM) in the centre of the environment. This WIM is projected into the Hololens display to leverage the Hololens’ affordances of direct hand interaction and stereoscopic 3D visualisation. Users could rotate the WIM by pinching and dragging it with their fingers.</p>
        <p>We used an iterative design process to develop the graph visualisation and WIM. An initial issue we came across was the user not knowing which nodes of the WIM could be seen in the CAVE display. Initially, we solved this by drawing the relative location of the CAVE displays as a green outline inside the sphere of the WIM. We further highlighted nodes in orange when they were visible on the CAVE display, as shown in Figure 3.</p>
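        <p>The visibility test behind this highlighting can be sketched as below. This is our own reconstruction, assuming the CAVE arc is described by an azimuth range around the sphere centre; the 120-degree extent is an assumed value, not a measured property of the actual setup:</p>

```python
import math

# A node counts as visible on the CAVE when its azimuth from the sphere
# centre falls within the angular extent of the display arc.

def on_cave(node, centre, arc_centre_deg=0.0, arc_extent_deg=120.0):
    dx, dz = node[0] - centre[0], node[2] - centre[2]
    azimuth = math.degrees(math.atan2(dx, dz))            # horizontal angle to node
    diff = (azimuth - arc_centre_deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    return abs(diff) <= arc_extent_deg / 2.0

centre = (0.0, 0.0, 0.0)
nodes = [(0.0, 1.0, 4.0), (4.0, 0.0, 0.0), (0.0, 0.0, -4.0)]
highlight = [n for n in nodes if on_cave(n, centre)]      # tint these orange in the WIM
```
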
      </sec>
    </sec>
    <sec id="sec-2">
      <title>4. Pilot study</title>
      <sec id="sec-2-1">
        <p>We ran a pilot study with three participants to gain some early observations of how hybrid reality environments could be used to navigate a complex graph visualisation. View navigation (in this case, rotating the spherical graph to locate particular nodes) is a fundamental task in visualisation and a good task to examine for hybrid display modalities. During the pilot, participants performed two related navigation tasks:
• Minimal task: Participants were asked to orientate the graph so that the minimum number of nodes possible were present on the high-resolution displays.
• Maximal task: Participants were asked to orientate the graph so that the maximum number of nodes were present on the high-resolution displays.</p>
        <p>The graphs were comprised of 60 nodes, with some nodes labelled with a letter. While orienting the graph, participants had to ensure that nodes labelled with the letters A to H were present in the CAVE display. This ensured that participants had to use both display modalities, and leveraged the affordance of the high-density displays to depict text. Participants were given a two-minute time limit for each task, displayed to the participant in the centre of the CAVE display. We measured task time and error, with error calculated as:
e = |o − a|   (2)</p>
      </sec>
      <sec id="sec-2-2">
        <p>where o was the optimal number of nodes that should be present in the CAVE display, and a was the actual number of nodes the participant had in the CAVE display.</p>
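        <p>As a worked example of Equation (2), with illustrative numbers of our own:</p>

```python
# The error measure is the absolute difference between the optimal (o)
# and actual (a) number of nodes left on the CAVE display for a task.

def navigation_error(optimal, actual):
    return abs(optimal - actual)

navigation_error(8, 12)   # a participant leaving 12 nodes when 8 was optimal scores 4
```
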
        <sec id="sec-2-2-1">
          <title>4.1. Conditions</title>
          <p>The pilot had two display modalities as conditions to
compare our hybrid reality environment to conventional
displays:
• HoloLens: A 3D overview was presented in the centre
of the room using the HoloLens.
• Laptop display: A 3D overview was presented in the
centre of the room on a 2D laptop display.</p>
        </sec>
      </sec>
      <sec id="sec-2-3">
        <p>To keep the conditions controlled, both tasks used the HoloLens’ gesture recognition for input; otherwise the input would not have been suitably controlled [20].</p>
      </sec>
      <sec id="sec-2-4">
        <title>4.2. Subjective questionnaire</title>
        <p>At the end of the study, participants were asked to fill in a questionnaire composed of a series of Likert questions:
Q1 I am proficient in using the HoloLens (1–7)
Q2 I found the HoloLens controls easy to use (1–7)
Q3 The rotation input was easy to use with the HoloLens display (1–7)
Q4 The rotation input was easy to use with the Laptop display (1–7)
Q5 The graph was easy to visually analyze in the HoloLens (1–7)
Q6 The graph was easy to visually analyze in the Laptop display (1–7)
Q7 Which device do you prefer more to view the graph?
Q8 Which device do you think you were more accurate with?
Q9 Which device do you think you were faster with?
Q10 Overall the HoloLens task was (SMEQ 0–150)
Q11 Overall the Laptop task was (SMEQ 0–150)
Q12 Do you have any comments about the methods, or anything related to the tasks?</p>
      </sec>
      <sec id="sec-2-9">
        <p>Questions Q1 and Q2 were used to gauge an understanding of how proficient the user is with the HoloLens, as difficulty handling the controls may have an effect on the results. Q10 and Q11 asked the user to rate how difficult the task was using a Subjective Mental Effort Questionnaire (SMEQ) [21]. The SMEQ asks the user to give a value from 0 to 150 to indicate how hard a task was to do. SMEQ was chosen over a simple Likert scale because it has been shown to be easy for users to complete and reliable [21].</p>
        <sec id="sec-2-9-1">
          <title>4.3. Design and procedure</title>
          <p>Participants experienced 2 (display modalities) × 6 (graphs). To ensure robustness when performing the study, we adhered to a script, in which we explain the purpose of the study and how to use the interaction technique. We also run through some training, where we explain how to use the drag gesture to rotate the sphere. There is also a training task for each condition/task pair (4 in total) before the actual tasks began. Following the study, participants were provided the subjective survey to fill in.</p>
        </sec>
        <sec id="sec-2-9-2">
          <title>4.4. Observations</title>
          <p>It is important to acknowledge that with such a small participant size, conclusive findings are hard to draw; however, we believe it is still useful to draw qualitative observations from the pilot to inform further design.</p>
          <p>Table 1. Pilot study summary results:
Condition | Task | Average Time | Average Error
HoloLens | Minimal | 45.76 s | 8.8
Laptop | Minimal | 39.37 s | 7.8
HoloLens | Maximal | 32.09 s | 12
Laptop | Maximal | 25.25 s | 8.8</p>
          <p>Performance: On average across the Minimal and Maximal tasks, the HoloLens was slower and had more error than the Laptop (see Table 1). There were only three users, so these results do not carry much weight; however, it is still worth exploring why this may be. In the post-study questionnaire, one participant said that the “small field of view of the HoloLens made it less useful than the Laptop’s physical display”. Given the nature of the task, this is a significant issue: when a user looks at the CAVE display, the sphere may be out of the view of the HoloLens, while they may still see it in their peripheral vision on the Laptop.</p>
          <p>When looking at each user individually, the results are not as consistent as the summary suggests. Looking at the average error, two out of three users had less error using the HoloLens for the Minimal task. It is also worth noting that one user was faster with the HoloLens for the Minimal task and another was faster with the HoloLens for the Maximal task.</p>
          <p>Participant movement: Movement around the sphere and WIM was encouraged; however, all three users showed an insignificant amount of movement. One user moved around the sphere for a single task, but the other users did not move during the HoloLens task at all.</p>
          <p>Subjective feedback: There were not enough users in the pilot study to find any patterns from the questionnaire. Even so, there was no consensus in the subjective answers for Q7, Q8 and Q9. However, every participant did think that the graph was easier to visually analyse using the HoloLens (Q5 and Q6). The SMEQ value for each user was equal or higher for the HoloLens compared to the Laptop. This would imply that having a 3D sphere rather than a flat view of the sphere involves more mental effort, something found in other studies [22].</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>5. Conclusions and future work</title>
      <p>Hybrid reality environments show promise for specific Immersive Analytics tasks. Through our design, we recognised the value of a single shared environment across the displays; however, those displays should only show aspects of the environment appropriate for the affordances of that particular display technology. For example, the Hololens may be suited to visualising a WIM in the environment to accommodate its limited field of view.</p>
      <p>During the pilot study, we noticed that the users did not move very often from their starting location; it may be worth exploring techniques to encourage them to move around the space and leverage the density of the displays further. In the future, we plan to address these shortcomings and run a full study to understand the benefits of such environments.</p>
    </sec>
    <sec id="sec-4">
      <title>References</title>
      <p>[1] G. Cliquet, M. Perreira, F. Picarougne, Y. Prié, T. Vigier, Towards hmd-based immersive analytics, in: Immersive Analytics Workshop, IEEE VIS 2017, 2017.
[2] C. Ware, P. Mitchell, Visualizing graphs in three dimensions, ACM Transactions on Applied Perception (TAP) 5 (2008) 1–15.
[3] T. Dwyer, K. Marriott, T. Isenberg, K. Klein, N. Riche, F. Schreiber, W. Stuerzlinger, B. H. Thomas, Immersive analytics: An introduction, in: Immersive Analytics, Springer, 2018, pp. 1–23.
[4] J. Moorthy, R. Lahiri, N. Biswas, D. Sanyal, J. Ranjan, K. Nanath, P. Ghosh, Big data: prospects and challenges, Vikalpa 40 (2015) 74–96.
[5] A. Drogemuller, A. Cunningham, J. Walsh, M. Cordeil, W. Ross, B. Thomas, Evaluating navigation techniques for 3d graph visualizations in virtual reality, in: 2018 International Symposium on Big Data Visual and Immersive Analytics (BDVA), IEEE, 2018, pp. 1–10.
[6] J. Díaz, J. Petit, M. Serna, A survey of graph layout problems, ACM Computing Surveys (CSUR) 34 (2002) 313–356.
[7] M. Card, Readings in information visualization: using vision to think, Morgan Kaufmann, 1999.
[8] A. Cunningham, K. Xu, B. Thomas, Seeing more than the graph: evaluation of multivariate graph visualization methods, in: Proceedings of the International Conference on Advanced Visual Interfaces, 2010, pp. 429–429.
[9] K. Marriott, F. Schreiber, T. Dwyer, K. Klein, N. H. Riche, T. Itoh, W. Stuerzlinger, B. H. Thomas, Immersive analytics, volume 11190, Springer, 2018.
[10] U. Engelke, M. Cordeil, A. Cunningham, B. Ens, Immersive analytics, in: SIGGRAPH Asia 2019 Courses, 2019, pp. 1–156.
[11] M. Cordeil, A. Cunningham, T. Dwyer, B. H. Thomas, K. Marriott, Imaxes: Immersive axes as embodied affordances for interactive multivariate data visualisation, in: Proceedings of the 30th annual ACM symposium on user interface software and technology, 2017, pp. 71–83.
[12] A. Drogemuller, A. Cunningham, J. Walsh, W. Ross, B. H. Thomas, Vrige: Exploring social network interactions in immersive virtual environments, ????
[13] R. Stoakley, M. J. Conway, R. Pausch, Virtual reality on a wim: interactive worlds in miniature, in: Proceedings of the SIGCHI conference on Human factors in computing systems, 1995, pp. 265–272.
[14] A. Febretti, A. Nishimoto, T. Thigpen, J. Talandis, L. Long, J. Pirtle, T. Peterka, A. Verlo, M. Brown, D. Plepys, et al., Cave2: a hybrid reality environment for immersive simulation and information analysis, in: The Engineering Reality of Virtual Reality 2013, volume 8649, SPIE, 2013, pp. 9–20.
[15] C. Reichherzer, J. Fraser, D. C. Rompapas, M. Billinghurst, Secondsight: A framework for cross-device augmented reality interfaces, in: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, 2021, pp. 1–6.
[16] A. Febretti, A. Nishimoto, V. Mateevitsi, L. Renambot, A. Johnson, J. Leigh, Omegalib: A multi-view application framework for hybrid reality display environments, in: 2014 IEEE Virtual Reality (VR), IEEE, 2014, pp. 9–14.
[17] T. Bednarz, Visualisation, simulations &amp; expanded perception, 2019. URL: http://torch.unsw.edu.au/sites/default/files/48_Visualisation%2C%20Simulations%20%26%20Expanded%20Perception_EN.pdf.
[18] O.-H. Kwon, C. Muelder, K. Lee, K.-L. Ma, A study of layout, rendering, and interaction methods for immersive graph visualization, IEEE Transactions on Visualization and Computer Graphics 22 (2016) 1802–1815.
[19] T. M. Fruchterman, E. M. Reingold, Graph drawing by force-directed placement, Software: Practice and Experience 21 (1991) 1129–1164.
[20] G. Ellis, A. Dix, An explorative analysis of user evaluation studies in information visualisation, in: Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization, 2006, pp. 1–7.
[21] J. Sauro, J. S. Dumas, Comparison of three one-question, post-task usability questionnaires, in: Proceedings of the SIGCHI conference on human factors in computing systems, 2009, pp. 1599–1608.
[22] J. Baumeister, S. Y. Ssin, N. A. ElSayed, J. Dorrian, D. P. Webb, J. A. Walsh, T. M. Simon, A. Irlitti, R. T. Smith, M. Kohler, et al., Cognitive cost of using augmented reality displays, IEEE Transactions on Visualization and Computer Graphics 23 (2017) 2378–2388.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>