=Paper=
{{Paper
|id=Vol-3704/paper2
|storemode=property
|title=Design Considerations for the Placement of Data Visualisations in Virtually Extended Desktop Environments
|pdfUrl=https://ceur-ws.org/Vol-3704/paper2.pdf
|volume=Vol-3704
|authors=David Aigner,Judith Friedl-Knirsch,Christoph Anthes
|dblpUrl=https://dblp.org/rec/conf/realxr/AignerFA24
}}
==Design Considerations for the Placement of Data Visualisations in Virtually Extended Desktop Environments==
David Aigner¹, Judith Friedl-Knirsch¹,² and Christoph Anthes¹
¹ University of Applied Sciences Upper Austria
² Technical University of Munich, Human-Centered Computing and Extended Reality Lab
Abstract
As novel mixed reality devices are developed, the use case of augmented reality display extension
becomes feasible. Especially in a visual data analysis use case, users can benefit from expanding the
conventional screen space using augmented reality head-mounted displays. This opens up the discussion
of suitable placement of user interface elements in this extended augmented reality space, including
questions about customisability, interaction, and moving elements across the borders of a desktop screen.
We therefore present design considerations on layout and interaction with augmented reality extended
displays and propose a system that employs the discussed techniques to support the data analysis process.
Keywords
Extended Displays, Augmented/Mixed Reality, Cross Reality, Visual Data Analysis
Figure 1: Virtually extended desktop environments feature display-internal elements displayed
via the desktop screen and display-external elements displayed via the HMD (A). The process
of creating a layout involves transitioning user interface elements from an origin
position towards a destination position. With regard to the display, this transition can
be internal-to-external (B) or external-to-internal (C).
RealXR: Prototyping and Developing Real-World Applications for Extended Reality, June 4, 2024, Arenzano (Genoa), Italy
$ david.aigner@fh-hagenberg.at (D. Aigner); judith.friedl-knirsch@fh-hagenberg.at (J. Friedl-Knirsch);
christoph.anthes@fh-hagenberg.at (C. Anthes)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073
1. Introduction
The development of novel devices that allow users to freely move along Milgram's Reality-
Virtuality Continuum (RVC) [1] opens opportunities to extend conventional 2D systems into
3D space. Cross reality systems can then allow users to switch between different stages on
the RVC, depending on their preferences and the current requirements of their task, using
transition techniques [2, 3, 4, 5]. This means that users can switch to an augmented reality (AR)
perspective for collaboration with a collocated partner and switch further along the continuum
towards virtual reality to be fully immersed in a digital environment.
There are numerous use cases that can benefit from this opportunity to move along the RVC.
For example, in a data analysis process, immersive analytics allows users to experience and
analyse data in a three-dimensional immersive environment [6]. Cross reality systems can
extend this visual data analysis process to integrate multiple stages of the RVC [7].
However, most data analysis processes are still focused on 2D desktop environments. With
users being familiar with this environment as well as its inherent benefits for text entry [8] and
precise interaction [9], integrating this environment plays a vital role in employing immersive
systems in data analysis. Therefore, a cross reality system could support the analysis process
by extending the display space beyond the limitations of the physical screen. Users could then
utilise the same interaction methods and analysis tools that are already part of their data analysis
process and use the large display space as a dashboard to view multiple visual representations
of their data at the same time and get an overview of all important information at a glance [10].
This use case then opens the discussion on how these additional visualisations can or should
be placed. While the information layout in sensemaking tasks, where users arrange the infor-
mation based on their needs, has been well explored in literature [11, 12, 13, 14], an automatic
approach for this information layout has not yet been explored. We therefore propose de-
sign considerations and a concept for an immersive cross reality system that implements an
automatic layout for a visual data analysis dashboard.
2. Related Work
In this section, we focus on the two most relevant fields for user interface layouts. First, we
discuss methods for display extension with a focus on virtual display extensions using head-
mounted displays (HMDs). Secondly, we elaborate on common placement strategies in 2D
dashboards and 3D information layouts.
2.1. Extended display environments
When aiming to extend displays, there are multiple different approaches, ranging from simply
adding more displays [11], through cross-device interaction between multiple small devices [15, 16,
17], to extending displays using immersive technology [18, 19].
There have been studies on different display types and sizes when using immersive display
extensions, namely mobile devices [20, 21, 22, 23], desktop monitors [18, 24, 19, 25, 26], and
large-scale displays [27, 28]. The display extension itself can either be screen-aligned or utilise
the entire three-dimensional space as a layout area. Within the area of utilising immersive
technologies for display extension, the extended area is either connected to the display [18, 21,
20] or enables the free use of three-dimensional space [19].
Research on screen-aligned display spaces for mobile devices focuses on interaction
methods for the AR display extension [22, 29], as well as different sizes of extensions
and their impact on spatial memory, workload, and user experience for smartwatches [20] and
smartphones [21]. For both types of displays, researchers come to the conclusion that there is a
limit on how large the extended display space should be to positively impact spatial memory,
although the recommended sizes vary between the display types [20, 21]. Additionally, Biener
et al. [23] explored the feasibility of using VR to extend a tablet display for knowledge work.
For desktop-based display extension, the display extension is used for a 3D perspective on
objects [24, 18], for placing or extending controls into the screen-adjacent space [24, 18], or
for free placement of data plots for visual data analysis [30]. Cools et al. [18] define a hybrid
display space that is subdivided into screen space, screen border, a cylindrical space around the
user and a desk surface space. Additionally, Pavanatto et al. [25] compared physical and
virtual monitors to a hybrid condition where the physical monitor was extended by two virtual
ones in AR. They found that virtual displays can be used for knowledge work. Nevertheless, the
discomfort of the HMD is still a problem. They also report that the hybrid condition represented
a middle ground between the physical and virtual condition in terms of time and accuracy. This
is similar to one of the findings of Pavanatto et al. [26], who found no significant difference
in performance time between virtual displays and the combination of real and virtual screens.
However, the relatively small field of view of the Microsoft HoloLens 2, which was used in both
studies, might influence the results.
When looking at the extension of large-scale displays, there is research on interaction and
visualisation techniques for transferring graph-based data into the 3D space for
exploration [28]. For screen-aligned display extension, Perelman et al. [27] have proposed to
add an immersive personal view using AR HMDs to a shared space on a large interactive display.
This personal view is then placed in an orthogonal manner to the large display. This allows
users to decouple their own exploration from a collaborative analysis.
2.2. Layouts in 2D and 3D
For two-dimensional information layouts, Bach et al. [31] analysed 144 dashboards in order
to identify design patterns that are being used when designing dashboard user interfaces.
They distinguish between two different groups of patterns. A dashboard usually consists of
multiple dashboard elements. Content design patterns describe different facets of the content of
dashboard elements. Composition design patterns describe how these dashboard elements are
combined and presented. Furthermore, they state that while there are some high-level design
guidelines, there are still many open questions that have not been answered in research at this
point.
In virtual environments, there are multiple studies that explore how users utilise the three-
dimensional space around them, both in single-user scenarios [32, 14, 13], and collaborative
user studies [33]. These studies focus on different sensemaking or clustering tasks with map
data, including one larger map and several smaller multiples [32], as well as image data [33]
and textual data [13, 14]. Overall, there are different layout patterns that were observed. For the
planar layout, the items are mostly aligned flat, next to each other [32] and may be sorted into
several planar clusters [13]. For a spherical layout, the items are clustered in a sphere around
the user while for a spherical layout with cap, only a portion of this sphere is used [32]. Similar
to the latter one, a semicircular layout was found in two other studies [13, 14]. The last common
layout identified is the environmental layout where clusters are formed based on cues, such
as objects, in the virtual environment [33, 13, 14]. Additionally, Lisle et al. [12] found that an
increase in available workspace also increases user satisfaction and decreases frustration.
In a different approach, based on visual data analysis with small multiples, Liu et al. [34]
compared the performance and user experience of planar, semicircular and circular layouts.
They found that planar is the most efficient layout while semicircular is the most preferred
layout, as it provides a good compromise between walking distance and getting an overview of
all plots. Two further studies later corroborated this finding, again showing no statistically
significant benefit of the semicircular over the planar layout, even though users
still preferred it [35]. Reipschläger et al. [36] also included a curved AR screen in their work to
enable an overview of the whole screen while limiting distortion based on the viewing angle.
For individual charts they included hinged visualisations that are connected by a hinge to the
larger display while being angled towards the user to mitigate the perception distortion.
Liu et al. [37] consolidate the opportunities for visualisation view management in a design
space, where they consider four major topics concerning view presentation and user interaction.
They include the spatial relationship between user and visualisation view, the coordinate system
of the layout, the intent of the interaction and the input modality. When placing digital content
in an AR environment, coupling information based on semantic meaning or based on geometric
surfaces should also be considered [38]. Furthermore, Liu et al. [34] provide a design space
for the layout of immersive small multiples including dimension, curvature, aspect ratio, and
orientation. Daeijavad and Maurer [39] later add height and detail level and argue for the
inclusion of interaction technique as an additional dimension.
3. Design Considerations and System Concept
This section presents our design considerations for display-external placement of user interface
(UI) elements in a virtually extended display space and our concept for a respective system to
support the visual data analysis process. First, we elaborate on the issues of customisability
versus predetermination, external-to-internal versus internal-to-external transitions, and input
methods. Then, we describe the architecture of our proposed virtually extended desktop
environment in which we want to explore display-external UI placement.
3.1. Considerations for User Interface Layouts
One goal of UI placement is to provide users with an intuitive overview of the interface they
are about to use. The suitability of a layout varies depending on the user and the
task being performed [32].
3.1.1. Customisability versus Predetermination
The layout of UIs can either be predetermined by the developers, customised by the users, or
something in between (semi-customisability). The first two options form complementary
extremes: while customising a layout is done by the application's users, predetermining a
layout is a task done by the application's developers.
Customisability has the advantage of providing individual users with the ability to choose their
own individual layout. This is valuable, as a predetermined layout might not be equally suitable
for all users. However, it also leads to a higher level of complexity for the application, as the
graphical user interface must be designed more flexibly to allow the users to customise it. An
extreme example of a customisable user interface provides its users with the ability to position
and resize elements freehand.
Predetermination allows the developers to keep the application simpler than an application
with a fully customisable user interface. However, it also limits the users' flexibility in
adjusting the interface to their needs. Predetermination need not imply limiting the user
interface to a single layout: the developers may also predetermine a set of layouts the users can
choose from but not customise. An extreme example of a predetermined user interface provides
its users with a single layout defined by the developers.
Semi-customisable user interfaces might feature different mechanics to assist the users in
customising the interface. A layout manager might allow users to save their customised
layout as a template, enabling them to switch between different layouts without having to
customise them from scratch each time. User interface docks might allow users to assign a
certain element to a predefined slot in the display space. These docks might have their own
predetermined or customisable layout.
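As a sketch of such a semi-customisable scheme, the following Python snippet models developer-predetermined dock slots combined with user-saved layout templates. All class and function names, and the data model itself, are our own illustration under assumed conventions, not part of any existing system.

```python
# Sketch of a semi-customisable layout manager: developers predetermine the
# dock slots, users customise the assignment and save it as named templates.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Slot:
    """A predetermined dock position in the display-external workspace."""
    x: float
    y: float
    z: float

@dataclass
class LayoutManager:
    docks: dict[str, Slot]                                    # set by developers
    templates: dict[str, dict[str, str]] = field(default_factory=dict)
    assignment: dict[str, str] = field(default_factory=dict)  # element -> dock

    def assign(self, element: str, dock: str) -> None:
        """User customisation: place an element into a predefined slot."""
        if dock not in self.docks:
            raise KeyError(f"unknown dock: {dock}")
        self.assignment[element] = dock

    def save_template(self, name: str) -> None:
        """Persist the current customised layout under a name."""
        self.templates[name] = dict(self.assignment)

    def load_template(self, name: str) -> None:
        """Restore a saved layout without re-customising from scratch."""
        self.assignment = dict(self.templates[name])

mgr = LayoutManager(docks={"left": Slot(-0.5, 0.0, 0.0), "right": Slot(0.5, 0.0, 0.0)})
mgr.assign("scatterplot", "left")
mgr.save_template("analysis")
mgr.assign("scatterplot", "right")   # later customisation...
mgr.load_template("analysis")        # ...undone by reloading the template
print(mgr.assignment["scatterplot"])  # -> left
```

The design point is that freedom is bounded: users choose among slots and templates, while the slot geometry stays under developer control.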
3.1.2. External-to-Internal versus Internal-to-External
The process of user interface placement involves moving elements of the user interface from
an origin position to a destination position. In a virtually extended desktop environment, we
distinguish between the display-internal and the display-external workspace. The origin and
destination positions each lie in one of these two workspaces. Depending on where these
positions are located, the transition of interface elements is performed either from one
workspace to the other or within a single workspace.
Transitions from display-internal to display-external workspace involve interface elements
that are originally located on the display-internal workspace to transition to the display-external
workspace. One possible example in the data analysis use case would be a "filter visualisations"
mechanic for a display-internal workspace that is crowded with too many data visualisations.
In order to reestablish an overview of the interface and reduce the number of visualisations
on screen, the user decides on a filter configuration and triggers the transition of the filtered
visualisations from the internal to the external workspace. This results in a clearer display-
internal workspace, with the filtered-out visualisations displayed on the display-external
workspace in a sorted and organised layout.
Transitions from display-external to display-internal workspace involve interface elements
that are originally located on the display-external workspace to transition to the display-internal
workspace. One possible example in the data analysis use case would be a "thumbnail overview"
mechanic, in which a set of data visualisations is aligned around the display-internal workspace
in the form of small thumbnails. The resolution of these thumbnail visualisations is too small
for actually analysing them. Clicking the thumbnails transitions them to the display-internal
workspace, where they are displayed with a high resolution. This provides the users with an
overview of all interface elements and the possibility to interchange them effortlessly.
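The two transition mechanics above can be sketched as operations on a shared workspace state. This minimal Python model is our own illustration with assumed names; it only moves visualisations between the display-internal and display-external sets, leaving rendering aside.

```python
# Illustrative model of the "filter visualisations" (internal-to-external)
# and "thumbnail overview" (external-to-internal) transitions described above.
from typing import Callable

class ExtendedWorkspace:
    def __init__(self, visualisations: list[str]):
        self.internal = list(visualisations)  # shown on the desktop screen
        self.external: list[str] = []         # shown as thumbnails via the HMD

    def filter_out(self, keep: Callable[[str], bool]) -> None:
        """Internal-to-external: move filtered-out plots off the screen."""
        moved = [v for v in self.internal if not keep(v)]
        self.internal = [v for v in self.internal if keep(v)]
        self.external.extend(moved)

    def recall(self, vis: str) -> None:
        """External-to-internal: a clicked thumbnail returns to the screen."""
        self.external.remove(vis)
        self.internal.append(vis)

ws = ExtendedWorkspace(["sales_2022", "sales_2023", "costs_2022", "costs_2023"])
ws.filter_out(lambda v: v.endswith("2023"))   # declutter: keep only 2023 plots
print(ws.internal)   # ['sales_2023', 'costs_2023']
ws.recall("costs_2022")                        # bring one thumbnail back
print(ws.internal)   # ['sales_2023', 'costs_2023', 'costs_2022']
```

In a real system the two lists would additionally drive where each window is rendered and how its external layout is sorted.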
Transitions within the display-internal workspace resemble the already well-established UI
placement, which can also be found in graphical user interfaces like Microsoft Windows and
Apple macOS.
Transitions within the display-external workspace do not involve the desktop display. There-
fore, they can be performed using only an HMD, without a virtually extended desktop environ-
ment. UI placement that exclusively takes place in the display-external workspace has been
investigated in other articles [32, 12, 39, 35, 33, 40, 41].
3.1.3. Interaction with User Interface Elements
The controller input modality is established and often used in applications featuring an HMD. It
provides the user with the ability to interact with the three-dimensional space. However, using
it in combination with a desktop environment forces the user to frequently switch between
controller and mouse or keyboard, causing a break in input modalities.
Hand gestures do not require additional handheld hardware, but they are not available for
every HMD. The front-facing cameras of the HMD may detect and track the position, orientation,
and finger placement of the user's hands. This allows the user to perform certain hand gestures in
order to interact with the application. Using hand gestures in combination with a desktop
environment results in a smaller break in input modalities compared to controllers, since no
handheld hardware devices are involved. However, utilising hand gesture
input in a large workspace may require a certain proximity between the user’s hands and the
points of interest in the workspace with which the user wants to interact. This might result in a
decrease in comfort for the user.
Alternatives to the previous options are the traditional mouse and keyboard input modalities,
which are established in desktop environments and offer benefits in terms of precision [9]
and text entry [8]. Additionally, this input modality is strongly associated with desktop envi-
ronments, which our architecture extends virtually. Given such a virtual workspace extension
of the planar desktop environment, the pointer associated with the mouse input modality can
traverse both the display-internal and the display-external workspace.
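As a sketch of this idea, the snippet below classifies an unclamped pointer position as addressing one of the two workspaces. The screen resolution and the coordinate convention (origin at the screen's top-left corner, overshoot beyond the edges mapped to the external workspace) are assumptions for illustration only.

```python
# Sketch only: a single mouse pointer traversing the display-internal and
# display-external workspace. Resolution and coordinate origin are assumed.
SCREEN_W, SCREEN_H = 1920, 1080  # assumed desktop resolution in pixels

def locate_pointer(x: float, y: float) -> tuple[str, float, float]:
    """Return the workspace a pointer position addresses, plus local
    coordinates: screen pixels internally, edge overshoot externally."""
    if 0 <= x < SCREEN_W and 0 <= y < SCREEN_H:
        return ("internal", x, y)
    # overshoot past the nearest screen edge; 0 on axes still in range
    dx = x - SCREEN_W if x >= SCREEN_W else min(x, 0)
    dy = y - SCREEN_H if y >= SCREEN_H else min(y, 0)
    return ("external", dx, dy)

print(locate_pointer(960, 540))    # ('internal', 960, 540)
print(locate_pointer(2120, 540))   # ('external', 200, 0) - right of the screen
```

The external overshoot would then be projected onto whatever surface shape the display-external workspace uses.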
Additionally, there is the option of including different input technologies for the desktop
interaction and the AR space. Cools et al. [18] argue in favour of employing mixed input modal-
ities, using traditional keyboard and mouse interaction for the desktop system in combination
with hand gesture interaction in the AR space. The integration of a touchscreen input modality
should also be considered, as the hand gesture input modality already involves hand input
in front of the desktop screen. Therefore, these input modalities are rather similar in this
scenario. On the other hand, Seraji et al. [30] found in their user study, where participants also
used mouse and keyboard interaction for desktop, but a tracked controller for AR interaction,
that this mixed interaction led to a mental context-switching effort. Thus, they suggest that
interaction methods should be as similar as possible to enhance user performance. Both of these
approaches allow participants to place items far away from their screen, making interaction
in the extended space difficult to achieve with a 2D device such as the mouse. Our proposed
data analysis system, on the other hand, is mainly desktop based with the display extension
only concerning the space in close proximity to the screen. Therefore, we suggest only using
traditional mouse and keyboard input modalities for this system.
3.2. Proposed Architecture
Out of many possible use cases, we decided to use a data analysis use case that is performed on
a desktop computer system in an office environment in order to illustrate the different facets of
display-external UI placement. Our setup is similar to the one proposed by Cools et al. [18]. It
includes a desktop computer with one monitor that provides the user with keyboard and mouse
input modalities. Diverging from the usual office equipment, an HMD and a tracking device
are needed in order to virtually extend this setup. While being seated in front of the desktop
system, the user is wearing the HMD and therefore viewing the contents of the monitor as
well as the rest of the desktop system and their environment via the HMD. In addition, the
HMD tracks the position and orientation of the user's head while they perform the data analysis
task. Additionally, a tracking device is attached to the monitor in order to track its position and
orientation.
Via the HMD, the user can view both the contents situated inside the desktop display (display-
internal workspace) as well as the contents situated outside the desktop display (display-external
workspace). Due to the shape of the monitor hardware, the shape of the display-internal
workspace must remain planar. The shape of the display-external workspace can be altered,
though. Although a planar surface seems reasonable as it mimics the shape of the display-
internal workspace, previous work [32, 33, 34, 35] has documented improved performance when
working on tasks in cylindrically or spherically shaped virtual workspaces compared to planar
workspaces. Following the WIMP design schematic [? ], we nest the data of our analysis task in
windows. This creates logically separable entities in our user interface. These user interface
elements can be portrayed both on the display-internal and the display-external workspace.
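To illustrate the cylindrical option, the following sketch (our own geometric illustration; the radius, arc width, and coordinate frame are assumed parameters, not values from the cited studies) computes display-external window positions on a horizontal arc centred on the seated user.

```python
# Geometric sketch: place n display-external windows on a cylindrical arc
# around the user at the origin. Radius and arc width are assumed parameters.
import math

def cylindrical_slots(n: int, radius: float = 1.2, height: float = 0.0,
                      arc_deg: float = 120.0) -> list[tuple[float, float, float]]:
    """Return n window positions (x, y, z) evenly spread over a horizontal
    arc of the given angular width, all at the same distance from the user."""
    if n == 1:
        angles = [0.0]
    else:
        step = math.radians(arc_deg) / (n - 1)
        start = -math.radians(arc_deg) / 2
        angles = [start + i * step for i in range(n)]
    # convention: z points forward towards the screen, x to the user's right
    return [(radius * math.sin(a), height, radius * math.cos(a)) for a in angles]

for pos in cylindrical_slots(3):
    print(tuple(round(c, 2) for c in pos))
# the middle slot lies straight ahead of the user at (0.0, 0.0, 1.2)
```

A spherical variant would additionally vary the elevation angle; the appeal of such curved arrangements in the cited work is that every window stays at a comparable viewing distance.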
4. Conclusion and Future Work
In this work, we presented design considerations for the layout of data visualisations on a
generic architecture for a system in which user interface placement can be investigated. This
system combines a conventional desktop environment with an HMD to extend the planar
desktop environment by a virtual surface around the display. This results in the division of a
display-internal and a display-external workspace.
In the design considerations we discuss the topic of layouts being either customisable by the
user or rather predetermined by the developer. Furthermore, the transition of user interface
elements from an origin position to a destination position needs to be thought out during the
design of the virtually extended display. Based on the location of these positions with regard
to the display, the transition can be external-to-internal or internal-to-external. Other direc-
tionalities like internal-to-internal or external-to-external are not unique to virtually extended
desktop environments. Furthermore, we argue for the conventional mouse and keyboard input
modalities, because they are more precise, well established, and strongly associated with desktop
environments.
As part of our future work, we suggest implementing a prototype that features a virtually
extended desktop environment. This prototype should be capable of simulating a user interface
and enable the user to create and experience the effects of different user interface layouts.
Additionally, a user study is required to evaluate the effects of different user interface layouts
on performance, spatial memory and user experience.
Acknowledgments
This publication is a part of the X-PRO project. The project X-PRO is financed by research
subsidies granted by the government of Upper Austria.
References
[1] P. Milgram, H. Takemura, A. Utsumi, F. Kishino, Augmented reality: a class of displays on
the reality-virtuality continuum, Boston, MA, 1995, pp. 282–292. URL: http://proceedings.
spiedigitallibrary.org/proceeding.aspx?articleid=981543. doi:10.1117/12.197321.
[2] N. Feld, P. Bimberg, B. Weyers, D. Zielasko, Simple and Efficient? Evaluation of Transitions
for Task-Driven Cross-Reality Experiences, IEEE Transactions on Visualization and Com-
puter Graphics (2024) 1–18. URL: https://ieeexplore.ieee.org/abstract/document/10411102.
doi:10.1109/TVCG.2024.3356949.
[3] N. Feld, P. Bimberg, B. Weyers, D. Zielasko, Keep it simple? Evaluation of Transitions in
Virtual Reality, in: Extended Abstracts of the 2023 CHI Conference on Human Factors in
Computing Systems, CHI EA ’23, Association for Computing Machinery, New York, NY,
USA, 2023, pp. 1–7. URL: https://dl.acm.org/doi/10.1145/3544549.3585811. doi:10.1145/
3544549.3585811.
[4] F. Pointecker, J. Friedl, D. Schwajda, H.-C. Jetter, C. Anthes, Bridging the Gap Across
Realities: Visual Transitions Between Virtual and Augmented Reality, in: 2022 IEEE
International Symposium on Mixed and Augmented Reality (ISMAR), IEEE, Singapore,
Singapore, 2022, pp. 827–836. URL: https://ieeexplore.ieee.org/document/9995285/. doi:10.
1109/ISMAR55827.2022.00101.
[5] F. Pointecker, H. C. Jetter, C. Anthes, Exploration of Visual Transitions Between Vir-
tual and Augmented Reality, in: 4th Workshop on Immersive Analytics: Envisioning
Future Productivity for Immersive Analytics // @CHI 2020 Honolulu, 2020. URL: https://
pure.fh-ooe.at/ws/portalfiles/portal/35139661/07_pointecker2020exploration.pdf.
[6] T. Chandler, M. Cordeil, T. Czauderna, T. Dwyer, J. Glowacki, C. Goncu, M. Klapperstueck,
K. Klein, K. Marriott, F. Schreiber, E. Wilson, Immersive Analytics, in: 2015 Big Data
Visual Analytics (BDVA), IEEE, Hobart, Australia, 2015, pp. 1–8. URL: https://ieeexplore.
ieee.org/document/7314296. doi:10.1109/BDVA.2015.7314296.
[7] B. Fröhler, C. Anthes, F. Pointecker, J. Friedl, D. Schwajda, A. Riegler, S. Tripathi, C. Holz-
mann, M. Brunner, H. Jodlbauer, H. Jetter, C. Heinzl, A Survey on Cross-Virtuality
Analytics, Computer Graphics Forum 41 (2022) 465–494. URL: https://onlinelibrary.wiley.
com/doi/10.1111/cgf.14447. doi:10.1111/cgf.14447.
[8] M. McGill, D. Boland, R. Murray-Smith, S. Brewster, A Dose of Reality: Overcoming
Usability Challenges in VR Head-Mounted Displays, in: Proceedings of the 33rd Annual
ACM Conference on Human Factors in Computing Systems, ACM, Seoul Republic of Korea,
2015, pp. 2143–2152. URL: https://dl.acm.org/doi/10.1145/2702123.2702382. doi:10.1145/
2702123.2702382.
[9] J. A. Wagner Filho, C. Freitas, L. Nedel, VirtualDesk: A Comfortable and Efficient Immer-
sive Information Visualization Approach, Computer Graphics Forum 37 (2018) 415–426.
URL: https://onlinelibrary.wiley.com/doi/10.1111/cgf.13430. doi:10.1111/cgf.13430.
[10] S. Few, Information dashboard design: The effective visual communication of data, O’Reilly
Media, Inc., 2006. URL: https://dl.acm.org/doi/abs/10.5555/1206491.
[11] C. Andrews, A. Endert, C. North, Space to think: large high-resolution displays for
sensemaking, in: Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems, ACM, Atlanta Georgia USA, 2010, pp. 55–64. URL: https://dl.acm.org/doi/10.1145/
1753326.1753336. doi:10.1145/1753326.1753336.
[12] L. Lisle, K. Davidson, L. Pavanatto, I. A. Tahmid, C. North, D. A. Bowman, Spaces to Think:
A Comparison of Small, Large, and Immersive Displays for the Sensemaking Process, in:
2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2023, pp.
1084–1093. URL: https://ieeexplore.ieee.org/abstract/document/10316497. doi:10.1109/
ISMAR59233.2023.00125.
[13] L. Lisle, K. Davidson, E. J. Gitre, C. North, D. A. Bowman, Sensemaking Strategies with
Immersive Space to Think, in: 2021 IEEE Virtual Reality and 3D User Interfaces (VR), IEEE,
Lisboa, Portugal, 2021, pp. 529–537. URL: https://ieeexplore.ieee.org/document/9417736/.
doi:10.1109/VR50410.2021.00077.
[14] K. Davidson, L. Lisle, K. Whitley, D. A. Bowman, C. North, Exploring the Evolution of
Sensemaking Strategies in Immersive Space to Think, IEEE Transactions on Visualization
and Computer Graphics 29 (2023) 5294–5307. URL: https://ieeexplore.ieee.org/document/
9894094/. doi:10.1109/TVCG.2022.3207357.
[15] R. Rädle, H.-C. Jetter, N. Marquardt, H. Reiterer, Y. Rogers, HuddleLamp: Spatially-
Aware Mobile Displays for Ad-hoc Around-the-Table Collaboration, in: Proceedings
of the Ninth ACM International Conference on Interactive Tabletops and Surfaces, ITS
’14, Association for Computing Machinery, New York, NY, USA, 2014, pp. 45–54. URL:
https://doi.org/10.1145/2669485.2669500. doi:10.1145/2669485.2669500.
[16] U. Kister, K. Klamka, C. Tominski, R. Dachselt, GraSp: Combining Spatially-aware
Mobile Devices and a Display Wall for Graph Visualization and Interaction,
Computer Graphics Forum 36 (2017) 503–514. URL: https://onlinelibrary.wiley.com/doi/10.
1111/cgf.13206. doi:10.1111/cgf.13206.
[17] T. Horak, S. K. Badam, N. Elmqvist, R. Dachselt, When David Meets Goliath: Combining
Smartwatches with a Large Vertical Display for Visual Data Exploration, in: Proceedings of
the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, Association
for Computing Machinery, New York, NY, USA, 2018, pp. 1–13. URL: https://dl.acm.org/
doi/10.1145/3173574.3173593. doi:10.1145/3173574.3173593.
[18] R. Cools, M. Gottsacker, A. Simeone, G. Bruder, G. Welch, S. Feiner, Towards a Desktop-AR
Prototyping Framework: Prototyping Cross-Reality Between Desktops and Augmented
Reality, in: 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct
(ISMAR-Adjunct), 2022, pp. 175–182. URL: https://ieeexplore.ieee.org/abstract/document/
9974207. doi:10.1109/ISMAR-Adjunct57072.2022.00040.
[19] M. R. Seraji, W. Stuerzlinger, HybridAxes: An Immersive Analytics Tool With Inter-
operability Between 2D and Immersive Reality Modes, in: 2022 IEEE International
Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), IEEE, Singa-
pore, Singapore, 2022, pp. 155–160. URL: https://ieeexplore.ieee.org/document/9974239/.
doi:10.1109/ISMAR-Adjunct57072.2022.00036.
[20] M.-E. Jannat, K. Hasan, Exploring the Effects of Virtually-Augmented Display Sizes on
Users’ Spatial Memory in Smartwatches, in: 2023 IEEE International Symposium on Mixed
and Augmented Reality (ISMAR), IEEE, Sydney, Australia, 2023, pp. 553–562. URL: https:
//ieeexplore.ieee.org/document/10316502/. doi:10.1109/ISMAR59233.2023.00070.
[21] S. Hubenschmid, J. Zagermann, D. Leicht, H. Reiterer, T. Feuchtner, ARound the Smart-
phone: Investigating the Effects of Virtually-Extended Display Size on Spatial Memory, in:
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, ACM,
Hamburg Germany, 2023, pp. 1–15. URL: https://dl.acm.org/doi/10.1145/3544548.3581438.
doi:10.1145/3544548.3581438.
[22] E. Normand, M. J. McGuffin, Enlarging a Smartphone with AR to Create a Handheld VESAD
(Virtually Extended Screen-Aligned Display), in: 2018 IEEE International Symposium on
Mixed and Augmented Reality (ISMAR), IEEE, Munich, Germany, 2018, pp. 123–133. URL:
https://ieeexplore.ieee.org/document/8613758/. doi:10.1109/ISMAR.2018.00043.
[23] V. Biener, D. Schneider, T. Gesslein, A. Otte, B. Kuth, P. O. Kristensson, E. Ofek, M. Pahud,
J. Grubert, Breaking the Screen: Interaction Across Touchscreen Boundaries in Virtual
Reality for Mobile Knowledge Workers, IEEE Transactions on Visualization and Computer
Graphics 26 (2020) 3490–3502. URL: https://ieeexplore.ieee.org/document/9212653/. doi:10.
1109/TVCG.2020.3023567.
[24] P. Reipschläger, R. Dachselt, DesignAR: Immersive 3D-Modeling Combining Augmented
Reality with Interactive Displays, in: Proceedings of the 2019 ACM International Confer-
ence on Interactive Surfaces and Spaces, ISS ’19, Association for Computing Machinery,
New York, NY, USA, 2019, pp. 29–41. URL: https://dl.acm.org/doi/10.1145/3343055.3359718.
doi:10.1145/3343055.3359718.
[25] L. Pavanatto, C. North, D. A. Bowman, C. Badea, R. Stoakley, Do we still need physical
monitors? An evaluation of the usability of AR virtual monitors for productivity work,
in: 2021 IEEE Virtual Reality and 3D User Interfaces (VR), IEEE, Lisboa, Portugal, 2021,
pp. 759–767. URL: https://ieeexplore.ieee.org/document/9417776/. doi:10.1109/VR50410.
2021.00103.
[26] L. Pavanatto, F. Lu, C. North, D. A. Bowman, Multiple Monitors or Single Canvas? Evalu-
ating Window Management and Layout Strategies on Virtual Displays, IEEE Transactions
on Visualization and Computer Graphics (2024) 1–15. URL: https://ieeexplore.ieee.org/
document/10443571/. doi:10.1109/TVCG.2024.3368930.
[27] G. Perelman, E. Dubois, A. Probst, M. Serrano, Visual Transitions around Tabletops in
Mixed Reality: Study on a Visual Acquisition Task between Vertical Virtual Displays and
Horizontal Tabletops, Proceedings of the ACM on Human-Computer Interaction 6 (2022)
585:660–585:679. URL: https://doi.org/10.1145/3567738. doi:10.1145/3567738.
[28] D. Schwajda, J. Friedl, F. Pointecker, H.-C. Jetter, C. Anthes, Transforming graph data
visualisations from 2D displays into augmented reality 3D space: A quantitative study,
Frontiers in Virtual Reality 4 (2023). URL: https://www.frontiersin.org/articles/10.3389/
frvir.2023.1155628. doi:10.3389/frvir.2023.1155628.
[29] C. Reichherzer, J. Fraser, D. C. Rompapas, M. Billinghurst, SecondSight: A Framework
for Cross-Device Augmented Reality Interfaces, in: Extended Abstracts of the 2021 CHI
Conference on Human Factors in Computing Systems, ACM, Yokohama, Japan, 2021,
pp. 1–6. URL: https://dl.acm.org/doi/10.1145/3411763.3451839. doi:10.1145/3411763.
3451839.
[30] M. R. Seraji, P. Piray, V. Zahednejad, W. Stuerzlinger, Analyzing User Behaviour Patterns
in a Cross-Virtuality Immersive Analytics System, IEEE Transactions on Visualization
and Computer Graphics (2024) 1–11. URL: https://ieeexplore.ieee.org/document/10471345/.
doi:10.1109/TVCG.2024.3372129.
[31] B. Bach, E. Freeman, A. Abdul-Rahman, C. Turkay, S. Khan, Y. Fan, M. Chen, Dash-
board Design Patterns, IEEE Transactions on Visualization and Computer Graph-
ics 29 (2023) 342–352. URL: https://ieeexplore.ieee.org/document/9903550/.
doi:10.1109/TVCG.2022.3209448.
[32] K. A. Satriadi, B. Ens, M. Cordeil, T. Czauderna, B. Jenny, Maps Around Me: 3D Multiview
Layouts in Immersive Spaces, Proceedings of the ACM on Human-Computer Interaction 4
(2020) 201:1–201:20. URL: https://doi.org/10.1145/3427329. doi:10.1145/3427329.
[33] W. Luo, A. Lehmann, H. Widengren, R. Dachselt, Where Should We Put It? Layout and
Placement Strategies of Documents in Augmented Reality for Collaborative Sensemaking,
in: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems,
CHI ’22, Association for Computing Machinery, New York, NY, USA, 2022, pp. 1–16. URL:
https://dl.acm.org/doi/10.1145/3491102.3501946. doi:10.1145/3491102.3501946.
[34] J. Liu, A. Prouzeau, B. Ens, T. Dwyer, Design and Evaluation of Interactive Small Multiples
Data Visualisation in Immersive Spaces, in: 2020 IEEE Conference on Virtual Reality
and 3D User Interfaces (VR), 2020, pp. 588–597. URL: https://ieeexplore.ieee.org/abstract/
document/9089546. doi:10.1109/VR46266.2020.00081, ISSN: 2642-5254.
[35] J. Liu, A. Prouzeau, B. Ens, T. Dwyer, Effects of Display Layout on Spatial Memory for
Immersive Environments, Proceedings of the ACM on Human-Computer Interaction 6
(2022) 576:468–576:488. URL: https://doi.org/10.1145/3567729. doi:10.1145/3567729.
[36] P. Reipschläger, T. Flemisch, R. Dachselt, Personal Augmented Reality for Information
Visualization on Large Interactive Displays, IEEE Transactions on Visualization and Computer
Graphics 27 (2021) 1182–1192. URL: https://ieeexplore.ieee.org/abstract/document/9223669.
doi:10.1109/TVCG.2020.3030460.
[37] J. Liu, B. Ens, A. Prouzeau, J. Smiley, I. K. Nixon, S. Goodwin, T. Dwyer, DataDancing:
An Exploration of the Design Space For Visualisation View Management for 3D Surfaces
and Spaces, in: Proceedings of the 2023 CHI Conference on Human Factors in Computing
Systems, ACM, Hamburg, Germany, 2023, pp. 1–17. URL: https://dl.acm.org/doi/10.1145/
3544548.3580827. doi:10.1145/3544548.3580827.
[38] M. O. Ellenberg, M. Satkowski, W. Luo, R. Dachselt, Spatiality and Semantics - Towards
Understanding Content Placement in Mixed Reality, in: Extended Abstracts of the 2023 CHI
Conference on Human Factors in Computing Systems, ACM, Hamburg, Germany, 2023,
pp. 1–8. URL: https://dl.acm.org/doi/10.1145/3544549.3585853. doi:10.1145/3544549.
3585853.
[39] P. Daeijavad, F. Maurer, Layouts of 3D Data Visualizations Small Multiples around
Users in Immersive Environments, in: 2022 IEEE International Symposium on Mixed
and Augmented Reality Adjunct (ISMAR-Adjunct), 2022, pp. 258–261. URL: https://
ieeexplore.ieee.org/abstract/document/9974255. doi:10.1109/ISMAR-Adjunct57072.
2022.00057, ISSN: 2771-1110.
[40] W. Luo, A. Lehmann, Y. Yang, R. Dachselt, Investigating Document Layout and Placement
Strategies for Collaborative Sensemaking in Augmented Reality, in: Extended Abstracts
of the 2021 CHI Conference on Human Factors in Computing Systems, CHI EA ’21,
Association for Computing Machinery, New York, NY, USA, 2021, pp. 1–7. URL: https:
//doi.org/10.1145/3411763.3451588. doi:10.1145/3411763.3451588.
[41] Z. Wen, W. Zeng, L. Weng, Y. Liu, M. Xu, W. Chen, Effects of View Layout on Situated
Analytics for Multiple-View Representations in Immersive Visualization, IEEE Transactions
on Visualization and Computer Graphics 29 (2023) 440–450. URL: https://ieeexplore.ieee.
org/abstract/document/9904883. doi:10.1109/TVCG.2022.3209475.