<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Interaction Concepts for Collaborative Visual Analysis of Scatterplots on Large Vertically-Mounted High-Resolution Multi-Touch Displays</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mohammad Chegini</string-name>
          <email>m.chegini@cgv.tugraz.at</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lin Shao</string-name>
          <email>l.shao@cgv.tugraz.at</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dirk J. Lehmann</string-name>
          <email>dirk@isg.cs.uni-magdeburg.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Keith Andrews</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tobias Schreck</string-name>
          <email>t.schreck@cgv.tugraz.at</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff2">
          <label>2</label>
          <institution>Institute of Computer Graphics and Knowledge Visualisation, Graz University of Technology</institution>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Interactive Systems and Data Science, Graz University of Technology</institution>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Magdeburg</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>90</fpage>
      <lpage>96</lpage>
      <abstract>
        <p>Large vertically-mounted high-resolution multi-touch displays are becoming increasingly available for interactive data visualisation. Such devices are well-suited to small-team collaborative visual analysis. In particular, the visual analysis of large high-dimensional datasets can benefit from high-resolution displays capable of showing multiple coordinated views. This paper identifies some of the advantages of using large, high-resolution displays for visual analytics in general, and introduces a set of interactions to explore high-dimensional datasets on large vertically-mounted high-resolution multi-touch displays using scatterplots. A set of touch interactions for collaborative visual analysis of scatterplots has been implemented and is presented. Finally, three perception-based level-of-detail techniques are introduced for such displays as a concept for further implementation.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        Large high-resolution displays are becoming an affordable
option for the visualisation of data [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Large displays have
proved to be effective for tasks such as comparative genomics
analysis [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], graph topology exploration [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and sensemaking
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Large vertically-mounted (landscape-orientation)
high-resolution multi-touch displays are particularly effective for
collaborative analysis by small teams. However, previous
research has often focused on horizontally-mounted tabletop
surfaces or vertically-mounted displays with more distant
interaction [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In this paper, a set of user interactions to
support scatterplot matrix analysis on vertically-mounted
displays is introduced. These techniques help analysts to
efficiently select a scatterplot from a scatterplot matrix and
explore it collaboratively.
      </p>
      <p>
        Some physical and virtual interactions with large displays
have been described in previous work. Modalities range
from natural interactions like speech, body tracking, gaze, and
gestures to the use of secondary control devices like mobile
phones, tablets, or Wii controllers [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Of these, multi-touch
interactions provide a fluid and intuitive interface suitable for
up-close interaction in front of the display by small groups.
Although there are studies about collaborative interaction with
large displays (e.g. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]), they usually focus on
single-user interaction [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Since typical multi-touch interactions do
not support collaboration, more research needs to be done on
cooperative gestures, modalities and the dynamics of group
work around these devices. Cooperative gestures are known to
enhance the sense of teamwork and increase the participation
of team members [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        Screen size and resolution are particularly important for
information visualisation of multivariate datasets. Having a
large display allows multiple, linked views, such as
scatterplot matrices and parallel coordinates [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] to be provided
simultaneously. If the screen is not high-resolution, the user
experience of near-distance interaction decreases significantly.
For instance, on screens with less than sixty pixels per inch,
the user is not able to read from the screen up-close [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
Furthermore, users can make more observations with less effort
using physical navigation (e.g., walking) rather than virtual
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. More screen space can be used either to provide a better
overview of a dataset or to provide more details of a portion
of it. For example, users can see an entire scatterplot
matrix, specific scatterplots, and parallel coordinates plots at
the same time. As a result, users may have the opportunity to
gain more insight into large datasets.
      </p>
      <p>
        Previous studies [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] suggest that vertically-mounted displays
are more suited to parallel tasks within a group, due to reduced
visual distraction and the possibility to share information
through physical navigation like turning the head or walking.
On tabletop displays, if users are not on the same side of the
table, the shared view often needs to be reoriented.
      </p>
      <p>
        This paper addresses the design gap between standard
interaction techniques for large, multi-touch displays and advanced
interaction techniques and visual feedback for collaborative
scatterplot and scatterplot matrix analysis. Design concepts
for such interaction techniques have been implemented as a
proof of concept and are presented. The techniques include
scatterplot selection from scatterplot matrices, collaborative
regression model analysis, and an extension of the Regression
Lens [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] to include a floating toolbox. As a proof of concept,
the techniques are developed on a large display.
      </p>
      <p>The paper is structured as follows: Section II discusses
related work. Several novel interaction designs for collaborative
visual analysis of scatterplots on large displays are introduced
in Section III. The implementation and a use case for the
proposed interaction techniques are described in Sections IV
and V, respectively. Section VI introduces the concept of perception-based level of
visual detail. The paper concludes with a discussion of open
problems and future work in Section VII.</p>
    </sec>
    <sec id="sec-2">
      <title>II. RELATED WORK</title>
      <p>
        At a high level, information visualisation systems consist of
two components: visual representation and interaction. Visual
representation concerns the mapping from data to display [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
The interaction starts with a user’s intent to perform a task,
followed by a user action. The system then reacts and feedback
is given to the user [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. It is essential to consider both visual
representation and interaction when designing an application
for information visualisation.
      </p>
      <sec id="sec-2-1">
        <title>A. Visualisation on Large Displays</title>
        <p>
          Researchers in various fields are increasingly confronted
with the challenge of visualising and exploring
high-dimensional datasets [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. Keim argues that although
many traditional techniques exist to represent data, they are
often not scalable to high-dimensional datasets without
suitable analytical or interaction design [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
        <p>
          With the current size and resolution of typical computer
displays, it is challenging to represent entire datasets on one
screen using techniques like scatterplot matrices or parallel
coordinates. The user is often forced to resort to panning and
zooming, leading to frustration and longer task completion
times. Ruddle et al. [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] conducted an experiment in which
participants searched maps on three different displays for
densely or sparsely distributed targets. They concluded that
since the whole dataset fits on a larger display, sparse targets
can be found faster.
        </p>
        <p>
          Multiple linked views are often used to gain a better
understanding of a high-dimensional dataset. Such views are
usually connected by techniques such as brushing or combined
navigation [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. Every view occupies space on the display. If more
space is available, additional views can be shown
simultaneously. Allowing the user to access multiple windows increases
performance and satisfaction [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. Isenberg et al. [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] present
hybrid-image visualisation for data analysis, where two images
are blended to achieve distance-dependent perception. This
concept might be especially helpful for collaborative visual
analysis tasks on vertically-mounted displays, where users
observe data from various distances.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>B. Visual Data Analysis and Multi-Touch Interaction</title>
        <p>
          Previous researchers proposed various interaction
techniques for large displays and multi-dimensional dataset
interaction on multi-touch displays. Ardito et al. [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] proposed
a classification of large display interaction having five
dimensions: visualisation technology, display setup, interaction
modality, application purpose, and location. Khan presented a
survey of interaction techniques and devices for large,
high-resolution displays [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. The survey categorises modalities of
interaction into speech, tracking, gestures, mobile phones,
haptic and other technologies such as gaze and facial expression.
        </p>
        <p>
          Tsandilas et al. presented SketchSliders [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ], a tool that
provides a mobile sketching interface to create sliders which
interact with multi-dimensional datasets on a wall display. In
comparison, in this paper, interaction is performed directly
on the display rather than using a secondary touch device.
Zhai et al. [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] introduced gesture interaction for wall displays
based on the distance of the user from the screen. The gestures
can be performed in far or near mode. Unlike the techniques
described in this paper, the proposed interaction gestures are
not directly related to visual analytics tasks. Heilig et al. [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]
developed multi-touch scatterplot visualisation on a tabletop
display. Sadana and Stasko [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] proposed advanced techniques
for scatterplot data selection on smaller touch-based devices,
such as tablets and smartphones, whereas this paper focuses
on large multi-touch displays.
        </p>
        <p>
          MultiLens supports various gestures for fluid multi-touch
exploration of graphs [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ]. The Regression Lens [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] allows
the user to interactively explore local areas of interest in
scatterplots by showing the best fitting regression models
inside the lens. The idea of visualising local regression models
is also studied by Matković et al. [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ]. Rzeszotarski et al. [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]
introduced Kinetica, a tool for exploring multivariate data by
physical interactions on multi-touch screens. Kister et al. [
          <xref ref-type="bibr" rid="ref35">35</xref>
          ]
presented BodyLenses, a promising set of magic lenses for
wall displays, which are mostly controlled by body interaction
and therefore suitable for interacting with wall displays from
a distance.
        </p>
        <p>In comparison to this work, the aforementioned studies
either focus on a different type of interaction and medium
or are not designed for collaborative visual analytics tasks.</p>
      </sec>
      <sec id="sec-2-3">
        <title>C. Collaborative Visualisation</title>
        <p>
          Large displays are well-suited to collaboration [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ], [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ].
Jakobsen and Hornbaek [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] conducted an exploratory study to
understand group work with high-resolution multi-touch wall
displays. The study suggests that using this kind of display
helps users to work more efficiently as a group and fluidly
change between parallel and joint work. A large display
benefits group working on a shared task, since users can operate
on one common physical medium and share information on
it.
        </p>
        <p>
          Morris et al. [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] formalised the concept of cooperative
gestures as a set of gestures performed by multiple users and
interpreted as a single task by the system. Liu et al. developed
CoReach [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], a set of gestures for collaboration between two
users over large multi-touch displays. Comparing the use of a
large vertically-mounted display against two ordinary desktop
displays, Prouzeau et al. [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ] concluded that groups obtain
better results and communicate better on large,
vertically-mounted displays.
        </p>
        <p>
          An experiment by Pedersen and Hornbaek [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ] showed
that users prefer horizontal surfaces over vertically-mounted
displays, but this result was limited to simple single-user
tasks and not collaborative tasks with different dynamics.
Vertically-mounted displays allow users to obtain an overview
of their data by stepping back from the display and make it
possible to interact from afar as well as up close. Badam et
al. [
          <xref ref-type="bibr" rid="ref32">32</xref>
          ] proposed a system for collaborative analysis on large
displays by controlling individual lenses through explicit
mid-air gestures.
        </p>
        <p>Although these studies are not directly related to
collaborative scatterplot analysis on large multi-touch displays, they
do provide valuable insights into the design process of such
systems.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>III. PROPOSED INTERACTION TECHNIQUES</title>
      <p>
        Current standard multi-touch interaction techniques are
not designed for collaboration on vertically-mounted
high-resolution displays [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Here, both single-user and
collaborative interactions are proposed for the analysis of
scatterplots and scatterplot matrices on such devices. Some of
the interaction techniques are based on the concept of the
Regression Lens [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], which supports real-time regression
analysis of subsets of a scatterplot through lens selection
and manipulation. With the Regression Lens, a user can select
a local area in a scatterplot and observe the regression model
of selected points [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Shao et al. proposed operations to
adjust and manipulate the regression model shown in the
Regression Lens, such as changing the degree of the regression
model or inverting its axes. Figure 1 illustrates some of the
suggested collaborative gestures on an 84-inch 4K Ultra HD
(3840 × 2160 at 60 Hz) multi-touch LCD monitor produced by Eyevis
[
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. The user on the left finds interesting scatterplots and
passes them to the user on the right. The user on the right
analyses the plots using the Regression Lens [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. In the rest
of this section, four interaction designs for both collaborative
and single scatterplot analysis are introduced. Later in Section
IV, an implementation of these techniques is demonstrated.
      </p>
      <sec id="sec-3-1">
        <title>A. Lens and Floating Toolbox</title>
        <p>
          Magic lens techniques like DragMagics [
          <xref ref-type="bibr" rid="ref34">34</xref>
          ] and
BodyLens [
          <xref ref-type="bibr" rid="ref35">35</xref>
          ] are used to explore local regions in a
visualisation. An extended version of the basic lens concept
provides for more fluid interaction with large multi-touch
displays. For instance, as shown in Figure 2, after a region of
interest has been selected in a scatterplot using the dominant
hand (here the right hand), a toolbox appears on the
other side of the lens (near the non-dominant hand), where
the user can use sliders and touch buttons to adjust the lens.
For example, the user can change the degree of the regression
model. The lens can be dragged with one hand, while being
adjusted with the second hand, thus potentially speeding up
performance.
        </p>
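        <p>As an illustrative sketch (an assumption for exposition, not the authors' implementation), the core operation of such a lens can be understood as selecting the points inside the lens region and fitting a polynomial of the user-chosen degree:</p>

```python
# Illustrative sketch, not the authors' code: select the points inside a
# circular lens and fit a least-squares polynomial of a user-chosen degree,
# mirroring what the Regression Lens does conceptually.
import numpy as np

def lens_regression(points, centre, radius, degree=1):
    """Fit a polynomial to the points falling inside the circular lens."""
    pts = np.asarray(points, dtype=float)
    inside = np.hypot(pts[:, 0] - centre[0], pts[:, 1] - centre[1]) <= radius
    sel = pts[inside]
    if len(sel) <= degree:              # too few points for a stable fit
        return None, sel
    coeffs = np.polyfit(sel[:, 0], sel[:, 1], degree)
    return coeffs, sel

# Points on y = 2x + 1, plus two outliers that lie outside the lens.
pts = [(x, 2 * x + 1) for x in np.linspace(0.0, 1.0, 20)] + [(9.0, 0.0), (9.5, 50.0)]
coeffs, sel = lens_regression(pts, centre=(0.5, 2.0), radius=1.5, degree=1)
print(np.round(coeffs, 3))              # slope and intercept, roughly [2. 1.]
```

        <p>Dragging the lens would re-run the selection and fit; the degree slider in the floating toolbox would correspond to the <monospace>degree</monospace> parameter in this sketch.</p>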
      </sec>
      <sec id="sec-3-2">
        <title>B. Two-Handed Interaction with Scatterplot Matrices</title>
        <p>
          A scatterplot matrix consists of pairwise scatterplots
arranged in a matrix, with dimensions typically labelled in the
diagonal cells. Since the number of dimensions is usually
high, panning and zooming within the scatterplot matrix is
almost inevitable. With common multi-touch interactions, the
scatterplot or dimension label is dragged to the corner of
the scatterplot matrix for panning. It is not feasible to zoom
into or out of a scatterplot matrix while dragging another
object. Based on two-handed interaction on tablets [
          <xref ref-type="bibr" rid="ref36">36</xref>
          ], a
two-handed technique is proposed whereby the dominant hand
is responsible for dragging items, while the non-dominant
hand performs common operations. As shown on the left side
of Figure 2, the user drags a scatterplot around to reorder
the plots in the scatterplot matrix. Panning is performed by
the non-dominant hand. With this two-handed technique, the
interactions needed to reorder scatterplots in a scatterplot
matrix can be reduced.
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>C. Collaboration using Gestures</title>
        <p>
          On large vertically-mounted collaborative displays, it is not
always desirable to move from one side of the screen to
the other to perform a task. Instead, collaborative gestures
can be used to pass objects. Based on the ideas of Liu et
al. [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], collaborative gestures on scatterplots are proposed.
On the right-hand side of Figure 3, the user on the left
analyses a scatterplot, while the user on the right selects
another scatterplot of interest. By holding the background of
the scatterplot matrix with one hand, and swiping with the
other hand, the scatterplot is passed over to the partner. The
partner can then decide whether or not to load the scatterplot
for comparison. This technique can also be used for other
tasks. For example, in Figure 4, the user selects a scatterplot
of interest from a scatterplot matrix by touching and holding it
with one hand (here, the left hand) and swipes the other hand
in the direction of the analysis panel to load that scatterplot
for more detailed analysis.
        </p>
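        <p>A minimal sketch of how such a hold-and-swipe gesture could be recognised follows; the event model, field names, and thresholds are assumptions for illustration only, not the TUIOFX API:</p>

```python
# Hypothetical event model (not TUIOFX): recognise the cooperative
# "hold + swipe" gesture that passes a scatterplot to a partner.
def detect_pass(touches, hold_min_ms=500, swipe_min_px=200):
    """touches: list of dicts with 'target', 'duration_ms', 'path' (x positions).
    Returns ('pass-right' | 'pass-left', held plot) or None."""
    holds = [t for t in touches
             if t["target"] != "background" and t["duration_ms"] >= hold_min_ms]
    swipes = [t for t in touches if len(t["path"]) >= 2
              and abs(t["path"][-1] - t["path"][0]) >= swipe_min_px]
    if holds and swipes:
        dx = swipes[0]["path"][-1] - swipes[0]["path"][0]
        return ("pass-right" if dx > 0 else "pass-left"), holds[0]["target"]
    return None

# One hand holds a scatterplot; the other swipes rightwards.
event = detect_pass([
    {"target": "scatterplot(3,5)", "duration_ms": 800, "path": [400, 402]},
    {"target": "background", "duration_ms": 300, "path": [900, 1000, 1250]},
])
print(event)   # ('pass-right', 'scatterplot(3,5)')
```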
      </sec>
      <sec id="sec-3-4">
        <title>D. Collaborative Lens</title>
        <p>In collaborative analysis, visual feedback plays an essential
role. When two analysts work on a vertically-mounted display
without proper visual feedback, they need to communicate
more and turn their heads more often. A collaborative lens
can help ameliorate this issue. As illustrated on the left side
of Figure 3, the user on the left side of the screen creates a
regression lens and regression model in blue. Meanwhile, the
user on the right side of the screen creates their regression
lens and regression model in red. Both users can see the other
user’s regression model reflected in their own regression lens.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>IV. IMPLEMENTATION</title>
      <p>Proof-of-concept interaction techniques for single-user and
collaborative analysis of scatterplots and scatterplot matrices
have been implemented on a vertically-mounted Eyevis
84-inch multi-touch display with a resolution of 3840 × 2160
pixels and a frame rate of 60 Hz. Figure 1 demonstrates a
typical setup of the implemented application with two users
working on the screen.</p>
      <p>
        The prototype application is written in Java, using JavaFX
for the user interface and the TUIO [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ] and the TUIOFX
library [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ] for multi-touch interaction. To enable multiple
users to work on the same screen with different widgets
and user interface elements at the same time, a concept
called focusArea from the TUIOFX library is used [
        <xref ref-type="bibr" rid="ref39">39</xref>
        ]. The
application follows the widely-used Model-View-Controller
(MVC) architecture.
      </p>
    </sec>
    <sec id="sec-5">
      <title>V. USE CASE</title>
      <p>
        The use case for the prototype application is to improve
interaction with the Regression Lens on multi-touch screens.
The developed interaction techniques were tested with the
well-known car dataset from the UCI Machine Learning
Repository [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ].
      </p>
      <p>For the interaction technique shown in Figure 1, user A (on
the left) and user B (on the right) select two different plots
from the shared central area containing the scatterplot matrix.
For this technique, the user holds and touches a scatterplot
with one hand and swipes to the right or left with the other
hand to maximise it. This technique is elaborated in detail in
Section III-C. After that, users A and B select an area in the
scatterplot separately and toggle the Collaborative Lens option
in the Floating Toolbox. As described in Section III-D, each
user is now able to observe the regression model of the other
user in their regression lens. Figure 1 shows two users working
side by side on a large vertically-mounted multi-touch display,
after creating two separate Regression Lenses and toggling to
the Double Lens option. The exact state of the screen is shown
in Figure 6. A single Regression Lens with a floating toolbox
is visible in Figure 5.</p>
    </sec>
    <sec id="sec-6">
      <title>VI. PERCEPTION-BASED LEVEL OF VISUAL DETAIL CONCEPTS FOR SCATTERPLOTS</title>
      <p>Users of large vertically-mounted high-resolution displays
may take up positions at varying distances from the display,
and hence may perceive more or less detail in the display.</p>
      <p>
        At greater distances from a large high-resolution display, less
detail is perceived. Here, perceived pixel density (PPD) is
defined as the number of pixels mapped to a single cell on
the retina of the user’s eye. PPD increases quadratically as
distance to the screen increases. When the PPD becomes too
large, the human perceptual system tends to average the
pixels' colour, brightness,
and contrast [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ]; for example, a red pixel and a green pixel
are perceived together as brown.
      </p>
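      <p>The quadratic growth can be illustrated with a back-of-the-envelope calculation; the 1-arcminute retinal cell size is an assumed, purely illustrative value, while the panel dimensions are those of the 84-inch 4K display described in this paper:</p>

```python
# Back-of-the-envelope sketch of how perceived pixel density (PPD) grows
# with viewing distance, assuming an 84-inch 4K panel and an (assumed)
# 1-arcminute retinal cell. All numbers are illustrative.
import math

DIAG_IN, W_PX = 84.0, 3840
width_m = DIAG_IN * 0.0254 * 16 / math.hypot(16, 9)   # panel width in metres
pitch = width_m / W_PX                                 # pixel pitch in metres

def pixels_per_cell(distance_m, cell_arcmin=1.0):
    """Pixels whose image falls within one retinal cell at this distance."""
    pixel_angle = 2 * math.atan(pitch / (2 * distance_m))   # radians
    cell_angle = math.radians(cell_arcmin / 60.0)
    return (cell_angle / pixel_angle) ** 2                  # area ratio

for d in (0.5, 1.0, 2.0, 4.0):
    print(f"{d} m: {pixels_per_cell(d):.2f} px per cell")
# Doubling the distance roughly quadruples the PPD (quadratic growth).
```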
      <p>The perceptual effect of averaging is well known, for
instance in the perception of secondary colours as a mixture
of two primary colours or in the phenomenon of metamerism.</p>
      <p>
        More related effects include simultaneous contrast [
        <xref ref-type="bibr" rid="ref42">42</xref>
        ],
afterimages [
        <xref ref-type="bibr" rid="ref43">43</xref>
        ], and the Chubb effect [
        <xref ref-type="bibr" rid="ref44">44</xref>
        ]. Without delving too
deeply into perception psychology, note that sophisticated
theories of averaging effects are already available and well
described. For the purpose of this discussion with respect
to large high-resolution displays, it is sufficient to state that
the effect of averaging a set of pixels is already exploited
in practice by techniques such as image mosaics [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ] and
halftone techniques [
        <xref ref-type="bibr" rid="ref45">45</xref>
        ], as illustrated in Figure 7.
      </p>
      <p>Since PPD and related averaging effects are a function of
distance from the display, screen distance can be seen as an
interactive parameter which can be exploited for visual data
analysis. Three techniques are proposed to apply a
perception-based level of detail to scatterplots on large vertically-mounted
high-resolution displays.</p>
      <p>Firstly, the concept of superpixels is similar to image
mosaics. A superpixel consists of a set of pixels in a small
rectangular area of the screen, for example a regular grid of
say 50×50 pixels. The average colour, brightness, and contrast
properties of superpixels can be used to visualise data for
users farther from the screen. At the same time, the individual
colouring of pixels comprising a superpixel can be used to
visualise more detailed information for users who are closer
to the screen.</p>
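      <p>A minimal sketch of the superpixel averaging step follows; it is an illustration under the assumptions stated in the text, not an implemented technique:</p>

```python
# Sketch of the superpixel idea (an illustration, not the authors' code):
# average the colours of each fixed-size block so that a distant viewer sees
# the block mean while a close viewer can still resolve individual pixels.
import numpy as np

def superpixels(image, block=50):
    """Mean colour per block; image is (H, W, 3), H and W multiples of block."""
    h, w, c = image.shape
    blocks = image.reshape(h // block, block, w // block, block, c)
    return blocks.mean(axis=(1, 3))     # (H/block, W/block, 3) block means

# A 100x100 image: left half pure red, right half pure green.
img = np.zeros((100, 100, 3))
img[:, :50, 0] = 1.0
img[:, 50:, 1] = 1.0
means = superpixels(img)
print(means.shape)        # (2, 2, 3)
print(means[0, 0])        # [1. 0. 0.] -- the left blocks average to red
```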
      <p>Secondly, the concept of a Screen Progressive Visual Glyph
(SPVG) utilises the colour, brightness, and contrast values of
a glyph to encode different secondary information for closer
users. In Figure 8, the scatterplot on the left visually encodes
two different classes (brown and cyan) in the data. This is
easily perceivable by a distant user. On the right, a user who is
closer can make out an additional level of detail: the dots of the
scatterplot in fact contain an additional histogram representing
the distribution of the related class in the data. In this case,
the circles representing the mapped data points are SPVGs.</p>
      <p>The difference between SPVGs and superpixels is that SPVGs
encode different visual details of the same data at different
distances. In this way, they could be understood as a data
filter concept as well. SPVGs can be placed on the screen
on demand and are not restricted to a regular grid, providing
greater flexibility.</p>
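      <p>The distance-dependent choice of detail for an SPVG could be sketched as a simple threshold rule; the thresholds and level names here are hypothetical:</p>

```python
# Hypothetical level-of-detail rule for an SPVG: choose what to render from
# the viewer's distance. Thresholds and labels are illustrative assumptions.
def spvg_level(distance_m):
    if distance_m > 3.0:
        return "class colour only"        # distant viewer: averaged dot
    if distance_m > 1.0:
        return "dot + class colour"       # mid-range viewer
    return "dot + embedded histogram"     # close viewer: full detail

for d in (4.0, 2.0, 0.5):
    print(d, "->", spvg_level(d))
```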
      <p>Thirdly, variational textures are related to halftone
techniques. Structural variations of an underlying texture can be
used to visually encode fine data details for users who are
very close to the screen, while these details will immediately
disappear when the user goes further away.</p>
      <p>
        These proposed approaches for level of visual detail align
well with Shneiderman’s mantra for information visualisation
[
        <xref ref-type="bibr" rid="ref46">46</xref>
        ]: “Overview first, zoom and filter, details on demand”. In
this case, distance from the screen is an additional degree of
freedom, controlled by each user individually as they move
closer to or further away from the display. The approaches
are discussed as a concept and have not yet been implemented.
      </p>
    </sec>
    <sec id="sec-7">
      <title>VII. DISCUSSION AND FUTURE WORK</title>
      <p>The concepts described in this paper are first designs of
appropriate touch interaction for the visual interactive analysis
of scatterplot data on large vertically-mounted high-resolution
multi-touch displays. The interactions support small-group
collaborative analysis, by exchanging patterns or settings from
one user’s view to the others. The interaction design is
currently based on user selections, but is generalisable to
other basic techniques. The interaction techniques have been
implemented as a proof of concept. They still need to be
evaluated with real users and real tasks as part of future
work. Mapping out the design space for this combination
of visualisation and display device may well yield further
interesting interaction designs.</p>
      <p>The idea of exploiting perception-based level of detail for
the visualisation of scatterplots on large displays is new.
Detailed information can be rendered inside the marks of
the plot, becoming perceivable once users are closer to the
screen. Again, this is a proof of concept and requires further
development and evaluation.</p>
      <p>
        While large high-resolution displays can improve the
exploration of large scatterplot spaces, further data analysis
support is needed to scale up with the number of data points
and dimensions. Traditional techniques like cluster analysis
and aggregation can help with scalability. Another relevant
line of improvement is to adjust the view to the user’s
need and situation. In [
        <xref ref-type="bibr" rid="ref47">47</xref>
        ], the authors propose using eye
tracking to infer user interest and using this information to
recommend additional relevant but previously unseen views
for exploration. While that work was developed as a desktop
application, it might be interesting to incorporate eye-tracking
support to recommend views for small-team collaborative work
on a large display. Moreover, adding group activity recognition,
and thereby pro-active interaction, can support collaboration
by preventing information overload [
        <xref ref-type="bibr" rid="ref48">48</xref>
        ].
      </p>
    </sec>
    <sec id="sec-8">
      <title>VIII. CONCLUDING REMARKS</title>
      <p>This paper presented challenges and solutions for
collaborative and single-task multi-touch interaction on large
vertically-mounted high-resolution displays. The techniques presented
are well-suited for collaborative analysis tasks with scatterplots
and scatterplot matrices. They are potentially generalisable to
other data exploration and visual analytics practices but require
further implementation and evaluation. Also, perception-based
visualisation of scatterplots is introduced as a possible
direction for further research.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Reda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. E.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Papka</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Leigh</surname>
          </string-name>
          , “
          <article-title>Effects of display size and resolution on user behavior and insight acquisition in visual exploration,”</article-title>
          <source>in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>2759</fpage>
          -
          <lpage>2768</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Ruddle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Fateen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Treanor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sondergeld</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
<surname>Quirke</surname>
          </string-name>
          , “
          <article-title>Leveraging wall-sized high-resolution displays for comparative genomics analyses of copy number variation,” in Biological Data Visualization (BioVis</article-title>
          ),
          <source>2013 IEEE Symposium on. IEEE</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>89</fpage>
          -
          <lpage>96</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Prouzeau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bezerianos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Chapuis</surname>
          </string-name>
          , “
          <article-title>Evaluating multi-user selection for exploring graph topology on wall-displays,”</article-title>
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Andrews</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Endert</surname>
          </string-name>
          , and C. North, “
          <article-title>Space to think: large highresolution displays for sensemaking,” in Proceedings of the SIGCHI conference on human factors in computing systems</article-title>
          . ACM,
          <year>2010</year>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>64</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Jakobsen</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Hornbaek</surname>
          </string-name>
          , “
          <article-title>Up close and personal: Collaborative work on a high-resolution multitouch wall display,” ACM Transactions on Computer-Human Interaction (TOCHI)</article-title>
          , vol.
          <volume>21</volume>
          , no.
          <issue>2</issue>
          , p.
          <fpage>11</fpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T. K.</given-names>
            <surname>Khan</surname>
          </string-name>
          , “
          <article-title>A survey of interaction techniques and devices for large high resolution displays,” in OASIcs-OpenAccess Series in Informatics</article-title>
          , vol.
          <volume>19</volume>
. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Vogt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bradel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Andrews</surname>
          </string-name>
          , C. North,
          <string-name>
            <given-names>A.</given-names>
            <surname>Endert</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Hutchings</surname>
          </string-name>
          , “
          <article-title>Co-located collaborative sensemaking on a large high-resolution display with multiple input devices</article-title>
,”
<source>Human-Computer Interaction – INTERACT 2011</source>
          , pp.
          <fpage>589</fpage>
          -
          <lpage>604</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Isenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Carpendale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bezerianos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Henry</surname>
          </string-name>
, and
<string-name>
  <given-names>J.-D.</given-names>
  <surname>Fekete</surname>
</string-name>
          , “Coconuttrix:
          <article-title>Collaborative retrofitting for information visualization</article-title>
          ,
          <source>” IEEE Computer Graphics and Applications</source>
          , vol.
          <volume>29</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>44</fpage>
          -
          <lpage>57</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Chapuis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Beaudouin-Lafon</surname>
          </string-name>
          , and E. Lecolinet, “Coreach:
          <article-title>Cooperative gestures for data manipulation on wall-sized displays</article-title>
          ,”
          <source>in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>6730</fpage>
          -
          <lpage>6741</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Morris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Paepcke</surname>
          </string-name>
          , and T. Winograd, “
          <article-title>Cooperative gestures: multi-user gestural interactions for co-located groupware,” in Proceedings of the SIGCHI conference on Human Factors in computing systems</article-title>
          . ACM,
          <year>2006</year>
          , pp.
          <fpage>1201</fpage>
          -
          <lpage>1210</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Inselberg</surname>
          </string-name>
          , “
          <article-title>The plane with parallel coordinates,” The visual computer</article-title>
          , vol.
          <volume>1</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>69</fpage>
          -
          <lpage>91</lpage>
          ,
          <year>1985</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ashdown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Tuddenham</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Robinson</surname>
          </string-name>
, “
<article-title>High-Resolution Interactive Displays</article-title>
,” in
<source>Tabletops - Horizontal Interactive Displays</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>71</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mahajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Schreck</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          , “
          <article-title>Interactive regression lens for exploring scatter plots,” in Computer Graphics Forum</article-title>
          , vol.
          <volume>36</volume>
          , no. 3. Wiley Online Library,
          <year>2017</year>
          , pp.
          <fpage>157</fpage>
          -
          <lpage>166</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. A.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Stasko</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Jacko</surname>
          </string-name>
          , “
          <article-title>Toward a deeper understanding of the role of interaction in information visualization</article-title>
          .
          <source>” IEEE transactions on visualization and computer graphics</source>
          , vol.
          <volume>13</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>1224</fpage>
          -
          <lpage>31</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>B.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Isenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. H.</given-names>
            <surname>Riche</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Carpendale</surname>
          </string-name>
          , “
          <article-title>Beyond mouse and keyboard: Expanding design considerations for information visualization interactions</article-title>
          ,
          <source>” IEEE Transactions on Visualization and Computer Graphics</source>
          , vol.
          <volume>18</volume>
          , no.
          <issue>12</issue>
          , pp.
          <fpage>2689</fpage>
          -
          <lpage>2698</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>D.</given-names>
            <surname>Keim</surname>
          </string-name>
          , “
          <article-title>Information visualization and visual data mining</article-title>
          ,
          <source>” IEEE Transactions on Visualization and Computer Graphics</source>
          , vol.
          <volume>8</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Ruddle</surname>
          </string-name>
          , R. G. Thomas,
          <string-name>
            <given-names>R. S.</given-names>
            <surname>Randell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Quirke</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Treanor</surname>
          </string-name>
          , “
          <article-title>Performance and interaction behaviour during visual search on large, high-resolution displays</article-title>
          ,
          <source>” Information Visualization</source>
          , vol.
          <volume>14</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>137</fpage>
          -
          <lpage>147</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Roberts</surname>
          </string-name>
          , “
          <article-title>Exploratory Visualization with Multiple Linked Views,” in Exploring Geovisualization</article-title>
          . Amsterdam: Elseviers,
          <year>2005</year>
          , pp.
          <fpage>159</fpage>
          -
          <lpage>180</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Czerwinski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Regan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Meyers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. G.</given-names>
            <surname>Robertson</surname>
          </string-name>
          , and G. Starkweather, “
          <article-title>Toward characterizing the productivity benefits of very large displays</article-title>
.” in Interact, vol.
<volume>3</volume>
,
<year>2003</year>
          , pp.
          <fpage>9</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>P.</given-names>
            <surname>Isenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dragicevic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Willett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bezerianos</surname>
          </string-name>
, and
<string-name>
  <given-names>J.-D.</given-names>
  <surname>Fekete</surname>
</string-name>
          , “
          <article-title>Hybrid-image visualization for large viewing environments,” IEEE transactions on visualization and computer graphics</article-title>
          , vol.
          <volume>19</volume>
          , no.
          <issue>12</issue>
          , pp.
          <fpage>2346</fpage>
          -
          <lpage>2355</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>T.</given-names>
            <surname>Tsandilas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bezerianos</surname>
          </string-name>
          , and T. Jacob, “Sketchsliders:
          <article-title>Sketching widgets for visual exploration on wall displays,”</article-title>
          <source>in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>3255</fpage>
          -
          <lpage>3264</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhai</surname>
          </string-name>
,
<string-name>
  <given-names>G.</given-names>
  <surname>Zhao</surname>
</string-name>
,
          <string-name>
            <given-names>T.</given-names>
            <surname>Alatalo</surname>
          </string-name>
, J. Heikkilä,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ojala</surname>
          </string-name>
          , and
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          , “
          <article-title>Gesture interaction for wall-sized touchscreen display,” in Proceedings of the 2013 ACM conference on Pervasive and ubiquitous computing adjunct publication</article-title>
          .
          <source>ACM</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>175</fpage>
          -
          <lpage>178</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>M.</given-names>
            <surname>Heilig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Huber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Demarmels</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Reiterer</surname>
          </string-name>
          , “
          <article-title>Scattertouch: a multi touch rubber sheet scatter plot visualization for co-located data exploration,” in ACM International Conference on Interactive Tabletops and Surfaces</article-title>
          . ACM,
          <year>2010</year>
          , pp.
          <fpage>263</fpage>
          -
          <lpage>264</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sadana</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Stasko</surname>
          </string-name>
          , “
          <article-title>Expanding selection for information visualization systems on tablet devices,” in Proceedings of the 2016 ACM on Interactive Surfaces and Spaces</article-title>
          . ACM,
          <year>2016</year>
          , pp.
          <fpage>149</fpage>
          -
          <lpage>158</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>U.</given-names>
            <surname>Kister</surname>
          </string-name>
          ,
<string-name>
  <given-names>P.</given-names>
  <surname>Reipschläger</surname>
</string-name>
, and
<string-name>
  <given-names>R.</given-names>
  <surname>Dachselt</surname>
</string-name>
          , “Multilens:
          <article-title>Fluent interaction with multi-functional multi-touch lenses for information visualization,” in Proceedings of the 2016 ACM on Interactive Surfaces and Spaces</article-title>
          . ACM,
          <year>2016</year>
          , pp.
          <fpage>139</fpage>
          -
          <lpage>148</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>K.</given-names>
<surname>Matković</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Abraham</surname>
          </string-name>
          ,
<string-name>
  <given-names>M.</given-names>
  <surname>Jelović</surname>
</string-name>
, and
          <string-name>
            <given-names>H.</given-names>
            <surname>Hauser</surname>
          </string-name>
          , “
          <article-title>Quantitative externalization of visual data analysis results using local regression models</article-title>
          ,” in
          <source>International Cross-Domain Conference for Machine Learning and Knowledge Extraction</source>
          . Springer,
          <year>2017</year>
          , pp.
          <fpage>199</fpage>
          -
          <lpage>218</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Rzeszotarski</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kittur</surname>
          </string-name>
          , “
          <article-title>Kinetica: naturalistic multi-touch data visualization,” in Proceedings of the 32nd annual ACM conference on Human factors in computing systems</article-title>
          . ACM,
          <year>2014</year>
          , pp.
          <fpage>897</fpage>
          -
          <lpage>906</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>C.</given-names>
            <surname>Andrews</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Endert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yost</surname>
          </string-name>
          , and C. North, “
          <article-title>Information visualization on large, high-resolution displays: Issues, challenges</article-title>
          , and opportunities,
          <source>” Information Visualization</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>341</fpage>
          -
          <lpage>355</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>P.</given-names>
            <surname>Isenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Isenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hesselmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Von Zadow</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>A.</given-names>
            <surname>Tang</surname>
          </string-name>
          , “
          <article-title>Data visualization on interactive surfaces: A research agenda</article-title>
          ,
          <source>” IEEE Computer Graphics and Applications</source>
          , vol.
          <volume>33</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>16</fpage>
          -
          <lpage>24</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>A.</given-names>
            <surname>Prouzeau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bezerianos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Chapuis</surname>
          </string-name>
          , “
          <article-title>Trade-offs between a vertical shared display and two desktops in a collaborative path-finding task</article-title>
          ,”
          <source>in Proceedings of Graphics Interface</source>
          <year>2017</year>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>E. W.</given-names>
            <surname>Pedersen</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Hornbaek</surname>
          </string-name>
          , “
          <article-title>An experimental comparison of touch interaction on vertical and horizontal surfaces</article-title>
          ,”
          <source>in Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design. ACM</source>
          ,
          <year>2012</year>
          , pp.
          <fpage>370</fpage>
          -
          <lpage>379</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Badam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Amini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Elmqvist</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Irani</surname>
          </string-name>
          , “
          <article-title>Supporting visual exploration for multiple users in large display environments,” in Visual Analytics Science</article-title>
          and
          <source>Technology (VAST)</source>
          ,
          <source>2016 IEEE Conference on. IEEE</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
[33] “Eyevis display 84-inch,”
<year>2017</year>
. [Online]. Available: http://www.eyevis.de/en/products/lcd-solutions/4k-ultra-hd-lcd-monitors/84-inch-4k-uhd-lcd.html
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>A.</given-names>
            <surname>Prouzeau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bezerianos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Chapuis</surname>
          </string-name>
, “
<article-title>Towards road traffic management with forecasting on wall displays</article-title>
,”
<source>in Proceedings of the 2016 ACM on Interactive Surfaces and Spaces. ACM</source>
,
          <year>2016</year>
          , pp.
          <fpage>119</fpage>
          -
          <lpage>128</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>U.</given-names>
            <surname>Kister</surname>
          </string-name>
,
<string-name>
  <given-names>P.</given-names>
  <surname>Reipschläger</surname>
</string-name>
,
          <string-name>
            <given-names>F.</given-names>
            <surname>Matulic</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Dachselt</surname>
          </string-name>
, “
<article-title>BodyLenses: Embodied magic lenses and personal territories for wall displays</article-title>
          ,”
          <source>in Proceedings of the 2015 International Conference on Interactive Tabletops &amp; Surfaces. ACM</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>117</fpage>
          -
          <lpage>126</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
<string-name>
  <given-names>K.-P.</given-names>
  <surname>Yee</surname>
</string-name>
, “
<article-title>Two-handed interaction on a tablet display</article-title>
,”
<source>in CHI '04 Extended Abstracts on Human Factors in Computing Systems. ACM</source>
,
          <year>2004</year>
          , pp.
          <fpage>1493</fpage>
          -
          <lpage>1496</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kaltenbrunner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Bovermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bencina</surname>
          </string-name>
, and
<string-name>
  <given-names>E.</given-names>
  <surname>Costanza</surname>
</string-name>
, “
<article-title>TUIO: A protocol for table-top tangible user interfaces</article-title>
,”
<source>in Proc. of the 6th Intl. Workshop on Gesture in Human-Computer Interaction and Simulation</source>
          ,
          <year>2005</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fetter</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Bimamisa</surname>
          </string-name>
, “
<article-title>TUIOFX: Toolkit support for the development of JavaFX applications for interactive tabletops</article-title>
,”
<source>in Human-Computer Interaction. Springer</source>
,
          <year>2015</year>
          , pp.
          <fpage>486</fpage>
          -
          <lpage>489</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fetter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bimamisa</surname>
          </string-name>
, and
<string-name>
  <given-names>T.</given-names>
  <surname>Gross</surname>
</string-name>
, “
<article-title>TUIOFX: A JavaFX toolkit for shared interactive surfaces</article-title>
,”
<source>Proceedings of the ACM on Human-Computer Interaction</source>
          , vol.
          <volume>1</volume>
          , no.
          <issue>1</issue>
          , p.
          <fpage>10</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lichman</surname>
          </string-name>
          , “
          <article-title>UCI machine learning repository</article-title>
          ,”
          <year>2013</year>
          . [Online]. Available: http://archive.ics.uci.edu/ml
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>A.</given-names>
            <surname>Finkelstein</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Range</surname>
          </string-name>
, “
<article-title>Image mosaics</article-title>
,” Princeton University, Computer Science Department,
          <source>Technical Report TR-574-98</source>
          , Mar.
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>P.</given-names>
            <surname>Burgh</surname>
          </string-name>
, “
<article-title>Peripheral viewing and simultaneous contrast</article-title>
,”
<source>The Quarterly Journal of Experimental Psychology</source>
, vol.
<volume>16</volume>
, no.
<issue>3</issue>
, pp.
          <fpage>257</fpage>
          -
          <lpage>263</lpage>
          ,
          <year>1964</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>S.</given-names>
            <surname>Anstis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Rogers</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Henry</surname>
          </string-name>
          , “
          <article-title>Interactions between simultaneous contrast and colored afterimages</article-title>
,”
<source>Vision Research</source>
          , pp.
          <fpage>899</fpage>
          -
          <lpage>911</lpage>
          ,
          <year>1978</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>C.</given-names>
            <surname>Chubb</surname>
          </string-name>
,
<string-name>
  <given-names>G.</given-names>
  <surname>Sperling</surname>
</string-name>
, and
          <string-name>
            <given-names>J.</given-names>
            <surname>Solomon</surname>
          </string-name>
          , “
          <article-title>Texture interactions determine perceived contrast</article-title>
,”
<source>Proc. Natl. Acad. Sci.</source>
, vol.
<volume>86</volume>
          , pp.
          <fpage>9631</fpage>
          -
          <lpage>9635</lpage>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>R.</given-names>
            <surname>Steinbrecher</surname>
          </string-name>
, “
<article-title>Bildverarbeitung in der Praxis [Image processing in practice]</article-title>
,”
<source>RST-Verlag, München-Wien-Oldenburg</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shneiderman</surname>
          </string-name>
          , “
          <article-title>The eyes have it: A task by data type taxonomy for information visualizations</article-title>
          ,”
          <source>in IEEE Visual Languages</source>
          ,
          <year>1996</year>
          , pp.
          <fpage>336</fpage>
          -
          <lpage>343</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>L.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Silva</surname>
          </string-name>
,
<string-name>
  <given-names>E.</given-names>
  <surname>Eggeling</surname>
</string-name>
, and
<string-name>
  <given-names>T.</given-names>
  <surname>Schreck</surname>
</string-name>
, “
<article-title>Visual exploration of large scatter plot matrices by pattern recommendation based on eye tracking</article-title>
,”
          <source>in Proceedings of the 2017 ACM Workshop on Exploratory Search and Interactive Data Analytics. ACM</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>9</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gordon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-H.</given-names>
            <surname>Hanne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Berchtold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A. N.</given-names>
            <surname>Shirehjini</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Beigl</surname>
          </string-name>
, “
<article-title>Towards collaborative group activity recognition using mobile devices</article-title>
,”
<source>Mobile Networks and Applications</source>
          , vol.
          <volume>18</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>326</fpage>
          -
          <lpage>340</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>