<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Reflections on Visualization in Motion for Fitness Trackers</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alaul Islam</string-name>
          <email>mohammad-alaul.islam@inria.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lijie Yao</string-name>
          <email>lijie.yao@inria.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anastasia Bezerianos</string-name>
          <email>anastasia.bezerianos@lri.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tanja Blascheck</string-name>
          <email>tanja.blascheck@vis.uni-stuttgart.de</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tingying He</string-name>
          <email>tingying.he@inria.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bongshin Lee</string-name>
          <email>bongshin@microsoft.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Romain Vuillemot</string-name>
          <email>romain.vuillemot@ec-lyon.fr</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Petra Isenberg</string-name>
          <email>petra.isenberg@inria.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Microsoft Research</institution>
          ,
          <addr-line>Redmond, WA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Université Paris-Saclay</institution>
          ,
          <addr-line>CNRS, Inria, LISN, 91190, Gif-sur-Yvette</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Université de Lyon, École Centrale de Lyon</institution>
          ,
          <addr-line>CNRS, UMR5205, LIRIS, F-69134</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Stuttgart</institution>
          ,
          <addr-line>Stuttgart</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper, we reflect on our past work towards understanding how to design visualizations for fitness trackers that are used in motion. We have coined the term “visualization in motion” for visualizations that are used in the presence of relative motion between a viewer and the visualization. Here, we describe how visualization in motion is relevant to sports scenarios. We also provide new data on current smartwatch visualizations for sports and discuss future challenges for visualizations in motion for fitness trackers.</p>
      </abstract>
      <kwd-group>
        <kwd>Visualization in Motion</kwd>
        <kwd>Sports Analytics</kwd>
        <kwd>Wearable Devices</kwd>
        <kwd>Fitness Trackers</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Fitness trackers, such as smartwatches and fitness bands, record a variety of data. Most of
these devices also visualize the collected data and make it immediately available to wearers.
Smartwatch faces, in particular, have become mini data dashboards that can give an overview
of data such as step counts, heart rates, locations, sleep information or even device-external
data such as the current temperature or weather predictions. Due to their small screen size
and usage context, fitness tracker screens pose several novel and interesting challenges to
visualization: visualizations need not only to be small and glanceable but also often to be read
in motion. For example, when an athlete trains for a race, they can only afford quick glances
at a smartwatch while running, to concentrate on the path to take and avoid accidents. As
stopping the race to look at a watch is not a desired option, the watch needs to be read while
the runner’s body, including their arms, is moving. During a quick glance at the tracker, the
athlete may want to take in multiple pieces of information at once: current race time, heart rate, and
distance run are just three examples. Unfortunately, there is still little advice on how to design
effective information dashboards for fitness trackers, and existing designs are built without
strong empirical foundations.</p>
      <p>To address this problem, we have recently begun to work in two directions: a) visualizations
in motion, in which we assess the effects of motion on the perception of visualizations, and b)
visualizations for fitness trackers and, in particular, smartwatches. In this paper, we briefly
introduce our past work with a focus on smartwatch-type fitness trackers, provide some new
data on existing smartwatch faces for sports, and outline dedicated challenges for visualization
in motion for fitness trackers.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>While neither visualization in motion nor fitness tracker visualization has a long history
of research, some relevant past work does exist. The following background presents the
definition of visualization in motion and briefly outlines the larger research space. The second
section focuses on fitness tracker visualization in the context of health and sports, and that of
visualization in motion in relation to fitness trackers.</p>
      <sec id="sec-2-1">
        <title>2.1. What is Visualization in Motion?</title>
        <p>
          In our recent paper [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], we defined visualization in motion as:
        </p>
        <p>Visual data representations that are used in contexts that exhibit relative motion
between a viewer and an entire visualization.</p>
        <p>
          Visualizations in motion specifically concern relative movement between visualizations and
viewers and, therefore, they are different from animation of visualization components that
are meant to express highlights, to smooth transitions between views [
          <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4, 5</xref>
          ], or to morph
between different representations [6, 7, 8, 9, 10]. Relative motion between entire visualizations
and viewers is relatively common in the sports context, but has not been explored in depth.
Examples include stationary players who sit in front of a screen while playing a sports game, in
which a game character (e.g., an American football player) moves with attached donut charts
showing data related to the character (Figure 1a), audiences sitting in a stadium while watching
an augmented basketball game that shows data next to players (Figure 1b), people walking
across or driving by physicalizations (Figure 1c and 1d), a person reading how many calories
they have burned from a fitness tracker while exercising (Figure 1e), and a traveler navigating
using a phone while walking (Figure 1f). These scenarios can be grouped into three categories
of visualization in motion:
• moving visualization &amp; stationary viewer
• stationary visualization &amp; moving viewer
• moving visualization &amp; moving viewer
        </p>
        <p>In this paper, we bring together our work on fitness trackers and visualization in motion, and
thus focus on the last group that involves moving visualizations &amp; moving viewer. Our specific
focus on fitness trackers was motivated by the fact that they already carry visualizations and
their wearers are not only often moving but also have information needs while on the go, such
as learning about their performance and condition.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Fitness Tracker Visualizations</title>
        <p>Choosing what type of data and how to show it to wearers is a fundamental challenge that
can impact how devices are adopted. In our own work, we used commercial fitness trackers
such as fitness bands and smartwatches because our focus was on data representation and
not on the development of new technologies. However, we acknowledge that many types of
wearable displays have been proposed [11] and discuss some challenges related to these in our
research agenda.</p>
        <p>Niess et al. [12] studied the impact of various approaches to represent unmet fitness tracker
goals through visualization on rumination, highlighting that multicolored charts on fitness
trackers may lead to demotivation and negative thought cycles. Havlucu et al. [13] interviewed
20 professional tennis players and found that the players’ abandonment of their trackers was
due to the type of information displayed on the fitness trackers. Participants wished to see
tennis-specific data, recovery rate, and nutrition, as well as precise technical data regarding
their tennis performance, such as where the ball hit the racket, the speed of a stroke, how the
ball bounced off the floor, general mobility on the court, as well as weak points and errors
regarding their own game.</p>
        <p>Outside of the professional sports context, smartwatches also have a lot of potential to be an
essential part of the personal health movement. Yet, even with a potentially large target audience,
visualization guidelines for fitness trackers are still sparse. Most of the past studies discussed
health and physical activity data representations on smartwatches and mentioned the challenges
of representing these data types [14]. Van Rossum [15] suggested that smartwatch visualizations
aim for clear, easy-to-understand visuals, use a black background for contrast, and cause less
disturbance in dim environments. Albers et al. [16] showed that the tasks that wearers do when
exploring a visualization are influenced by the visualization’s design and choices of visual factors
(e.g., position, color), mapping variables (e.g., raw data, averages), and computational variables
(how aggregated data are computed). Pektaş et al. [17] showed how visualization using icons and
emoji on warnings and alerts could motivate wearers to monitor health-related information. In
contrast to these works, we are interested in fitness tracker visualization in motion, specifically
when wearers are on the move during sports activities, which is less explored in the literature.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Visualization in Motion for Wearable and Mobile Devices</title>
        <p>In the context of wearable devices, relative motion is most often created when both viewers
and visualizations are in motion such as during a run or walk. Several previous studies on mobile
phones have shown that walking increased workload and reduced performance in reading tasks
[18, 19, 20]. As cognitive resources need to be similarly shared between navigation and reading
data, it seems reasonable to expect similar negative effects for visualizations in motion on
fitness trackers. However, the exact effects have not been studied in enough depth to make
recommendations for the design of visualization in motion. Although moving participants were
involved in the studies by Schiewe et al. [21] on visualizations for real-time feedback during
running activities, by Amini et al. [22] on in-situ health and fitness data exploration for fitness
trackers, and by Langer et al. [23] on crash risk indication applications for sports smartwatches
in the context of mountain biking, the effects of relative movement between the displayed charts
and the exercising people received little to no dedicated attention.</p>
        <p>However, we may take inspiration from another research area containing moving viewers and
moving visual targets: immersive analytics. Literature from psychology has shown that walking
in VR may have a negative impact on multi-object tracking [24]. In fact, several research efforts
in VR have targeted a viewer’s motion, such as examples illustrated in Locomotion Vault [25].
Examples collected by Locomotion Vault include one showing that, in a virtual environment,
the viewer’s spatial memory can benefit from common motion effects such as walking. Grioui
and Blascheck [26] conducted a first pilot on heart rate reading from a virtual smartwatch in
the context of a VR game that gave preliminary indications that heart rate visualizations in the
form of summary charts might be effective for making decisions to reach heart rate goals. Thus,
how people will perform when reading visualizations in motion in an immersive environment
still requires more dedicated work.</p>
        <p>In summary, literature on how visualizations are read under motion or how they should be
designed to be effective in a sports context is still too sparse to make clear recommendations.
Next, we outline some of our past research on how visualizations are currently designed for
sports-related smartwatch faces before moving on to recommend a research agenda for this
emerging topic.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Guidelines for Visualization Design</title>
        <p>Ample evidence exists that visualization choice and design will impact the readability of
visualizations without motion. These design factors need to be explored again specifically for
micro (very small) visualizations on fitness trackers used while in motion. Example factors
include representation type [27, 28], the visualization complexity [29, 30], the decoration of
the representation [31, 32, 33], the size of the visualization [34], the color selection [35, 36],
and specifically for a micro display with limited space, the visualization density [37]. Previous
research has shown that cognitive overload can occur when too much information is presented
during attention-demanding sports like tennis [38]. As such, information likely needs to be
minimal, context-specific, and glanceable to the wearers. Gouveia et al. [39] showed that the
average wearer’s involvement with the trackers was brief, around 5 seconds, without further interaction.
However, we expect the duration to be much shorter than that during sports activities. Previous
research showed that people could effectively read even complex sleep visualizations on fitness
trackers [40] and perform simple comparison tasks with visualizations on smartwatches within
several hundred milliseconds [41], providing evidence that visualizations could be effective
forms of data representations in the context of fitness trackers. However, ambient illumination,
lighting effects [42, 43] or motion textures [44] for fitness trackers could also be a possible way
to achieve glanceability during attention-requiring sports activities, during which wearers may
get feedback through color changes, brightness levels, or texture changes.</p>
        <p>Yet, the exact limits on how much information can be displayed and at what sizes are still
underexplored. Should all data be represented with a visualization? If not, what would be a
good number to have?</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Current Visualizations for Smartwatch Faces</title>
      <p>The screens that wearers of smartwatches look at most often are the “home” screen or
“smartwatch face.” These smartwatch faces show time but also a variety of additional data to
wearers and are often designed for specific themes, including sports. To better understand what
type of data current sports watch faces show to wearers and how this data is represented, we
conducted a systematic review of sports category tagged watch faces from the Facer App [45].</p>
      <sec id="sec-3-1">
        <title>3.1. Data Collection</title>
        <p>We decided to collect watch faces from the Facer App, one of the most popular smartwatch
face distribution websites. It contains a Top100 page that lists the premium or free watch
faces of Apple and WearOS/Samsung smartwatches. Because the list for the Apple Watch did
not consistently contain 100 faces, we chose to focus on the WearOS/Samsung watch faces.
Nevertheless, the watch faces we collected can also be used on square-shaped, Apple Watch-like
devices. We manually collected the metadata of the top 100 smartwatch faces every
Sunday for one month, starting from March 14, 2021, because the premium list was recalculated
on Sundays. The metadata collected for each watch face included its rank, name, category,
link, and thumbnail. Among the 400 top watch faces we collected, 184 were unique watch faces,
as several appeared in the top 100 for multiple weeks. From the 184 unique smartwatch faces,
we found that 42 watch faces were categorized as sports watch faces.</p>
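        <p>The deduplication and filtering step described above can be sketched as follows. This is a minimal
sketch: the field names ("name", "link", "category") follow the metadata listed in the text, but the
exact storage format used during collection is an assumption:</p>
        <preformat>
```python
# Sketch: merge weekly top-100 snapshots into unique watch faces,
# then filter by the tagged category. Field names are assumptions.

def unique_faces(weekly_snapshots):
    """Keep one record per watch face, keyed by its Facer link."""
    seen = {}
    for snapshot in weekly_snapshots:
        for face in snapshot:
            seen.setdefault(face["link"], face)  # first occurrence wins
    return list(seen.values())

def faces_in_category(faces, category):
    """Filter the unique faces by their tagged category."""
    return [f for f in faces if f["category"] == category]

# Toy example with two weekly snapshots:
week1 = [{"name": "RunPro", "link": "/f/1", "category": "sports"},
         {"name": "Classic", "link": "/f/2", "category": "casual"}]
week2 = [{"name": "RunPro", "link": "/f/1", "category": "sports"},  # repeat from week 1
         {"name": "AltiTrack", "link": "/f/3", "category": "sports"}]

faces = unique_faces([week1, week2])
sports = faces_in_category(faces, "sports")
print(len(faces), len(sports))  # → 3 2
```
        </preformat>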
        <p>We analyzed the watch faces based on the extracted thumbnail images. If a design was unclear from
the thumbnail image, we went to the Facer website to look at the simulated watch face graphic.
We grouped our results according to the data shown on a sports-tagged watch face and the data
representation types.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. What Data is Shown on Sport Watch Faces?</title>
        <p>One of the difficulties with designing data visualizations for smartwatch faces is that these
visualizations typically show many types of independent data (steps, weather, battery levels, etc.)
that need to be shown in a coherent watch face design. These non-time/date data functionalities
on smartwatches are called complications [46]. In this sense, watch faces with several
complications can be considered small personal dashboards with distinctive design challenges. These
design challenges include limited display space for a large number of possible complications,
device form factors, as well as the specific context of use, in our case, sports activity, that often
requires information to be readable at a glance. In addition, watch faces require that time or date
is readable and often remains the primary data shown. We present our findings from analyzing
42 sports watch faces in the following.</p>
        <p>Number of Data Types. The watch faces from the sports category contained a median
of six data types, similar to Islam et al.’s smartwatch face survey [47], in which participants
reported a median of five data types.</p>
        <p>Types of Data. Health &amp; fitness related data were the most common. We found 41.05% health
&amp; fitness related data, among which step count and heart rate were the most common. However,
the watch faces also contained 35.37% weather &amp; planetary data such as temperature and sky
condition, and 23.58% device &amp; location related data such as watch and phone battery level.
Among the top 10 most common data items, we found four (step count, heart rate, distance
traveled, and calories burned) that were health &amp; fitness related. The day’s temperature,
weather condition, moon phase, and sunset/sunrise time were the most frequently
displayed weather data on sports watch faces. Watch battery level, the most frequently
displayed data item overall, as well as phone battery level were device &amp; location related data items
displayed on the sports watch faces.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. How are the Sport Watch Faces Designed?</title>
        <p>Watch faces were generally composed of components that we group into those representing
time, complications, and decorations, each with its own representation styles. Some of the example
sports smartwatch faces are shown in Figure 2.</p>
        <p>Time display. Watch faces can be divided into digital, analog, and hybrid watch faces depending
on the time display. Digital watch faces represent time information as HH:MM:SS for hours,
minutes, and potentially seconds. Analog watch faces typically use the hour, minute, and second
hands to indicate the time, to resemble conventional analog watches. Hybrid watch faces show
both digital and analog time displays. Our analysis showed that the majority of premium sports
watch faces were hybrid watch faces (40.5%), followed by digital watch faces (33.3%) and analog
watch faces (26.2%).</p>
        <p>Data Type Representations. We found seven ways of representing data, based
on combinations of text, icons, and charts, as shown in Figure 3. We classified graphical
content as icons not in the strict semiotic sense but analogously to how the term is used
in computing: here, an icon is a type of image that represents something else. As such, our icons
can be both semiotic symbols and semiotic icons. Figure 3 shows the average number of
representation types on each sports watch face. A simple text label (Only Text) was the
most common representation type and was used for two data types on average on each watch
face (M = 2, 95% CI: [1.45, 2.57]). Icons accompanied by text labels (Icon+Text) were the
second most common (M = 1.6, 95% CI: [1.17, 2.05]). In Islam et al.’s survey [47], Icon+Text
had been the most common representation type, used to display two kinds of data types on
average on each watch face (M = 2.05, 95% CI: [1.78, 2.32]), followed by Only Text (M = 1.38, 95%
CI: [1.13, 1.66]). Both evaluations clearly show that text is the most frequent way to represent
data on watch faces, while charts or charts combined with text or icons were rare in practice.
Chart+Text (M = 0.69, 95% CI: [0.4, 1]), Chart Only (M = 0.55, 95% CI: [0.38, 0.71]),
Chart+Icon+Text (M = 0.45, 95% CI: [0.24, 0.71]), and Chart+Icon (M = 0.14,
95% CI: [0.05, 0.29]) appeared on average less than once per sports watch face. One notable
difference in the data was in Only Icon displays. Representations that rely purely on a small
image, such as weather icons, are still rare on watch faces.
In this sports watch face analysis, Only Icon displays were, as expected, much rarer, and
we saw them only for weather data (14×), moon phases (2×), and wind directions (1×).</p>
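        <p>The per-face means and 95% confidence intervals reported above can be reproduced, for example,
with a percentile bootstrap over the per-watch-face counts. The following is a minimal sketch with
hypothetical counts; the exact CI procedure used in the analysis is not specified in the text:</p>
        <preformat>
```python
import random

def bootstrap_ci(values, n_boot=10000, seed=42):
    """Mean and percentile-bootstrap 95% CI of per-watch-face counts."""
    rng = random.Random(seed)
    n = len(values)
    # Resample with replacement n_boot times and record each resample mean.
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    mean = sum(values) / n
    lo = means[int(0.025 * n_boot)]  # 2.5th percentile
    hi = means[int(0.975 * n_boot)]  # 97.5th percentile
    return mean, lo, hi

# Hypothetical counts of "Only Text" elements on ten watch faces:
counts = [0, 1, 2, 2, 3, 1, 4, 2, 0, 5]
mean, lo, hi = bootstrap_ci(counts)
print(round(mean, 2))  # → 2.0
```
        </preformat>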
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Research Agenda for Visualizations in Motion on Fitness Trackers</title>
      <p>When fitness trackers are worn during sports activities that involve moving one’s arms
(walking, running, swimming, skiing, climbing etc.), the displays will be in motion relative to
the wearer’s gaze. Depending on the activity, the relative motion will be more or less predictable,
and more or less quick, and the wearer will have different information needs. Next, we outline
several aspects of visualization design for fitness trackers that require more research when the
intended use involves motion.</p>
      <sec id="sec-5-1">
        <title>4.1. Understanding the Influence of Motion</title>
        <p>
          Motion characteristics such as speed, acceleration, trajectories, or direction may have an
impact on the readability of visualizations. Yao et al. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] conducted two initial evaluations of
how donut charts’ moving speed and trajectory affected reading accuracy. Their results
showed that participants’ performance was better with linear trajectories and slow speeds than
with irregular trajectories and fast speeds. However, in their experiment, all participants were
stationary and sat in front of a screen larger than 13 inches. Because fitness trackers have a
much smaller display size and many application scenarios involve moving viewers, the impact
of motion characteristics requires further research in this context.
        </p>
        <p>In addition, motion in realistic indoor and outdoor scenarios will entail additional challenges
such as changing lighting conditions, the presence of equipment, and a primary task. The
type of sport itself will largely determine the types of motion characteristics and the extent
of secondary factors. As such, dedicated research is likely necessary. The characteristics of
the different sport types determine the continuity of the viewer’s movement, and the presence
of required sports equipment can directly affect the viewer’s ability to read or even attach a
fitness tracker. For example, swimming goggles may filter certain light or reduce the field of
view, and heavy coats worn while skiing might make it difficult to access a wrist-worn
smartwatch screen. Finally, the needed concentration on primary tasks determines the length
of time available for the viewer to read from their fitness tracker.</p>
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Understanding How Context Matters</title>
        <p>The primarily intended context of a fitness tracker’s use needs to be considered in its graphical
and interaction design. The default for some Garmin watches, for example, is to show data
during exercise using a large black font on a white background. No visualizations are shown. Is
this the most effective way to communicate data to wearers or the one that ensures the most
safety during other primary tasks? Especially contexts with divided attention, for example,
glancing during driving, cycling, or running, require further research attention. Here, viewers
can only afford quick glances at watch faces. Visualizations in these settings are difficult to
evaluate and test, and future work is needed not only on which visualizations are glanceable
but also on study methodologies to actually measure glanceability during sports activities.</p>
        <p>Another important factor is the intended task context for watch faces. Islam et al. [48] showed
that, with dedicated ideation exercises, watch faces targeting specific usage contexts, such as
sightseeing in their case, could easily be envisioned. Digital watch faces are easy to switch, and
it would be interesting to study the impact of dedicated but changing watch faces on wearers.
As mentioned above, the design as well as the placement of fitness trackers likely needs to be
specific to different types of sports. During swimming, for example, it is almost impossible
to read in-situ performance such as heart rate or lap times unless the swimmer stops to see
their smartwatch. Completely new technology might be needed to support certain sports well.
Attaching visualizations directly to the bottom of a swimming pool may, for example, be a more
effective information display for swimmers than a wearable device.</p>
      </sec>
      <sec id="sec-5-3">
        <title>4.3. Display Types</title>
        <p>Being able to focus on the primary task is vital during sports activities. The capabilities of
the technology chosen to display visualizations in motion may have a large impact on how
well athletes can focus on their performance. Heller et al. [11] discussed a design space for
wearable displays with two main dimensions: on-body placement and display content. As they
showed, branching out from commercial fitness trackers to wearable accessories, clothing, or
skin and body projections is a possibility and ample research opportunities for visualization
design exist—not only for performance-oriented displays but also for ambient visualization [49].
In addition, interaction with these displays could be taken into account. Burstyn et al. [50], for
example, presented an interactive wrist worn device prototype in which the display could adjust
to the wearer’s body pose. As hand and arm postures can change rapidly during an activity,
fitness trackers that are body-pose aware could change the rotation, size, and location of a
visualization to be most readable. While not technically “visual,” another possibility to represent
data is to examine sonification, which involves mapping information to sound characteristics.
Godbout and Boyd [51] showed how speed skaters are alerted with sonification when something
is wrong and additionally how they are informed in which way they performed incorrectly.
It would be valuable to explore further how to leverage sonification to facilitate more fluid
smartwatch interaction while “on the go.” Apart from sonification, other non-visual channels
such as touch can also be useful in eyes-free contexts. For example, in Neshati et al.’s work
[52] on tactile line chart reading, a tip on participants’ skin allowed them to perceive the data.
Similarly, other tactile methods, including vibration, should be further explored to determine
what kinds of data can be read from this sensory channel and how well.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusions</title>
      <p>The goal of this paper is to bring attention to an interesting and still wide-open area of
research in the domain of sports: visualization in motion for fitness trackers. We explained
visualization in motion as a direction of research and how it is relevant to fitness trackers.
We also provided evidence of current practice of sports watch faces and outlined in a brief
research agenda what questions remain to be explored. Our survey on sports watch faces
showed that wearers had six complications on average, in addition to time on their watch faces.
The highest number of complications was 16. Future research is needed to determine how
many complications on a small smartwatch display can effectively communicate information
to wearers during sports activities. There are several avenues of scalability to explore in the
context of smartwatch visualization in motion: more data, smaller sizes, and more visualizations,
and specifically the glanceability of these visualizations. A general research question
that remains to be solved is how visualizations are read and studied in the context of real
application scenarios. In summary, we discussed the need to research the following aspects of
visualization design for fitness trackers:
• readability of different visual designs such as chart types, color choices, etc.,
• scalability of visualization numbers, sizes, and types of data items,
• glanceability of different visual designs,
• the impact of motion factors in the context of specific sports,
• the readability of visualizations under divided attention,
• and different sensory modalities for data representation on fitness trackers.</p>
      <p>Visualizations in motion are, however, also relevant in other sports-related scenarios as we
outlined earlier: augmented reality when watching sports, sports video games, or when static
visualizations are read by athletes during their activities. Similar to the specific challenges
outlined for fitness trackers, these other scenarios require future research. We hope that our
paper will be used as a foundation for discussion and inspiration for future work that tackles
the interesting remaining research questions on visualization in motion.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was partly supported by the Agence Nationale de la Recherche (ANR), grant
numbers ANR-18-CE92-0059-01 and ANR-19-CE33-0012. Tanja Blascheck is funded by the
Ministry of Science, Research and Art Baden-Württemberg.</p>
      <p>Image credits: All images copyright of the person granting permission: Figure 1a: non-commercial
use under agreement [53], Figure 1b: SportBuzzBusiness [54], Figure 1c: Dario Rodighiero, Figure 1d:
Eddie Camp of Respect New Haven.</p>
      <p>[5] R. Veras, C. Collins, Saliency deficit and motion outlier detection in animated scatterplots,
in: Proc. of the Conference on Human Factors in Computing Systems, number 541 in
CHI’19, 2019, pp. 1–12. doi:10.1145/3290605.3300771.
[6] J. Heer, G. Robertson, Animated transitions in statistical data graphics, IEEE Transactions
on Visualization and Computer Graphics 13 (2007) 1240–1247. doi:10.1109/TVCG.2007.
70539.
[7] N. Elmqvist, P. Dragicevic, J. Fekete, Rolling the dice: Multidimensional visual exploration
using scatterplot matrix navigation, IEEE Transactions on Visualization and Computer
Graphics 14 (2008) 1539–1548. doi:10.1109/TVCG.2008.153.
[8] G. G. Robertson, J. D. Mackinlay, S. K. Card, Cone Trees: Animated 3d visualizations of
hierarchical information, in: Proc. of the Conference on Human Factors in Computing
Systems, 1991, pp. 189–194. doi:10.1145/108844.108883.
[9] T. Bladh, D. Carr, M. Kljun, The effect of animated transitions on user navigation in 3D
tree-maps, in: Proc. of the International Conference on Information Visualisation, 2005,
pp. 297–305. doi:10.1109/IV.2005.122.
[10] P. Ruchikachorn, K. Mueller, Learning visualizations by analogy: Promoting visual literacy
through visualization morphing, IEEE Transactions on Visualization and Computer
Graphics 21 (2015) 1028–1044. doi:10.1109/TVCG.2015.2413786.
[11] F. Heller, K. Todi, K. Luyten, An interactive design space for wearable displays, in:
Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction,
MobileHCI ’21, Association for Computing Machinery, New York, NY, USA, 2021. doi:10.
1145/3447526.3472034.
[12] J. Niess, K. Knaving, A. Kolb, P. W. Woźniak, Exploring Fitness Tracker Visualisations to
Avoid Rumination, ACM, 2020. doi:10.1145/3379503.3405662.
[13] H. Havlucu, I. Bostan, A. Coskun, O. Özcan, Understanding the lonesome tennis players:
Insights for future wearables, in: Proceedings of the 2017 CHI Conference Extended
Abstracts on Human Factors in Computing Systems, CHI EA ’17, ACM, 2017, pp. 1678–
1685. doi:10.1145/3027063.3053102.
[14] A. Neshati, Y. Sakamoto, P. Irani, Challenges in displaying health data on small smartwatch
screens, Studies in health technology and informatics 257 (2019) 325–332. doi:10.3233/
978-1-61499-951-5-325.
[15] M. van Rossum, Patient empowerment via a smartwatch activity coach application: Let
the patient gain back control over their physical and mental health condition (2020). URL:
http://resolver.tudelft.nl/uuid:472a369f-0915-4486-85d0-40323932e3a9, last visited: July,
2022.
[16] D. Albers, M. Correll, M. Gleicher, Task-driven evaluation of aggregation in time series
visualization, in: Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems, CHI ’14, ACM, 2014, pp. 551–560. doi:10.1145/2556288.2557200.
[17] Ö. Pektaş, M. Köseoğlu, M. Muzny, G. Hartvigsen, E. Årsand, Design of an android wear
smartwatch application as a wearable interface to the diabetes diary application, Academic
Platform – Journal of Engineering and Science 9 (2021) 126–133. doi:10.21541/apjes.
660490.
[18] B. Schildbach, E. Rukzio, Investigating selection and reading performance on a mobile
phone while walking, in: Proceedings of the Conference on Human Computer Interaction
with Mobile Devices and Services (MobileHCI), ACM, 2010, pp. 93–102. doi:10.1145/
1851600.1851619.
[19] T. Mustonen, M. Olkkonen, J. Hakkinen, Examining mobile phone text legibility while
walking, 2004. doi:10.1145/985921.986034.
[20] K. Vadas, N. Patel, K. Lyons, T. Starner, J. Jacko, Reading on-the-go: A comparison of audio
and hand-held displays, 2006. doi:10.1145/1152215.1152262.
[21] A. Schiewe, A. Krekhov, F. Kerber, F. Daiber, J. Krüger, A study on real-time visualizations
during sports activities on smartwatches, in: Proc. of the International Conference on
Mobile and Ubiquitous Multimedia, 2020, pp. 18–31. doi:10.1145/3428361.3428409.
[22] F. Amini, K. Hasan, A. Bunt, P. Irani, Data representations for in-situ exploration of health
and fitness data, in: Proc. of the Conference on Pervasive Computing Technologies for
Healthcare, 2017, pp. 163–172. doi:10.1145/3154862.3154879.
[23] S. Langer, D. Dietz, A. Butz, Towards Risk Indication In Mountain Biking Using Smart
Wearables, Association for Computing Machinery, 2021. doi:10.1145/3411763.3451746.
[24] L. E. Thomas, A. E. Seifert, Self-motion impairs multiple-object tracking, Cognition 117
(2010) 80–86. doi:10.1016/j.cognition.2010.07.002.
[25] M. Di Luca, H. Seifi, S. Egan, M. Gonzalez-Franco, Locomotion vault: The extra mile
in analyzing vr locomotion techniques, in: Proceedings of the 2021 CHI Conference on
Human Factors in Computing Systems, CHI ’21, 2021. doi:10.1145/3411764.3445319.
[26] F. Grioui, T. Blascheck, Study of heart rate visualizations on a virtual smartwatch, in:
Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology,
VRST ’21, Association for Computing Machinery, New York, NY, USA, 2021. URL: https:
//doi.org/10.1145/3489849.3489913. doi:10.1145/3489849.3489913.
[27] C. Ziemkiewicz, R. Kosara, Beyond Bertin: Seeing the forest despite the trees, IEEE
Computer Graphics and Applications 30 (2010) 7–11. doi:10.1109/MCG.2010.83.
[28] C. Ziemkiewicz, R. Kosara, Implied dynamics in information visualization, 2010, pp.
215–222. doi:10.1145/1842993.1843031.
[29] W. S. Cleveland, R. McGill, Graphical perception: Theory, experimentation, and application
to the development of graphical methods, Journal of the American Statistical Association
79 (1984) 531–554. doi:10.2307/2288400.
[30] J. Talbot, V. Setlur, A. Anand, Four experiments on the perception of bar charts, IEEE
Transactions on Visualization and Computer Graphics 20 (2014) 2152–2160. doi:10.1109/
TVCG.2014.2346320.
[31] J. Díaz, O. Meruvia-Pastor, P. Vázquez, Improving perception accuracy in bar charts
with internal contrast and framing enhancements, in: Proc. of the International Conference
Information Visualisation, 2018, pp. 159–168. doi:10.1109/iV.2018.00037.
[32] D. Skau, R. Kosara, Readability and precision in pictorial bar charts, in: Proc. of the
Eurographics, 2017, pp. 91–95. doi:10.2312/eurovisshort.20171139.
[33] D. Skau, L. Harrison, R. Kosara, An evaluation of the impact of visual embellishments in
bar charts, Computer Graphics Forum 34 (2015) 221–230. doi:10.1111/cgf.12634.
[34] X. Cai, K. Efstathiou, X. Xie, Y. Wu, Y. Shi, L. Yu, A study of the effect of doughnut
chart parameters on proportion estimation accuracy: On the doughnut chart proportion
estimation accuracy, Computer Graphics Forum 37 (2018) 330–312. doi:10.1111/cgf.
13325.
[35] D. A. Szafir, Modeling color difference for visualization design, IEEE Transactions on
Visualization and Computer Graphics 24 (2018) 392–401. doi:10.1109/TVCG.2017.2744359.
[36] L. Zhou, C. D. Hansen, A survey of colormaps in visualization, IEEE Transactions on
Visualization and Computer Graphics 22 (2016) 2051–2069. doi:10.1109/TVCG.2015.
2489649.
[37] A. Neshati, F. Alallah, B. Rey, Y. Sakamoto, M. Serrano, P. Irani, SF-LG: Space-Filling Line
Graphs for Visualizing Interrelated Time-Series Data on Smartwatches, Association for
Computing Machinery, New York, NY, USA, 2021. URL: https://doi.org/10.1145/3447526.
3472040.
[38] H. Havlucu, A. Coşkun, O. Özcan, Designing the next generation of activity trackers for
performance sports: Insights from elite tennis coaches, in: Extended Abstracts of the 2019
CHI Conference on Human Factors in Computing Systems, CHI EA ’19, ACM, 2019, pp.
1–7. doi:10.1145/3290607.3312945.
[39] R. Gouveia, E. Karapanos, M. Hassenzahl, How do we engage with activity trackers?
a longitudinal study of habito, in: Proceedings of the 2015 ACM International Joint
Conference on Pervasive and Ubiquitous Computing, ACM, 2015, pp. 1305–1316. doi:10.
1145/2750858.2804290.
[40] A. Islam, R. Aravind, T. Blascheck, A. Bezerianos, P. Isenberg, Preferences and effectiveness
of sleep data visualizations for smartwatches and fitness bands, in: CHI Conference on
Human Factors in Computing Systems, CHI ’22, ACM, 2022. doi:10.1145/3491102.
3501921.
[41] T. Blascheck, L. Besançon, A. Bezerianos, B. Lee, P. Isenberg, Glanceable visualization:
Studies of data comparison performance on smartwatches, IEEE Transactions on Visualization
and Computer Graphics 25 (2019) 630–640. doi:10.1109/TVCG.2018.2865142.
[42] F. Kerber, S. Gehring, A. Krüger, M. Löchtefeld, Adding expressiveness to smartwatch
notifications through ambient illumination, Int. J. Mob. Hum. Comput. Interact. 9 (2017)
1–14.
[43] A. Colley, J. Häkkilä, T. Lappalainen, Concept design for informative illumination on a
snowboard, in: Proceedings of the 2016 ACM International Joint Conference on Pervasive
and Ubiquitous Computing: Adjunct, UbiComp ’16, ACM, 2016, pp. 872–876. doi:10.1145/
2968219.2968540.
[44] M. Lockyer, L. Bartram, B. E. Riecke, Simple motion textures for ambient afect, in:
Proceedings of the International Symposium on Computational Aesthetics in Graphics,
Visualization, and Imaging, CAe ’11, ACM, 2011, pp. 89–96. doi:10.1145/2030441.2030461.
[45] Little Labs, Inc., Facer – thousands of free watch faces for apple watch, samsung gear s3,
huawei watch, and more, 2014. URL: https://www.facer.io/, last visited: July, 2022.
[46] W. Jackson, Pao, SmartWatch Design Fundamentals, Springer, 2019.
[47] A. Islam, A. Bezerianos, B. Lee, T. Blascheck, P. Isenberg, Visualizing information on watch
faces: A survey with smartwatch users, in: IEEE Visualization Conference (VIS), IEEE
Computer Society Press, 2020, pp. 156–160. doi:10.1109/VIS47514.2020.00038.
[48] A. Islam, T. Blascheck, P. Isenberg, Context Specific Visualizations on Smartwatches, in:
EuroVis 2022 – Posters, The Eurographics Association, 2022. doi:10.2312/evp.20221122.
[49] Ç. Genç, Y. A. Ekmekçioğlu, F. Balcı, H. Ürey, O. Özcan, Howel: A soft wearable with
dynamic textile patterns as an ambient display for cardio training, in: Extended Abstracts
of the 2019 CHI Conference on Human Factors in Computing Systems, CHI EA ’19,
Association for Computing Machinery, New York, NY, USA, 2019, p. 1–6. doi:10.1145/
3290607.3312857.
[50] J. Burstyn, P. Strohmeier, R. Vertegaal, DisplaySkin: Exploring pose-aware displays on a
flexible electrophoretic wristband, in: Proceedings of the Ninth International Conference
on Tangible, Embedded, and Embodied Interaction, TEI ’15, ACM, 2015, pp. 165–172.
doi:10.1145/2677199.2680596.
[51] A. Godbout, J. E. Boyd, Corrective sonic feedback for speed skating: A case study, 2010.
http://hdl.handle.net/1853/49865.
[52] S. Bardot, S. Rempel, B. Rey, A. Neshati, Y. Sakamoto, C. Menon, P. Irani, Eyes-free graph
legibility: Using skin-dragging to provide a tactile graph visualization on the arm, in:
Proceedings of the 11th Augmented Human International Conference, AH ’20, Association
for Computing Machinery, New York, NY, USA, 2020. URL: https://doi.org/10.1145/3396339.
3396344. doi:10.1145/3396339.3396344.
[53] Electronic Arts Inc., Electronic Arts user agreement, 2022. URL: https://www.ea.com/legal/
user-agreement?setLocale=en-us, last visited: June, 2022.
[54] SportBuzzBusiness, 2021. URL: https://www.sportbuzzbusiness.fr/, last visited: October, 2021.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bezerianos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Vuillemot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Isenberg</surname>
          </string-name>
          ,
          <article-title>Visualization in Motion: A Research Agenda and Two Evaluations</article-title>
          ,
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          (
          <year>2022</year>
          ). doi:10.1109/TVCG.2022.3184993.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Chevalier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dragicevic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Franconeri</surname>
          </string-name>
          ,
          <article-title>The not-so-staggering effect of staggered animated transitions on visual tracking</article-title>
          ,
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          <volume>20</volume>
          (
          <year>2014</year>
          )
          <fpage>2241</fpage>
          -
          <lpage>2250</lpage>
          . doi:10.1109/TVCG.2014.2346424.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Bach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Pietriga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fekete</surname>
          </string-name>
          ,
          <article-title>GraphDiaries: Animated transitions and temporal navigation for dynamic networks</article-title>
          ,
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          <volume>20</volume>
          (
          <year>2014</year>
          )
          <fpage>740</fpage>
          -
          <lpage>754</lpage>
          . doi:10.1109/TVCG.2013.254.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Shanmugasundaram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Irani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gutwin</surname>
          </string-name>
          ,
          <article-title>Can smooth view transitions facilitate perceptual constancy in node-link diagrams?</article-title>
          ,
          <source>in: Proc. of the Graphics Interface</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>71</fpage>
          -
          <lpage>78</lpage>
          . doi:10.1145/1268517.1268531.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>