<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Designing Generic Visualisations for Activity Log Data</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Granit Luzhnica</string-name>
          <email>gluzhnica@know-center.at</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Angela Fessl</string-name>
          <email>afessl@know-center.at</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eduardo Veas</string-name>
          <email>eveas@know-center.at</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Belgin Mutlu</string-name>
          <email>bmutlu@know-center.at</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Viktoria Pammer</string-name>
          <email>viktoria.pammer@tugraz.at</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Graz Univ. of Technology, Inst. of Knowledge Technologies</institution>
          ,
          <addr-line>Inffeldgasse 13, A-8010 Graz</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Know-Center</institution>
          ,
          <addr-line>Inffeldgasse 13, A-8010 Graz</addr-line>
        </aff>
      </contrib-group>
      <fpage>11</fpage>
      <lpage>25</lpage>
      <abstract>
        <p>Especially in lifelong or professional learning, the picture of a continuous learning analytics process emerges. In this process, heterogeneous and changing data source applications provide data relevant to learning, while at the same time the questions that learners ask of the data change. This reality challenges designers of analytics tools, as it requires analytics tools to deal with data and analytics tasks that are unknown at application design time. In this paper, we describe a generic visualization tool that addresses these challenges by enabling the visualization of any activity log data. Furthermore, we evaluate how well participants can answer questions about the underlying data given such generic versus custom visualizations. Study participants performed better in 5 out of 10 tasks with the generic visualization tool, worse in 1 out of 10 tasks, and without significant difference compared to the visualizations within the data-source applications in the remaining 4 out of 10 tasks. The experiment clearly showcases that, overall, generic standalone visualization tools have the potential to support analytical tasks sufficiently well.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Reflective learning is invaluable for individuals, teams and institutions to
successfully adapt to the ever-changing requirements placed on them and to continuously
improve. When reflective learning is data-driven, it comprises two stages: data
acquisition and learning analytics. Often, relevant data is data about learner
activities, and equally often, relevant activities leave traces not in a single but
in multiple information systems. For instance, [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] presents an example where
activities relevant for learning about software development might be carried out
in svn, wiki and an issue tracking tool. In the context of lifelong user modeling,
the learning goals and learning environments change throughout life, different
software will be used for learning, while the lifelong (open) user model needs to
store and allow analysis across all collected data [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Furthermore, as ubiquitous
sensing technologies (motion and gestures, eye-tracking, pulse, skin
conductivity, etc.) mature and hence are increasingly used in learning settings, the data
sources for such standalone learning analytics tools will include not only
information systems but also ubiquitous sensors (see e.g., [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] or [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which calls this
“multimodal learning analytics”). Furthermore, it is frequently the case that
concrete data sources, and consequently the questions that users will need to
ask of data (analytic tasks) are not a priori known at the time of designing the
learning analytics tools. In the context of lifelong learning for instance, at the
time of designing a visualization tool, it cannot be foreseen what kind of
software will be used in the future by the learner. In the context of the current trend
towards rather smaller learning systems (apps instead of learning management
systems), it is plausible to assume that learners may wish to change the software
they use regularly (even if only by switching from Evernote to another
note-taking tool). At the extreme end of generic analytics tools are of course
expert tools like SPSS and R, or IBM’s ManyEyes [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] for visualizations.
      </p>
      <p>A picture of a continuous learning analytics process emerges, in which
heterogeneous and ever-changing data source applications provide data relevant
to learning analytics, while at the same time the questions learners ask of that
data also continuously change. To support such a continuous analytics process, we have
developed a generic visualization tool for multi-user, multi-application activity
log data. In this paper, we describe the tool as well as the results of a task-based
comparative evaluation for the use case of reflective workplace learning. The
generic visualization tool integrates data from heterogeneous sources in
comprehensible visualizations. It includes a set of visualizations which are not designed
for specific data source applications, thus the term generic. It can visualize any
activity log data published on its cloud storage. The only prior assumptions are
that every entry in the data should be: i) time stamped and ii) associated with a
user. The tool thus strikes a balance between generality (few prior assumptions)
and specificity.</p>
      <p>One key concern was whether the developed generic visualizations would
be as comprehensible as those designed specifically for a given application or
dataset. In this paper we describe an experiment comparing the performance of
study participants along learning analytics tasks given the generic visualizations
and visualizations custom-designed for the respective data.
</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        Others before us have pointed out the need to collect and analyze data for
learning across users (multi-user) and applications (multi-application), both in the
community of learning analytics and open learner modeling: Learning analytics
measures relevant characteristics about learning activities and progress with the
goal to improve both the learning process and its outcome [
        <xref ref-type="bibr" rid="ref16 ref23">16,23</xref>
        ]. Open learner
models collect and make intelligible to learners and in some use cases also to
peers and teachers data about learning activities and progress as well, again as
basis for reflection on and improvement of learning [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Also in user modeling, the
visualization of data across users is a relevant topic (e.g., [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]). Clearly, relevant
characteristics about learning activities reside very rarely only in a single system,
and both communities have identified a need to collect and analyze data from
heterogeneous data sources [
        <xref ref-type="bibr" rid="ref12 ref18">12,18</xref>
        ]. For instance, in Kay and Kummerfield [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]
a variety of external data sources (mainly health sensor’s data) is used for
aggregation, analysis and visualization (through external applications) to support
completing Sisyphean tasks and achieving long-term goals.
      </p>
      <p>
        Visualizations in learning analytics and open learner modeling play the role
of enabling users (most often students, teachers, but also institutions or
policy makers - cf. [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]) to make sense of given data [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Most papers, however,
predefine the visualizations at design time, in full knowledge of the data sources
[
        <xref ref-type="bibr" rid="ref10 ref14 ref15 ref20 ref21 ref6">6,10,14,15,20,21</xref>
        ]. In Kay et al. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] for instance, teams of students are supported
with open (team) learner models in learning to develop software in teams. The
authors developed a set of novel visualizations for team activity data, and showed
the visualizations’ impact on team performance (learning). In Santos et al. [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ],
student data from Twitter, blog posts and PC activity logs are visualized in a
dashboard. The study shows that such dashboards have a higher impact on
increasing awareness and reflection of students who work in teams than of students
who work alone. Again, data sources are defined prior to developing
visualizations however. In such visualizations, users “simply” need to understand the
given visualizations, but do not need to create visualizations themselves.
On the other end of the spectrum are extremely generic data analytics tools such
as spreadsheets or statistical analysis tools like SPSS or R. Outstanding amongst
such tools is probably IBM’s web-based tool ManyEyes. Users can upload any
data at all in CSV format, label and visualize data. ManyEyes makes no
assumptions at all about uploaded data, but clearly puts the burden of figuring
out what kind of visualizations are meaningful to the users.
3
      </p>
    </sec>
    <sec id="sec-3">
      <title>Generic Visualization Tool for Activity Log Data</title>
      <p>We have developed a generic visualization tool for activity log data that
addresses two fundamental challenges shared in many scenarios at the intersection
of learning analytics, open learner modeling, and reflective learning on the basis
of (activity log) data: Data from multiple applications shall be visualized; and
at the time of designing the visualization tool, the concrete data sources and
consequently the concrete analytic tasks are unknown.
</p>
      <sec id="sec-3-1">
        <title>A Priori Knowledge about Data</title>
        <p>
          We make only two assumptions about data, namely that they are i) time stamped
and ii) every data entry is associated with a user. The second assumption is useful
because in a lot of learning scenarios, learning is social [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]: Students regularly
work in teams as well as employees in organizations (in the context of workplace
learning). Depending on the applications that are used to collect the activity log
data, and the users’ sharing settings, data from other users may be available.
Therefore, it is reasonable to assume that meaningful insights can be gained by
analyzing not only data from one individual but also data from multiple users.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>System Architecture and Data Format</title>
        <p>The generic visualization tool (Vis-Tool) is implemented as a client-server
architecture. It is a web application implemented in HTML5/Javascript and Java.
The Vis-Tool itself does not capture any activity log data, but is responsible for
the visualization of the data in a sophisticated and meaningful way. Through its
server component, it is connected to a cloud storage that stores application data
and manages access to data. The data source applications store their activity log
data on the cloud storage in so-called spaces: Private spaces store data of only
one user, while shared spaces collect data from multiple users. Single sign-on
provides a common authentication mechanism for all data-source applications,
the Vis-Tool and the cloud storage. The rationale behind this chosen architecture
is to deal with data collection, data analysis and data visualization separately.
The Vis-Tool expects data in an XML format described by a publicly available
XML schema. In addition, the schema must extend a base schema that contains
a unique ID for all objects, a timestamp and a user ID as mandatory fields.
a unique ID for all objects, a timestamp and a user ID as mandatory fields.</p>
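The base schema's mandatory fields can be sketched as a minimal data model. The interface and field names below are illustrative, not the actual XML schema; the extension with mood properties is likewise only a hypothetical example of an application-specific schema.

```typescript
// Illustrative sketch of the two prior assumptions about data:
// every entry carries a unique ID, a timestamp and a user ID.
interface BaseLogEntry {
  id: string;        // unique ID for all objects (mandatory)
  timestamp: string; // ISO 8601 date-time (mandatory)
  userId: string;    // user the entry is associated with (mandatory)
}

// Data source applications extend the base schema with their own
// properties, e.g. a mood entry from a mood-tracking application:
interface MoodEntry extends BaseLogEntry {
  valence: number; // feeling bad .. feeling good
  arousal: number; // low energy .. high energy
}

// Checks that an object satisfies the base schema's mandatory fields.
function isValidEntry(e: Partial<BaseLogEntry>): e is BaseLogEntry {
  return typeof e.id === "string" &&
    typeof e.userId === "string" &&
    typeof e.timestamp === "string" &&
    !Number.isNaN(Date.parse(e.timestamp));
}
```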
      </sec>
      <sec id="sec-3-3">
        <title>Single-Column Stacked Visualization</title>
        <p>
          The Vis-Tool organizes visualizations in a dashboard style similar to
[
          <xref ref-type="bibr" rid="ref10 ref15 ref21 ref6">6,21,15,10</xref>
          ], but we use a single column for the visualizations. Visualizations are
always stacked on top of each other and share the same time scale whenever
possible. This makes it possible to directly compare data from different
applications along the very same timeline (see Fig. 1). Users can add charts to their
dashboard using an “Add” button. Charts can be minimized (“-” button) or
completely removed (“x” button) via buttons located at the top right corner of
each chart. The position of each chart can be rearranged using drag and drop.
Thus, users can easily adapt the visualizations to their individual needs.
        </p>
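The shared time scale can be sketched as follows; this is a hypothetical helper, not the actual Vis-Tool code, computing one time domain that spans all stacked datasets so that entries line up on the very same timeline.

```typescript
// Computes the combined time extent of all datasets on the dashboard
// (timestamps as milliseconds since the epoch; names are illustrative).
function sharedTimeDomain(datasets: { timestamp: number }[][]): [number, number] {
  let min = Infinity;
  let max = -Infinity;
  for (const dataset of datasets) {
    for (const entry of dataset) {
      min = Math.min(min, entry.timestamp);
      max = Math.max(max, entry.timestamp);
    }
  }
  return [min, max];
}
```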
      </sec>
      <sec id="sec-3-4">
        <title>Chart Types</title>
        <p>The Vis-Tool provides four types of charts with different visual channels: geo
chart, bar chart, line chart and timeline chart (see Figure 1).</p>
        <p>The geo chart is used for data that contains geo-positions. Besides “latitude”
and “longitude”, the chart also offers “popup header” and “popup text”
as additional visual channels; both are shown in a popup window when
an entry is clicked. The bar chart is available for any data structure. It
provides an “aggregate” channel and an “operator” setting: the “aggregate”
channel defines which data property should be aggregated, while the “operator”
defines how the data is aggregated (count, sum, average, min, max) before
being displayed. The line chart provides “x-axis”, “y-axis”, and “label” (on-hover
text) channels and is available for data with numerical properties. Our timeline chart
is similar to the line chart but has no “y-axis” channel. All charts
have a “group by” channel, which defines how data can be grouped with the help
of colors. For example, if we use a user ID to group the data belonging to one
user, all data captured by this user will be presented in the same color. If
several users are added to a group, all data captured by those users
will be presented in the same color. This feature makes it possible
to easily spot user patterns across applications.</p>
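The “group by” channel can be sketched as a simple key-to-color assignment; the palette and function below are hypothetical, not the Vis-Tool's actual implementation.

```typescript
// Entries sharing a group key (e.g. a user ID, or the ID of a group of
// users) are assigned the same color, so one user's data is recognisable
// across applications. Palette and names are illustrative.
const PALETTE = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728"];

function colorByGroup(groupKeys: string[]): Map<string, string> {
  const colors = new Map<string, string>();
  for (const key of groupKeys) {
    if (!colors.has(key)) {
      // First time we see this group: take the next palette color.
      colors.set(key, PALETTE[colors.size % PALETTE.length]);
    }
  }
  return colors;
}
```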
      </sec>
      <sec id="sec-3-5">
        <title>Mapping Data to Visualizations</title>
        <p>Users create charts in the Vis-Tool by selecting the data they wish to visualize,
selecting a chart type, and then filling the chart’s visual channels. The Vis-Tool,
however, presents only those options to the user that are possible for the given
data. Technically, this is solved with chart matching and channel mapping.</p>
        <sec id="sec-3-5-1">
          <title>Chart Matching</title>
          <p>For each chart type, we created a chart description consisting of a list of all
channels, mandatory as well as optional, and the data types that each channel can
visualize. At runtime, the XML schemas that describe the structure of the user data
are parsed and the data properties, including their data types, are extracted. Based
on the extracted data structures and the available channels per chart, chart matching
is performed. The matching determines whether a chart is able to visualize a dataset
described by a parsed schema. This is done by checking, for each mandatory channel
of the chart, whether the data structure has at least one property whose data type
can be visualized by that channel. For instance, the line chart consists of the x-axis
and y-axis as mandatory channels and the hover text as an optional channel. The
x-axis can visualize numeric and date-time values, the y-axis can handle numeric
values, and the hover text channel can handle numeric values, date-times and strings.
The line chart is therefore available for a parsed data structure if the structure
contains at least one numeric or date-time property for the x-axis and a numeric
property for the y-axis. Since the hover text is optional, it is not relevant for
deciding whether the line chart can present the parsed data structure. For a given
data structure, chart matching is performed for each chart type. Those chart types
that match the given data structure are added to the list of possible chart types
and can be selected.</p>
        </sec>
        <sec id="sec-3-5-2">
          <title>Channel Mapping</title>
          <p>Channel mapping takes place when a user selects one of the available charts.
An initial channel mapping is automatically provided when a chart is added to the
dashboard. Users can remap a property to another chart channel via the
interface.</p>
        </sec>
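Chart matching can be sketched as follows; the chart description for the line chart mirrors the example above, while the type and function names are hypothetical, not the Vis-Tool's actual code.

```typescript
// A chart type matches a parsed data structure if every mandatory
// channel can be filled by at least one property of a compatible type.
type DataType = "numeric" | "datetime" | "string";

interface Channel {
  name: string;
  accepts: DataType[];
  mandatory: boolean;
}

// Chart description for the line chart, as in the example above.
const LINE_CHART: Channel[] = [
  { name: "x-axis", accepts: ["numeric", "datetime"], mandatory: true },
  { name: "y-axis", accepts: ["numeric"], mandatory: true },
  { name: "hover text", accepts: ["numeric", "datetime", "string"], mandatory: false },
];

// `properties` maps property names (extracted from the XML schema) to
// their data types.
function chartMatches(channels: Channel[], properties: Record<string, DataType>): boolean {
  const available = Object.keys(properties).map(k => properties[k]);
  return channels
    .filter(c => c.mandatory)
    .every(c => available.some(t => c.accepts.indexOf(t) >= 0));
}
```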
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Use Case</title>
      <p>
        The Vis-Tool can be used in any use case in which analysis of multi-user and
multi-application activity log data makes sense. A lot of learning analytics and
open learner modeling use cases fall into this category, as argued above. The
task-based comparative evaluation that we subsequently describe and discuss in
this paper assumes a specific use case however. It is one of knowledge workers
who work in a team, carry out a significant amount of their work on desktop
PCs, and spend a significant amount of time traveling. In the sense of reflective
work-integrated learning [
        <xref ref-type="bibr" rid="ref3 ref7">3,7</xref>
        ] knowledge workers would log a variety of aspects
of their daily work, and routinely view the log data in order to gain insights on
their working patterns and change (for the better) their future working patterns.
Concretely, we evaluate the Vis-Tool in comparison to three specific activity log
applications that all have been successfully used and evaluated in the context of
such reflective workplace learning [
        <xref ref-type="bibr" rid="ref17 ref5 ref8">5,8,17</xref>
        ].
      </p>
      <p>
        Collaborative Mood Tracking - MoodMap App [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] - is a collaborative
self-tracking app for mood, based on Russell’s Circumplex Model of Affect [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
Each mood point is composed of “valence” (feeling good - feeling bad) and
“arousal” (high energy - low energy). The mood is entered by clicking on a
bidimensional mood map colored according to Itten’s system [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Context
information and a note can be manually added to the mood, while the timestamp is
automatically stored. Depending on the user’s setting, the inserted mood is kept
private or shared with team members. Mood is visualized on an individual as
well as collaborative level. The MoodMap App has been successfully used in
virtual team meetings to enhance team communication by inducing reflection [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <sec id="sec-4-1">
        <title>http://know-center.at/moodmap/</title>
        <p>
          Example analyses of MoodMap data for workplace learning are to review and
reflect on the development of individual mood in comparison to team mood, and
in relation to other events or activities that happen simultaneously.
        </p>
        <p>
          PC Activity Logging - KnowSelf [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] automatically logs PC activity in
the form of switches between windows (associated with resources like files and
websites as well as applications). Manual project and task recording, as well as
manually inserted notes and comments complete the data captured by the app.
The visualizations are designed to support time management and showcase in
particular the frequency of switching between resources, the time spent in
numerous applications, and the time spent on different activities. KnowSelf has
concretely been used as support for improving time management [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], but
activity logging data has also been used as basis for learning software development
in an educational context [
          <xref ref-type="bibr" rid="ref21 ref22">21,22</xref>
          ]. Example analyses of PC activity log data for
workplace learning are to relate time spent in different applications to job
description (e.g., the role of developer vs. the role of team leader), and to relate
the time spent on recorded projects to project plans.
        </p>
        <p>
          Geo Tagged Notes - CroMAR [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] is a mobile augmented reality application
designed to show data that was tagged with positions around the user’s place.
The information is overlayed on the video feed of the device’s camera. CroMAR
allows users to create geo-tagged data such as notes and pictures. The notes are
stored in the cloud storage. CroMAR has features that are relevant for
reflecting on any working experience with a strong physical nature. It was specifically
developed for reflection on emergency work, in particular in relation to crowd
management. CroMAR has been evaluated in the domain of civil protection to
review, in a location-based manner, what happened during events [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. A typical use case for
knowledge workers would be to reflect both on the content of notes, and their
relation to particular locations (which would typically be in line with customers,
project partners, location-based events, or travel-related locations).
        </p>
        <sec id="sec-4-1-1">
          <title>The Potential Benefit of Combining Data Across Applications</title>
          <p>
            In prior work, we explored the potential benefit of analyzing data from PC
activity logging data together with collaborative mood tracking data in such a
use case [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]. As one example, a user’s mood might drop consistently in relation
to a particular project. In addition, we conjecture that mood might also be
related to particular places, or some kinds of work might be carried out more
productively outside the office.
          </p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Research Approach: Task-Based Evaluation</title>
      <p>We performed an evaluation to compare custom visualizations in the data source
applications (in-app) with generic visualizations (Vis-Tool). The goal was to
establish how the comprehensibility of generic visualizations, designed without
specific prior knowledge about (meaning of) data, compares to custom in-app
visualizations that were customized for a specific kind of data and task.</p>
      <sec id="sec-5-1">
        <title>http://know-center.at/knowself/</title>
        <sec id="sec-5-1-1">
          <title>Data preparation</title>
          <p>We prepared a test dataset with data about three users, called D, L and S,
containing two weeks of data from all applications. To do so, we extracted data
from real usage scenarios of the single applications. For MoodMap, we selected
two weeks of the three most active users out of a four-week dataset. For KnowSelf,
we selected three two-week periods of log data out of a 7-month dataset from
a single user. For CroMAR, we used the dataset from a single user who had
travelled significantly in a two-week period, and manually created two similar
datasets to simulate three users. The data were shifted in time so that all datasets
for all applications and users had the same start time and end time.
</p>
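The final time-shifting step can be sketched as follows; this is a hypothetical helper illustrating how each extracted dataset could be offset so that all datasets share the same start time.

```typescript
// Shifts all entries of one dataset so that its earliest timestamp
// becomes `targetStart` (timestamps as milliseconds; names illustrative).
function shiftToStart<T extends { timestamp: number }>(entries: T[], targetStart: number): T[] {
  if (entries.length === 0) return [];
  let earliest = entries[0].timestamp;
  for (const e of entries) earliest = Math.min(earliest, e.timestamp);
  const offset = targetStart - earliest;
  // Relative spacing between entries is preserved; only the origin moves.
  return entries.map(e => ({ ...e, timestamp: e.timestamp + offset }));
}
```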
        </sec>
        <sec id="sec-5-1-2">
          <title>Evaluation Procedure</title>
          <p>
            The evaluation is intended to test the comprehensibility of generic visualizations
for learning analytics. We wanted to investigate how understandable generic
visualizations are compared to the custom visualizations that are specifically
designed for the data of one specific application. Our initial hypothesis was that the
generic visualizations could be as meaningful as custom visualizations. As we
wanted to rule out confounding factors from different interaction schemes, we
opted to perform the experiment on paper based mock-ups. These were created
from the datasets by capturing screenshots of the in-app visualizations and the
generic ones generated with the Vis-Tool. We prepared short analytics tasks
(see Table 1) that participants should solve with the use of the given
visualizations. The tasks are plausible in line with the chosen use cases (see Section 4)
above, which were constructed based on use cases of knowledge workers that
were previously evaluated in their working environment [
            <xref ref-type="bibr" rid="ref17 ref8">8,17</xref>
            ] as well as use case
exploration of joint data analysis [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]. We simulated the hover effect, clicking,
scrolling and zooming by first letting the participants state the action and then
replacing the current mockup with a new corresponding one.
          </p>
          <p>
            The evaluation followed a within-participants design. For each tool (MoodMap
App (MM), KnowSelf (KS), CroMAR (CM)) we created a number of tasks
(MM=4, KS=4, CM=2). We created variants of each task with different datasets
for each condition (Vis-Tool, in-app). Thus, there were 20 trials per participant
(10 tasks in 2 conditions for each participant). Tasks and tools were randomized
across participants to avoid favoring either. We grouped the tasks by tool and
randomized the order of groups, the tasks within groups and the order of
condition (in-app visualization / generic visualization). The experimenter measured
the duration (time to completion) and real performance for each task.
Additionally, subjective scores of difficulty were measured through self-assessment using
the NASA-TLX workload measure [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ]. The tasks were organized in groups, each
containing tasks with data generated from a single log activity application. Table
1 summarizes the tasks per tool.
          </p>
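The randomization scheme described above can be sketched as follows; the shuffling helper and data layout are illustrative, not the authors' actual procedure script. Each participant receives 20 trials: the tool groups, the tasks within each group, and the condition order are all shuffled.

```typescript
// Fisher-Yates shuffle (returns a new array, input untouched).
function shuffle<T>(xs: T[]): T[] {
  const a = xs.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    const tmp = a[i]; a[i] = a[j]; a[j] = tmp;
  }
  return a;
}

// Task groups per tool, as in the evaluation (MM=4, KS=4, CM=2 tasks).
const TASK_GROUPS: { tool: string; tasks: number[] }[] = [
  { tool: "MM", tasks: [1, 2, 3, 4] },
  { tool: "KS", tasks: [5, 6, 7, 8] },
  { tool: "CM", tasks: [9, 10] },
];

// Builds one participant's trial order: randomized group order, task
// order within groups, and condition order per task.
function trialPlan(): { tool: string; task: number; condition: string }[] {
  const trials: { tool: string; task: number; condition: string }[] = [];
  for (const group of shuffle(TASK_GROUPS))
    for (const task of shuffle(group.tasks))
      for (const condition of shuffle(["in-app", "Vis-Tool"]))
        trials.push({ tool: group.tool, task, condition });
  return trials;
}
```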
          <p>The study followed the format of a structured interview, where the
experimenter first explained the goals, the applications and the tasks participants
would perform. The participant then proceeded to the first task, which concluded
with the NASA-TLX. After finishing each group, a questionnaire was distributed
to directly evaluate the visual design, comprehensibility and user preference of
the in-app visualizations in comparison to the Vis-Tool visualizations.</p>
          <table-wrap id="tab1">
            <label>Table 1</label>
            <caption>
              <p>Tools and evaluation tasks. L, D and S are the initials of the users to whom the data belong.</p>
            </caption>
            <table>
              <thead>
                <tr><th>T#</th><th>App</th><th>Task</th></tr>
              </thead>
              <tbody>
                <tr><td>1</td><td>MM</td><td>On the given day, to whom did the worst single energy (arousal) belong, and to whom did the worst single feeling (valence) belong?</td></tr>
                <tr><td>2</td><td>MM</td><td>On the given day, to whom did the worst average energy (arousal) belong, and to whom did the worst average feeling (valence) belong?</td></tr>
                <tr><td>3</td><td>MM</td><td>Find out on which day in the two recorded weeks the user’s best energy (arousal) and best feeling (valence) were entered!</td></tr>
                <tr><td>4</td><td>MM</td><td>Find out on which days (dates) the MoodMap App was not used at all!</td></tr>
                <tr><td>5</td><td>KS</td><td>On the given day, when exactly (at what time) did the given user have the longest break? How long was the break?</td></tr>
                <tr><td>6</td><td>KS</td><td>Find out on which day in the two recorded weeks L worked longest (regardless of breaks)!</td></tr>
                <tr><td>7</td><td>KS</td><td>Find out which application was most frequently used in the last two weeks by the given user!</td></tr>
                <tr><td>8</td><td>KS</td><td>Find out which user used MS Word most often on the given day!</td></tr>
                <tr><td>9</td><td>CM</td><td>(a) Find out where (in which countries) in Europe notes have been taken! (b) Find out in which cities in Austria L and D took notes!</td></tr>
                <tr><td>10</td><td>CM</td><td>(a) Find out how many notes have been created at Inffeldgasse, Graz! (b) Find out how many notes have been created in Graz!</td></tr>
              </tbody>
            </table>
          </table-wrap>
        </sec>
        <sec id="sec-5-1-3">
          <title>Participants</title>
          <p>Eight people participated in the experiment, all knowledge workers (researchers
and software developers). Three of them were female, five male. Three participants
were aged between 18 and 27, and five between 28 and 37.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Results</title>
      <p>Overall, our study participants performed significantly better with the generic
visualization tool in five (T2, T3, T7, T9, T10) out of ten tasks, worse in only
one (T5) task and without significant difference when compared to the in-app
visualizations in the remaining four (T1, T4, T6, T8) tasks. To analyze the results,
Fisher’s test was used to check the homogeneity of variances. The t-test
was used to test significance for cases with homogeneous variance; otherwise, the
Welch-Satterthwaite test was used.</p>
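The test-selection logic described above can be sketched as follows; this is an illustrative implementation of Welch's t-statistic with Welch-Satterthwaite degrees of freedom, not the authors' actual analysis script.

```typescript
// Sample mean and unbiased sample variance.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}
function variance(xs: number[]): number {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) * (x - m), 0) / (xs.length - 1);
}

// Welch's t-test for two samples with unequal variances; the degrees of
// freedom follow the Welch-Satterthwaite approximation.
function welchT(a: number[], b: number[]): { t: number; df: number } {
  const va = variance(a) / a.length;
  const vb = variance(b) / b.length;
  const t = (mean(a) - mean(b)) / Math.sqrt(va + vb);
  const df = ((va + vb) * (va + vb)) /
    ((va * va) / (a.length - 1) + (vb * vb) / (b.length - 1));
  return { t, df };
}
```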
      <sec id="sec-6-1">
        <title>Workload</title>
        <p>The NASA-TLX includes six metrics, which are considered scales of workload.
We used the simplified R-TLX method to compute workload by averaging the
scores. Figure 3 (MoodMap vs. Vis-Tool), Figure 4 (KnowSelf vs. Vis-Tool)
and Figure 5 (CroMAR vs. Vis-Tool) show the box plots of the significant
results for NASA-TLX metric: mental demand (MD), physical demand (PD),
temporal demand (TD), measured performance (MP) and frustration (F) as
well as the workload (W), computed as the average of all self-evaluation
metrics and the measured performance (MP). Task duration (D) for all apps is
given in Figure 2. The result of the t-test for T2 indicates that participants
experienced significantly less workload when using Vis-Tool than MoodMap,
t(9) = 3.17; p &lt; .01. Also, the task duration was significantly lower in the case
of Vis-Tool, t(9) = 3.18; p &lt; .01. In fact all individual metrics show significantly
better scores in favor of Vis-Tool. For T3, there was significantly less workload
and a significantly shorter duration when using Vis-Tool, t(9) = 2.13; p &lt; .05
and t(9) = 3.44; p &lt; .01, respectively. For T5, there was a significantly lower workload when
using KnowSelf in comparison to Vis-Tool, t(9) = 2.21; p &lt; .05. Individual
metrics show a significant difference in effort and physical demand (see Figure 4). For
T7, except for measured performance (MP), significant differences were found in
every other metric. Participants experienced significantly lower workload using
Vis-Tool, t(9) = 4.60; p &lt; .01. They also spent significantly less time solving the
task with Vis-Tool, t(9) = 3.64; p &lt; .01. In the group CroMAR vs. Vis-Tool,
the results of both tasks show significant differences in favor of the Vis-Tool (see
Figure 5). For T9, there was a significant difference in measured performance,
t(9) = 3.16; p &lt; .02. Individual metrics show a significant difference in mental
demand. For T10, there was significantly less workload when using Vis-Tool,
t(9) = 2.36; p &lt; .04. Analysis of individual metrics showed significant differences
in mental and physical demand. Duration was also significantly different in favor
of Vis-Tool, t(9) = 4.68; p &lt; .01.
</p>
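The simplified R-TLX computation used above reduces to an unweighted mean of the subscale ratings. A minimal Python sketch; the subscale keys and the example ratings are hypothetical:

```python
def rtlx_workload(ratings):
    """Raw TLX (R-TLX): the unweighted mean of the NASA-TLX subscale
    ratings, instead of the weighted scheme of the full NASA-TLX."""
    return sum(ratings.values()) / len(ratings)

# Hypothetical ratings of one participant on a 0-20 scale:
scores = {"MD": 12, "PD": 4, "TD": 9, "P": 6, "E": 10, "F": 7}
workload = rtlx_workload(scores)  # (12+4+9+6+10+7)/6 = 8.0
```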
      </sec>
      <sec id="sec-6-2">
        <title>Application Preferences and Visual Design</title>
        <p>The summarized results of the user preferences regarding the used apps for
solving the given tasks are presented in Table 2. For the tasks T1-T4 and
T9-T10, Vis-Tool was preferred over both MoodMap and CroMAR. For the tasks
T5-T8, the results of Vis-Tool vs. KnowSelf were ambiguous. For T5 and T6,
participants preferred KnowSelf, whereas for the tasks T7 and T8 they opted for the
Vis-Tool. This is consistent with the TLX results, where users performed better using KnowSelf
in T5 but much worse in T7.</p>
        <p>
          The results of the question “How did you like the visual design of the
visualisations for the given tasks?” (see Figure 6) showed a clear preference for
the visual design of the Vis-Tool in comparison to the MoodMap (tasks T1-T4)
and CroMAR (tasks T9-T10). In contrast, for the tasks T5-T8 they preferred
the visual design of KnowSelf over that of the Vis-Tool. Regarding the question
“How meaningful were the given visualizations for the given tasks?” the
participants stated that Vis-Tool visualizations were significantly more meaningful
for the given tasks in comparison to the MoodMap and CroMAR (see Figure 6).
Interestingly, there were no significant results regarding Vis-Tool and KnowSelf.
Overall, the performance of study participants was satisfactory with the
Vis-Tool, showing comparable and mostly even better performance when compared
with in-app visualizations. In many cases, study participants had a significantly
lower workload and were significantly quicker to solve the tasks using generic
visualizations: Participants achieved significantly better results with the
Vis-Tool than with the MoodMap App in two out of four tasks in terms of workload
and time to task completion (T2, T3 - see also Figure 2 and 3), better results
with the Vis-Tool than with KnowSelf in one out of four tasks (T7 - see also
Figure 2 and 4) and better results with the Vis-Tool than with CroMAR in two
out of two tasks in terms of task performance (T9) and workload and duration
(T10 - see also Figure 2 and 5). These results are also confirmed by the answers
of the questions regarding the comprehensibility of the visualizations with regard
to the given tasks (see Table 6).
These results are not a statement on the quality of design of the specific apps
per se. All three used activity logging applications have successfully been used to
induce and support learning in the workplace. Rather, the results are a function
of whether the data source applications have been designed to answer the type
of questions about data that study participants were asked to answer in the
evaluation. The focus of CroMAR, for instance, was on location-related,
augmented-reality-style visualization of geo-tagged data in order to support situated
reflection on events [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. Quite naturally then, its user interface is less conducive
to answering general questions about data. The focus of KnowSelf on the other
hand was to support users in reviewing their time use daily in order to support
time management [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. This is visible in the comparative results which show
a strong task dependence: Participants find it easier to perform the task that
relates to a single day (T5) with KnowSelf than with the Vis-Tool, but find the
Vis-Tool more supportive in a task that relates to a longer period of time (T7).
Another example of generic visualizations adding benefit to in-app visualizations
is that the data source applications had different support for multiple users:
KnowSelf is a purely single-user application; nonetheless, there is a plausible
interest within teams to know how others in the team use their time. CroMAR
visualizes data from multiple users but does not visually mark which data comes
from which user, and the MoodMap App is a genuinely collaborative tracking application.
Our study results therefore clearly showcase that, and how, generic visualizations
can add benefit to in-app visualizations when users want to solve analytic tasks
beyond those that were known at application design time.
</p>
      </sec>
      <sec id="sec-6-3">
        <title>Visualizing Derived Data Properties</title>
        <p>A limitation of the current implementation of the Vis-Tool is that it can
only display given properties, but cannot calculate new values. For instance,
in KnowSelf, the data entries contain the start and the end time but not the
duration. The visualizations in KnowSelf make use of such derived data properties:
As KnowSelf developers know exactly what kind of data were available, they
could also easily implement calculations based on given data and use these for
visualizations. In the Vis-Tool on the other hand, we have in general too little
prior knowledge about data to automatically perform meaningful calculations
on data in order to compute “derived data properties”. Technically, it would be
possible to extend the Vis-Tool’s user interface such that calculations on given
data can be specified, but we assume that ease of use would be rather difficult to
achieve. In addition, such functionality would increasingly replicate very generic
spreadsheet (e.g., Excel), statistical analysis (e.g., SPSS) or visualization (e.g.,
ManyEyes) functionality. It might be easier overall to shift the burden “back”
to data source applications, in the sense of requiring the data source applications
themselves to provide the derived values that are of interest.</p>
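For the duration example above, shifting this burden back to the data source is cheap: the derived property is a one-line computation over the start and end timestamps. A minimal Python sketch; the field names are hypothetical, not KnowSelf's actual export format:

```python
from datetime import datetime

def with_duration(entry):
    """Return a copy of an activity log entry augmented with a derived
    `duration` property (in seconds), computed from start and end times."""
    start = datetime.fromisoformat(entry["start"])
    end = datetime.fromisoformat(entry["end"])
    return {**entry, "duration": (end - start).total_seconds()}

entry = {"user": "u1", "start": "2014-05-12T09:00:00", "end": "2014-05-12T09:45:00"}
derived = with_duration(entry)  # adds "duration": 2700.0 seconds
```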
      </sec>
      <sec id="sec-6-4">
        <title>Ease of Interaction</title>
        <p>In this work we have focused on the comprehensibility of visualizations. We did
not formally evaluate the user interaction itself, i.e. the process of creating a
specific visualization. However, we are aware that the Vis-Tool requires users to
become familiar with concepts such as mappings and visual channels.
A plausible emerging scenario is to differentiate between two user roles: One role
(expert) would be responsible for creating a set of meaningful visualizations.
The expert would know concretely which data source applications are available
and what kind of analytic tasks users will want to solve. This person does not
need to write code, but needs to have some training or experience with the
Vis-Tool. The set of meaningful visualizations would be stored and serve as
pre-configuration for learners. A second role (learner) would then only need to
load a pre-configured set of visualizations and “use” them, similar to the study
participants in the task-based evaluation discussed in this paper. Of course,
users would have the freedom to explore the mapping interface if interested, and
generate new visualizations. Based on this overall scenario, more complex usage
scenarios for generic visualization tools like ours could be elaborated that involve
for instance sharing and recommending dashboards.
</p>
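The expert/learner split sketched above implies that a set of visualizations is just data: a serialisable configuration that maps data properties onto visual channels, which an expert stores and a learner loads. A minimal Python sketch of such a pre-configuration; the structure and field names are hypothetical, not the Vis-Tool's actual format:

```python
import json

# Hypothetical preset an "expert" might store for learners: each
# visualization maps data properties onto visual channels.
preset = {
    "name": "Team mood over time",
    "visualizations": [
        {"chart": "line", "mapping": {"x": "timestamp", "y": "valence", "color": "user"}},
        {"chart": "scatter", "mapping": {"x": "valence", "y": "arousal", "color": "user"}},
    ],
}

def load_preset(serialized):
    """A learner only loads and uses the stored configuration."""
    return json.loads(serialized)

restored = load_preset(json.dumps(preset))
```

Because the preset is plain data, sharing and recommending dashboards amounts to exchanging such documents.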
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Conclusion</title>
      <p>We have developed a generic visualisation tool for activity log data that
addresses two fundamental challenges shared in many scenarios at the intersection
of learning analytics, open learner modelling, and workplace learning on the
basis of (activity log) data: Data from multiple applications shall be visualised;
and at the time of designing the visualisation tool, the concrete data sources
and consequently the concrete analytic tasks are unknown. The Vis-Tool makes
only two assumptions about data, namely that they are time-stamped and are
associated with users. The comprehensibility of the Vis-Tool's visualisations was
evaluated in an experiment along data analytics tasks that were designed on the
background of workplace learning. This evaluation was carried out within the
target user group of knowledge workers, and based on real-world data. It thus
constitutes firm ground, also for other researchers, against which to compare the
suitability of other generic visualisations, or from which to proceed with the next
step in the design process for such a generic visualisation tool, namely the design
of the user interaction process.</p>
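The two assumptions the Vis-Tool makes about data can be stated as a one-line check. A minimal Python sketch; the key names are illustrative, not the tool's actual schema:

```python
def is_visualisable(record):
    """The Vis-Tool's two assumptions about activity log data:
    every record is time-stamped and associated with a user."""
    return "timestamp" in record and "user" in record

ok = is_visualisable({"timestamp": "2014-05-12T09:00:00", "user": "u1", "mood": 0.7})
```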
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>The project “MIRROR - Reflective learning at work” is funded under the FP7 of the
European Commission (project nr. 257617). The Know-Center is funded within the
Austrian COMET Program - Competence Centers for Excellent Technologies - under
the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology,
the Austrian Federal Ministry of Economy, Family and Youth and by the State of
Styria. COMET is managed by the Austrian Research Promotion Agency FFG.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>A.</given-names>
            <surname>Bandura</surname>
          </string-name>
          .
          <article-title>Social Learning Theory</article-title>
          . General Learning Press, New York,
          <year>1977</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>P.</given-names>
            <surname>Blikstein</surname>
          </string-name>
          .
          <article-title>Multimodal learning analytics</article-title>
          .
          <source>In Proceedings of the Third International Conference on Learning Analytics and Knowledge</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>D.</given-names>
            <surname>Boud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Keogh</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Walker</surname>
          </string-name>
          .
          <source>Reflection: Turning Experience into Learning</source>
          , pages
          <fpage>18</fpage>
          -
          <lpage>40</lpage>
          . Routledge Falmer, New York,
          <year>1985</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>S.</given-names>
            <surname>Bull</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Kay</surname>
          </string-name>
          .
          <article-title>Student models that invite the learner: The smili:open learner modelling framework</article-title>
          .
          <source>International Journal of Artif. Intell. in Education</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>M.</given-names>
            <surname>Divitini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mora</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Boron</surname>
          </string-name>
          .
          <article-title>CroMAR: Mobile augmented reality for supporting reflection on crowd management</article-title>
          .
          <source>Int. J. Mob. Hum. Comput. Interact.</source>
          ,
          <volume>4</volume>
          (
          <issue>2</issue>
          ):
          <fpage>88</fpage>
          -
          <lpage>101</lpage>
          , Apr.
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>E.</given-names>
            <surname>Duval</surname>
          </string-name>
          .
          <article-title>Attention please!: learning analytics for visualization and recommendation</article-title>
          .
          <source>In Proceedings of the 1st International Conference on Learning Analytics and Knowledge</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>M.</given-names>
            <surname>Eraut</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Hirsh</surname>
          </string-name>
          .
          <article-title>The Significance of Workplace Learning for Individuals, Groups and Organisations</article-title>
          ,
          <source>SKOPE Monograph 9</source>
          , Oxford University Department of Economics,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>A.</given-names>
            <surname>Fessl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Rivera-Pelayo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pammer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Braun</surname>
          </string-name>
          .
          <article-title>Mood tracking in virtual meetings</article-title>
          .
          <source>In Proceedings of the 7th European conference on Technology Enhanced Learning, EC-TEL'12</source>
          , pages
          <fpage>377</fpage>
          -
          <lpage>382</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>A.</given-names>
            <surname>Fessl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wesiak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Luzhnica</surname>
          </string-name>
          .
          <article-title>Application overlapping user profiles to foster reflective learning at work</article-title>
          .
          <source>In Proceedings of the 4th Workshop on Awareness and Reflection in Technology-Enhanced Learning (Colocated with ECTEL)</source>
          , volume
          <volume>1238</volume>
          <source>of CEUR Workshop Proceedings</source>
          , pages
          <fpage>51</fpage>
          -
          <lpage>64</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>S.</given-names>
            <surname>Govaerts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Klerkx</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Duval</surname>
          </string-name>
          .
          <article-title>Visualizing activities for selfreflection and awareness</article-title>
          .
          <source>In Advances in Web-Based Learning (ICWL 2010)</source>
          , volume
          <volume>6483</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>91</fpage>
          -
          <lpage>100</lpage>
          .
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>J.</given-names>
            <surname>Itten</surname>
          </string-name>
          . Kunst der Farbe. Otto Maier Verlag, Ravensburg, Germany,
          <year>1971</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>J.</given-names>
            <surname>Kay</surname>
          </string-name>
          .
          <article-title>Lifelong learner modeling for lifelong personalized pervasive learning</article-title>
          .
          <source>IEEE Transactions on Learning Technologies</source>
          ,
          <volume>1</volume>
          (
          <issue>4</issue>
          ):
          <fpage>215</fpage>
          -
          <lpage>228</lpage>
          , Oct.
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>J.</given-names>
            <surname>Kay</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Kummerfeld</surname>
          </string-name>
          .
          <article-title>Bringing together sensor data, user goals and long term knowledge to sup port sisyphean tasks</article-title>
          .
          <source>In Workshop on Hybrid Pervasive/Digital Inference (HPDI</source>
          <year>2011</year>
          ),
          <source>Colocated with Pervasive</source>
          <year>2011</year>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>J.</given-names>
            <surname>Kay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Maisonneuve</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Yacef</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Reimann</surname>
          </string-name>
          .
          <article-title>The big five and visualisations of team work activity</article-title>
          .
          <source>Intelligent tutoring systems</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>D.</given-names>
            <surname>Leony</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pardo</surname>
          </string-name>
          , L. de la Fuente Valentín, D. S. de Castro, and
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Kloos</surname>
          </string-name>
          .
          <article-title>Glass: A learning analytics visualization tool</article-title>
          .
          <source>In International Conference on Learning Analytics and Knowledge</source>
          ,
          <source>LAK '12</source>
          , pages
          <fpage>162</fpage>
          -
          <lpage>163</lpage>
          . ACM,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>X.</given-names>
            <surname>Ochoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Suthers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbert</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Duval</surname>
          </string-name>
          .
          <article-title>Analysis and reflections on the third learning analytics and knowledge conference (lak 2013)</article-title>
          .
          <source>Journal of Learning Analytics</source>
          ,
          <volume>1</volume>
          (
          <issue>2</issue>
          ):
          <fpage>5</fpage>
          -
          <lpage>22</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>V.</given-names>
            <surname>Pammer</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Bratic</surname>
          </string-name>
          .
          <article-title>Surprise, surprise: Activity log based time analytics for time management</article-title>
          .
          <source>In CHI '13 Extended Abstracts on Human Factors in Computing Systems, CHI EA '13</source>
          , pages
          <fpage>211</fpage>
          -
          <lpage>216</lpage>
          . ACM,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>A.</given-names>
            <surname>Pardo</surname>
          </string-name>
          and
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Kloos</surname>
          </string-name>
          .
          <article-title>Stepping out of the box: Towards analytics outside the learning management system</article-title>
          .
          <source>In Proceedings of the 1st International Conference on Learning Analytics and Knowledge</source>
          ,
          <source>LAK '11</source>
          , pages
          <fpage>163</fpage>
          -
          <lpage>167</lpage>
          . ACM,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Russell</surname>
          </string-name>
          .
          <article-title>A circumplex model of affect</article-title>
          .
          <source>Journal of personality and social psychology</source>
          ,
          <volume>39</volume>
          (
          <issue>6</issue>
          ):
          <fpage>1161</fpage>
          ,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>J.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Govaerts</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Duval</surname>
          </string-name>
          .
          <article-title>Visualizing PLE usage</article-title>
          .
          <source>EFEPLE11 Workshop on Exploring the Fitness and Evolvability of Personal Learning Environments</source>
          , pages
          <fpage>34</fpage>
          -
          <lpage>38</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Govaerts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbert</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Duval</surname>
          </string-name>
          .
          <article-title>Goal-oriented visualizations of activity tracking: A case study with engineering students</article-title>
          .
          <source>In Proceedings of the 2Nd International Conference on Learning Analytics and Knowledge</source>
          ,
          <source>LAK '12</source>
          , pages
          <fpage>143</fpage>
          -
          <lpage>152</lpage>
          . ACM,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Verbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Govaerts</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Duval</surname>
          </string-name>
          .
          <article-title>Addressing learner issues with stepup!: An evaluation</article-title>
          .
          <source>In Proceedings of the Third International Conference on Learning Analytics and Knowledge</source>
          ,
          <source>LAK '13</source>
          , pages
          <fpage>14</fpage>
          -
          <lpage>22</lpage>
          . ACM,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <given-names>G.</given-names>
            <surname>Siemens</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Long</surname>
          </string-name>
          .
          <article-title>Penetrating the Fog: Analytics in Learning and Education</article-title>
          .
          <source>EDUCAUSE Review</source>
          ,
          <volume>46</volume>
          (
          <issue>5</issue>
          ):
          <fpage>30</fpage>
          -
          <lpage>32</lpage>
          +, Sept.
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Tang</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Kay</surname>
          </string-name>
          .
          <article-title>Lifelong user modeling and meta-cognitive scaffolding: Support self monitoring of long term goals</article-title>
          .
          <source>In UMAP Workshops'13</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <given-names>F. B.</given-names>
            <surname>Viegas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wattenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>van Ham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kriss</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>McKeon</surname>
          </string-name>
          .
          <article-title>Manyeyes: A site for visualization at internet scale</article-title>
          .
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          ,
          <volume>13</volume>
          (
          <issue>6</issue>
          ):
          <fpage>1121</fpage>
          -
          <lpage>1128</lpage>
          , Nov.
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>