<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>TURIZAM - International Scientific Journal</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1016/j.intcom.2012.07.002</article-id>
      <title-group>
        <article-title>Comic Experience: Narrative &amp; Collaborative Drawing on a Multi-Touch Table in an Art Museum</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Christina Niederer, Stefanie Größbacher, Wolfgang Aigner, Peter Judmaier and Markus Seidl, Institute of Creative</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2016</year>
      </pub-date>
      <volume>24</volume>
      <issue>5</issue>
      <fpage>219</fpage>
      <lpage>226</lpage>
      <abstract>
        <p>Most art museums provide audio guides or, more recently, multi-media guides, with static context such as background information to enrich their exhibits with an extra layer of content. Usually, no actual interaction with the museum's exhibits is possible, no hands-on experience that fosters a deeper cognitive engagement. The integration of multi-touch tables has great potential for collaborative experiences. We designed a touch table application that allows for collaborative and active drawing experiences and conducted two usability studies, one in a laboratory setting and one in the field. The design study was structured in three phases: domain and problem analysis, user experience and interface design, and evaluation. The results show that the collaborative aspect - drawing on one picture simultaneously in different personal areas - was accepted and praised by the visitors. The study indicates that museums with mostly passive viewable artefacts can profit from interactive and collaborative content, which enhances the general experience in their exhibitions.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        In art museums the exhibition design is limited, because
their focus is on displaying collections of objects such as
paintings, sculptures, multimedia works, and installations.
Usually, there are hardly any opportunities for visitors to
interact with artefacts or other visitors, other than discussing
exhibited objects. Most art museums try to increase their
visitors’ interactivity by handing out handheld devices providing
static content like audio tours [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Multi-touch technology,
in combination with appropriate interaction design concepts,
allows for true interactivity between visitors and the exhibition
objects. The presented research examines the emerging role of
interactivity with exhibition objects by developing a
collaborative drawing and viewing application running on a multi-touch
table and a web application for smartphones. The collaborative
drawing and viewing application adds interactive elements
in accordance with the visitors’ desire for self-expression. The
interactive comic experience specifically developed for the
Karikaturmuseum Krems makes drawing easier for visitors,
actively engages them with the drawing styles of exhibited artists,
and allows collaboration with other visitors, even outside of
the context of the museum.
      </p>
      <p>Based on a user-centered design approach we conducted
a design study to investigate whether using the collaborative
drawing application introduces novel user behaviors or social
interactions. Moreover, we studied how digital brushes have
to be designed and implemented for strokes performed by
human fingers in order to work well on the touch surface,
independently of the target group and its drawing skills. To
answer these questions, we applied an ensemble of research
methods: First, we identified users’ needs and created personas
and scenarios. The needs were then taken into account while
developing paper prototypes and the interactive application.
Furthermore, two usability studies (one in a lab environment,
one in the field) were conducted to evaluate the multi-touch
application in general, and the user interface in particular.</p>
      <p>In the next section we discuss Related Work dealing with
multi-touch and multi-user approaches in museums,
participatory projects and drawing applications. The section
Design Study describes the research methods used during the
development of the application and the application’s features.
In the section Evaluation we give details of the procedure,
participants and test results of the conducted studies. In the
section Discussion we summarize and interpret our findings
of the two usability studies. Finally, we discuss possible
directions for future research in the section Conclusion.</p>
    </sec>
    <sec id="sec-3">
      <title>II. RELATED WORK</title>
      <p>
        Large-scale table-top devices have already demonstrated
their great potential in public use of interactivity and
collaboration in the past. In 2002, the project SmartSkin [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] investigated
a new sensor architecture for making interactive surfaces
sensitive to human hand and finger gestures. Besides technical
achievements, the study of Rekimoto [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] also reported new
insights into interaction techniques using multiple fingers. One
year later, a study with Diamond Touch was conducted by
Dietz and Leigh [4]. They proposed a touch-sensitive input
device which allows multiple, simultaneous users to interact in
an intuitive fashion. Nowadays, multi-touch table-top devices
can be found in various locations such as airports, information
centers, retail stores, and museums [5], [6]. To provide an
overview of work related to our problem domain, we focused
on multi-touch and multi-user table-top applications,
participatory projects, as well as drawing applications.
      </p>
      <sec id="sec-3-1">
        <title>A. Multi-user Table-Top Applications</title>
        <p>The Museum of Science and Technology in Islam [7]
demonstrates 1500 years of history of Muslims on a large
multi-touch table. Visitors can simultaneously interact with
the application and create a social learning experience.
Furthermore, Horn et al. [8] conducted a survey at the Harvard
Museum of Natural History, showing that visitors collaborate
effectively and engage in on-topic discussions of the
exhibition. They presented a design and evaluation of a
tabletop multi-user game to help visitors learn more about
evolution. The multi-touch table installation of Hornecker [9]
in the Berlin Museum of Natural History demonstrates that
information-browsing applications may be inappropriate for
a museum’s context, as it was not used much and hardly
provided discussion topics. The potential of interactive
tabletops was not exploited satisfactorily.</p>
        <p>Multi-user scenarios can also be found in other areas besides
a museum’s context. Blumenstein et al. [10] have described
inter alia general requirements and challenges for multi-user
and multi-device scenarios from the perspective of interactive
data visualization.</p>
      </sec>
      <sec id="sec-3-3">
        <title>B. Participatory Projects</title>
        <p>Ideum [11] developed a photo kiosk for the Crystal Bridges
Museum of American Art in conjunction with the exhibition
Warhol’s Nature and Jamie Wyeth. The participatory aspect of
the project was that visitors were able to capture their own
photograph and then choose different style elements to apply
to their photograph based on the works of the two American
artists. After styling their photograph, users could send it via
email to either themselves or others.</p>
        <p>Moreover, the Indianapolis Museum of Art (IMA) developed
a number of participatory projects [12]–[14] that allow visitors
to contribute to the museum experience by creating their own
content and sharing it with the public. In 2013 the IMA
launched a drawing competition with the Matisse, Life in Color
exhibition encouraging visitors to create drawings inspired
by the works of the French artist [12]. This concept was based
on an app available on a number of iPads installed at
the exhibition entrance. The created drawings could then be
submitted via the app to a provided competition website, where
people could view submissions and rate and comment on the
drawings. IMA stated that this participatory project worked
well, because visitors could see themselves and/or their works
represented within the network.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>C. Drawing Applications</title>
      <p>There are a number of drawing applications on the World
Wide Web where users can draw on their own device and
then share it with others or draw collaboratively over the
web. Awwapp [15] and sketchpad [16] are two well-known
examples. Awwapp offers collaborative drawing by connecting
through the Internet. The available functions are very basic
but effective. Basic functions that most of the applications
include are a pencil with different sizes and color, an eraser,
texts, and sometimes images that can be placed. Deleting the
whole image, as well as saving and sharing it, are additional
functions. Drawing applications on multi-touch tables can be
found in domains like design, in the form of a brainstorming
tool [17] or in educational organizations [18], [19]. Partarakis
et al. [20] presented a painting game for pupils, introducing
physical objects to a large touchscreen. The aim of this
installation was to teach drawing techniques to pre-schoolers.</p>
      <p>Besides the work of Partarakis et al. [20], the usage of
drawing applications on multi-touch tables in a museum’s
context has not been investigated. In particular, there is a
lack of research on the integration of such applications in
art museums with a focus on collaboration
and participation.</p>
    </sec>
    <sec id="sec-6">
      <title>III. DESIGN STUDY</title>
    </sec>
    <sec id="sec-7">
      <title>Our design study is divided into three parts: gathering information to deduce requirements, conceptual design for an easy-to-use interactive comic experience, and evaluating the application to identify problems.</title>
      <sec id="sec-7-1">
        <title>A. Requirements Research</title>
        <p>The first step in the process of defining requirements was
to collect qualitative data about the potential users of the
museum. In this research phase real world observations and
interviews were conducted. After collection, the information
was modeled in the form of personas. In the final stage, scenarios
were developed to define the requirements.</p>
        <p>1) Observations &amp; Interviews: First, we
visited the museum to gain insights by interviewing the museum’s
employees and making observations: what does
the exhibition area look like, what is the average exhibition
period, and who are the visitors? Four employees of different
functions were interviewed: the director of the museum, a
cashier, and two museum warders. The conversations took
roughly 15 minutes. The museum provided all the data they
had already collected about their visitors over the years.
During the observations we also analyzed published advertising
materials, the gift shop and the guestbook. The document
analysis showed that the guestbook is full of sketches and little
cartoons, showing the visitors’ desire to express themselves
not only graphically, but also by relating their drawings to the
context of the museum’s exhibition.</p>
        <p>2) Personas &amp; Scenarios: Based on the interviews and
observations three personas [21] were created: an older married
couple, a class of high school juniors, and a young man in his
twenties. These personas became the main characters of the
developed scenarios. The scenarios describe their visits to the
museum, why they go there in the first place, how they act
in the museum and how they react to and interact with the
table. Storyboards were created to illustrate the scenarios
(Figure 1). The output of this process was the requirement
definition.</p>
        <p>3) Results for Requirements: The target group of the
museum includes nearly every age group (young children as
well as retirees), and different social groups (tourists, students,
regulars). Regarding the touch table the target group is reduced
to people interested in technology. The list below presents user
requirements for the target domain:
• Expressing themselves graphically: The paper
guestbook shows that visitors express themselves by drawing
funny sketches based on the exhibition topic.
• Collaborative work: Sketches in the paper guestbook are
often drawn by more than one person.</p>
        <p>• Self-representation: The drawings in the paper
guestbook, which any visitor of the museum can flip through,
are nearly always signed.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>B. Design</title>
      <p>Paper prototypes showing the concept of the drawing
application and the first design of the look-and-feel of the
application were developed. In the next section we describe
the final interactive prototype running on a multi-touch table.</p>
      <p>The core concept of the drawing experience is its narrative
aspect. In the case of the Karikaturmuseum Krems, the
visitors get the chance to become storytellers by drawing
panels for a collaboratively created comic. The collaboration
is not limited to the museum context because user-generated
drawings are exhibited both on the interactive table and the
web application, once the comic is finished. Visitors who do
not wish to draw in public and have their work presented
publicly can interact with the table by flipping through
completed stories. This way, the multi-touch table caters not only
to the needs of visitors who wish to be actively involved in
the exhibition, but also to those who prefer to passively take
in the art presented.</p>
      <p>Start Screen: As proposed and evaluated by Klinkhammer
et al. [22], we divided the whole screen into four personal
working areas seen in Figure 2, where the user can interact
with the tool. The main screen contains elements for drawing
comics (pencils) and one element showing already finished
artwork (book).</p>
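      <p>The routing of touches to the four personal working areas described above can be sketched as follows. This is an illustrative sketch only, not the original ActionScript implementation; the quadrant layout and function name are assumptions based on the screen division proposed by Klinkhammer et al. [22].</p>

```python
# Sketch: map a touch point on the 1920 x 1080 table surface to one
# of the four personal working areas, assuming one area per quadrant.
TABLE_W, TABLE_H = 1920, 1080

def working_area(x: float, y: float) -> int:
    """Return the index (0-3) of the personal working area for a touch."""
    col = 1 if x >= TABLE_W / 2 else 0
    row = 1 if y >= TABLE_H / 2 else 0
    return row * 2 + col

# A touch near the top-left corner lands in area 0,
# one near the bottom-right corner in area 3.
```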
      <p>These two features are included to cover the needs of
different forms of participation in museums identified by
Simon [23]. On the one hand, the visitors, who are “creators”,
can produce content by drawing panels for a comic and, on
the other hand, the so called “spectators” read and discover
finished comics.</p>
      <p>To start the application, the user has to drag one element
into their personal working area. The selected element then
pops up in the chosen area and the user can then either start
drawing a panel for one of the provided stories, or look at
completed comics (Figure 2). Figure 3 shows the provided
stories, which are based on famous drawings by Deix, a
well-known Austrian cartoonist. Furthermore, the user chooses a
story they want to contribute to.</p>
      <p>Collaboration Concept: After choosing a story the users
have to decide if they want to draw on their own or
collaboratively in a group (Figure 4). To work collaboratively, the
system provides the possibility to draw individually on two
different working areas. Each user sees what the other draws in
their own working area. To do so, the user needs to share their
story before starting to draw. The chosen story then reappears
on the main screen, giving other visitors the chance to join
this work. If users choose to work on a comic, they get to see
the last three pictures that have been drawn for the selected
story in the form of a carousel. This way, users get a glimpse
of how the story developed so far, without telling everything
that has been going on up to this point. By not knowing the
whole story the comics should turn out more compelling. This
concept was used to encourage the creativity of every user and
to get interesting stories.</p>
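      <p>The "glimpse" rule above, showing only the most recent panels of a story when a user joins, can be sketched as a simple selection over a story's panel list. The data model and names are assumptions, not the paper's implementation.</p>

```python
# Sketch: when joining a story, the carousel shows only the last three
# panels drawn so far, so the full story remains unknown to the drawer.
def carousel_preview(panels: list, count: int = 3) -> list:
    """Return the most recent `count` panels of a story, oldest first."""
    return panels[-count:]

story = ["panel-1", "panel-2", "panel-3", "panel-4", "panel-5"]
preview = carousel_preview(story)  # only the three newest panels
```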
      <p>Sketching: The sketching part of the system provides a
drawing application with various tools: brushes, balloons, text
areas, an eraser and the functionality of undo (Figure 5).</p>
      <p>The basis for the design of the brush implemented on the
multi-touch table was the analogue drawing behaviour with
a pen. To make it easy to use for the broad visitor audience
we integrated one type of brush. The line style of the brush
is comparable to a felt pen. To vary the type of brush, a
thickness slider with a preview area and a colour palette were
implemented. The colours depend on the story the user has
chosen.</p>
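      <p>A minimal data model for the single felt-pen-style brush described above might look as follows: a stroke is a polyline of touch points plus the thickness chosen via the slider and a colour from the story-dependent palette. All names and palette values are assumptions for illustration; the prototype itself was written in ActionScript.</p>

```python
# Sketch: a brush stroke as recorded from finger touches, with a
# story-dependent colour palette (hypothetical example values).
STORY_PALETTES = {
    "king of the cats": ["#000000", "#c0392b", "#f1c40f"],
    "women on the beach": ["#000000", "#2980b9", "#e67e22"],
}

class Stroke:
    def __init__(self, color: str, thickness: float):
        self.color = color          # chosen from the story's palette
        self.thickness = thickness  # set via the thickness slider
        self.points = []            # polyline of touch positions

    def add_point(self, x: float, y: float):
        self.points.append((x, y))

stroke = Stroke(STORY_PALETTES["king of the cats"][1], thickness=6.0)
stroke.add_point(10, 10)
stroke.add_point(12, 14)
```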
      <p>The interaction concept for adding text elements and
bubbles is based on known concepts of graphic applications such
as Adobe InDesign or Photoshop. The text box appears on the
surface and users are able to drag and drop the box into the
place of the picture where they want it to be.</p>
      <p>Related to the text input methods, we decided to integrate
a soft keyboard based on the QWERTY approach [24]. The
physical keyboard elements are mapped to the on-screen
keyboard. The touch elements have a square shape and their
size was adapted to be finger-friendly.</p>
      <p>Fig. 5: Drawing Interface</p>
      <p>Following the Story: After finishing a drawing, users can
sign the comic panel by filling out a form with a name,
residence and an email address. Then users can see their
picture lined up with the previous panels. This allows the
visitors to see how the drawing just finished integrates into
the whole comic strip.</p>
      <p>By scanning the provided QR-Code on the multi-touch
table, visitors can take the story home with their personal
smartphone seen on Figure 6. The QR-Code leads to a mobile
web application, which links to the comic the users took part
in at the museum. This way, visitors stay in contact with the exhibition
and the collaborative aspect does not end when leaving the
museum. The integrated QR-Code does not provide extra
information about the exhibited artworks in the museum [25], but
rather complements the mobile website.</p>
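      <p>One way the QR code could tie a table session to the mobile web application is by encoding a URL carrying the story's identifier, so the phone opens the comic the visitor contributed to. The URL scheme, domain, and parameter name below are hypothetical; the paper does not specify them.</p>

```python
# Sketch: build the link encoded in the QR code for a given story.
# BASE_URL and the "story" parameter are assumptions for illustration.
from urllib.parse import urlencode

BASE_URL = "https://example.org/comics"  # hypothetical mobile web app

def story_link(story_id: int) -> str:
    """Return the URL a visitor's phone opens after scanning the code."""
    return BASE_URL + "?" + urlencode({"story": story_id})
```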
      <p>Fig. 6: Interface Design for scanning the QR-Code</p>
      <p>Reading Finished Comics: The application on the touch
table also provides the possibility to look through finished
comic strips from other visitors. Thus, the visitors can get
an idea of the stories and inspiration for their own sketching
work. The interface is arranged similarly to the drawing area,
as seen in Figure 7. On the left, there is a tool bar showing the
different stories. In the main area, different versions of one
selected story are listed.</p>
      <sec id="sec-9-1">
        <title>C. Prototype</title>
        <p>The prototype was developed for a 40 inch framed high
definition (1920 x 1080 px) table-top, including infrared
tracking to discern the touch points. Up to four museum visitors
may use the application simultaneously. The application was
developed for a multi-touch and multi-user approach and
combines a touch table with mobile devices (Figure 8). The
system is an interactive installation where visitors can do
creative, graphical and collaborative comic storytelling. On the
one hand, the users may sketch a drawing and become part of
a bigger story and, on the other hand, they can look through
already completed artworks by other users.</p>
        <p>The research about different technologies showed that Flash
(ActionScript 3.0), together with the framework Open Exhibits
(http://openexhibits.org/) for recognizing gestures, was the
most suitable system for us on the touch table. At the time of
prototype implementation, Flash had a large community and
was well documented. Furthermore, we chose this platform for our
comic experience application because of prior experience with
Flash on multi-touch tables in earlier projects, concerning
stability and easy installation on Windows PCs.</p>
        <p>Our application supports up to four simultaneous users and
the process of drawing needs sensitive reactions by the system.
The gesture framework Open Exhibits provides the advantages
of predefined touch-gestures and the support of simultaneous
touch events, which are needed to develop collaborative
applications. The first step in the implementation phase was to
build a clear object and action structure, defining which
data objects should be used and which actions would be performed
on those objects. A database contains all the data objects and
their relations. This database is also used for our website,
where the users may open their drawn images from home or
on their smartphones.</p>
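        <p>A minimal sketch of such a shared database, holding the data objects (stories and their drawn panels) and their relations for both the table application and the website, might look as follows. Table and column names are assumptions; SQLite is used here only for brevity, the paper does not name the database system.</p>

```python
# Sketch: a tiny relational schema for stories and panels, shared by
# the touch table and the web application (hypothetical names).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE story (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE panel (
    id INTEGER PRIMARY KEY,
    story_id INTEGER REFERENCES story(id),
    author TEXT,
    created TEXT
);
""")
conn.execute("INSERT INTO story (id, title) VALUES (1, 'king of the cats')")
conn.execute(
    "INSERT INTO panel (story_id, author, created) VALUES (1, 'visitor', '2016-05-01')"
)
# The website can query the same tables to show a visitor's panels.
n = conn.execute("SELECT COUNT(*) FROM panel WHERE story_id = 1").fetchone()[0]
```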
      </sec>
    </sec>
    <sec id="sec-10">
      <title>IV. EVALUATION</title>
      <p>Two user tests were performed: one was conducted in a
laboratory setting (at an early conceptual stage) and one in the
field (with a completed first release prototype). The study design
and results are presented in the next sections.</p>
      <p>A. User Study in Laboratory Setting</p>
      <p>1) Prototype: The interactive prototype running on the
touch table at this stage of the design study already included
these functions: opening the drawing application via drag and
drop interaction; selecting the brush and setting its width;
drawing on the comic panel; erasing the lines; adding text and
bubble elements and typing text into them. Based on a user-based
usability test we evaluated the prototype to investigate (1) how
effective the drawing application is and (2) how satisfying it
is for the users to draw with their fingers on a multi-touch
table. (3) Furthermore, the text-input method, in our case a
soft keyboard (Figure 8), was part of the analysis. The aim
was to find out how easy it is for visitors to type on a
touch-based keyboard.</p>
      <p>2) Procedure and Participants: 13 high school pupils (11
female and 2 male) at the age of 14 to 15 years participated in
the user test. At this point of development the main functions
of the application were fully developed and implemented on
the touch table. The test equipment consisted of two 40 inch
framed table-tops including infrared to discern the touch points
and two DSLR cameras recording the interactions with the
system and the users’ feedback. During the observations,
handwritten notes were taken. The participants were supposed to
complete a set of seven predefined tasks and were divided into
two groups: one group consisted of single students working
through the tasks and the second group were four students
working together on four individual areas. Both groups faced
the same tasks to complete and did so simultaneously in two
separate rooms.</p>
      <p>
        3) Study Design: The tasks the students had to complete
included: (1) describing what they see, (2) drawing a cat, (3)
letting the cat talk, and (4) changing what the cat is saying.
Due to the qualitative character of the study, the subjects were
asked at the end of the usability test to fill out a questionnaire
on how well they were able to handle the application and how
much they enjoyed doing so. Furthermore, a focus group
discussion with all 13 students regarding questions such as: Did
they like the application? Would they improve certain functions?
Do they have general recommendations? was initiated to get a
broad range of viewpoints and insights. During the test, the
thinking-aloud approach was followed [
        <xref ref-type="bibr" rid="ref4">26</xref>
        ].
      </p>
      <p>4) Test Results: The results suggest that drawing with
fingers on a multi-touch table is very effective and easy to use,
though some of the students struggled to draw as accurately as
they wanted to.</p>
      <p>The responses to the questionnaire indicated that the
drawing part is very satisfying for the participants. All students
ranked the application between 1 and 3 (grades 1 to 5, 1
indicating the highest satisfaction level). They commented
that they would try drawing on the table in the museum, as
well. Some students also revealed that they like to be creative.
Regarding the brush design and variety of colours provided,
they expressed the wish for a thinner brush and more colour
combinations.</p>
      <p>The text input via the keyboard (Figure 9) revealed some
room for improvement: during the test it could be observed
that participants had problems with typing on the keyboard.
The touch areas were too small, causing the keyboard to close
itself when they hit the drawing area instead. The subjects also
called for a cursor.</p>
      <p>Fig. 9: The improved keyboard design as a result of the
laboratory test.</p>
      <p>B. Field Study</p>
      <p>The second user test was conducted on-site in the museum,
testing the application in real world circumstances in the field.
At this point, the development of the prototype was basically
finished. Based on the previous prototype for the laboratory
setting, this prototype was improved and extended. The
improvements included: one brush with more colours to choose
from (8 main colours); collaborative drawing functionality; a
keyboard adapted by resizing the keys. The application was
extended by the functionality of reading comics. This field
study examined (1) how satisfying it is for the user to draw,
(2) how effective the concept of collaborative drawing on a
multi-touch table is, and (3) whether stories were being
developed.</p>
      <p>
        To record user behaviour and interactions remotely, we
installed a webcam beside the touch table and used the
software iSpy [
        <xref ref-type="bibr" rid="ref5">27</xref>
        ] to adapt the recording time of the camera
to the opening hours of the museum. We also implemented
instrumentation functionality to log usage data while the
application was in use (such as which tools were used, which
stories were chosen, or how long drawing sessions took).
      </p>
      <p>1) Procedure and Participants: The table was set up in
one of the rooms of the museum (called Deix room) over
a period of one week. Posters on the sides of the table
explained that it was a university project, that visitors were
invited to draw comics and that users would be filmed when
using the table. These were the only explanations museum
visitors received regarding the use of the table. A camera
filmed the interactions of the visitors when using the table.
When closing the application, a pop-up with a voluntary
questionnaire appeared. In the background, we logged data
to get more insights about the interaction behaviours of the
visitors.</p>
      <p>The test participants were a random group of visitors,
regardless of age and media literacy, who attended the
exhibition at the Karikaturmuseum Krems in the time period
of one week. The exact number of participants is unknown,
as the camera that recorded the users was positioned in a way
that guaranteed their anonymity. Overall, 185 sessions were
captured.</p>
      <p>2) Study Design: The camera was arranged at the side of
the table, filming from high above. So, the whole table was in
focus while the angle preserved the anonymity of the visitors.
The questionnaire was structured in two parts: After giving
their approximate age, gender and their reason for visiting
the museum (or deciding not to answer), they could choose
between different smileys (laughing, neutral, sad) to state
whether they found navigating through the application easy,
whether they instantly knew how to use the drawing tools, and
whether they liked drawing on the table.</p>
      <p>User behaviour was also logged in the background. The
following research questions were the basis for the logging
functionality: (1) Which of the four comics based on a Deix
drawing (“king of the cats”, “women on the beach”, “playing
indians”, “hedgehog”) was chosen most often? (2) Did users
close the drawing app before finishing their picture? (3) How
many pictures were finished in total? (4) Which comics did
they like to read? (5) During which times of the day was
the table used? (6) On which day of the week was it used
most often? (7) Do users prefer working alone over working
in groups? These are some of the questions the log contributed
answers to.</p>
      <p>3) Test Results: Collaboration and Stories - Findings
showed that users are more likely to work on a comic alone
instead of in a group. Their favourite story was the story about
the “king of cats”, but when drawing alone the story about
a “woman on the beach” was chosen most often. Concerning
working in groups, we could observe that people help each
other and work together rather than destroying the work of
other drawers. Even when people work separately on their
own sketches, they stop to help users with problems in using
the application.</p>
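      <p>The kind of log analysis described above, determining which story was chosen most often and how large the share of solo sessions was, can be sketched as follows. The per-session record format is an assumption for illustration; the actual instrumentation logged into the application's database.</p>

```python
# Sketch: aggregate hypothetical per-session log records to answer
# research questions (1) and (7) from the study design.
from collections import Counter

sessions = [
    {"story": "king of the cats", "group": True},
    {"story": "women on the beach", "group": False},
    {"story": "king of the cats", "group": False},
]

story_counts = Counter(s["story"] for s in sessions)
most_chosen = story_counts.most_common(1)[0][0]  # most popular story
solo_share = sum(1 for s in sessions if not s["group"]) / len(sessions)
```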
      <p>Interestingly, the developed panels/drawings exhibited few
elements of a comic, such as text boxes or speech bubbles. A
few participants used a thin brush to write text to complete
their panel instead of using the text tool (Figure 10). Overall,
within 185 sessions, visitors used text elements in only every
fourth session and placed speech bubbles on their drawings in
every second session.</p>
      <p>Reading Finished Comics: During the days of the field study, no
comic was finished completely. Since only completed stories
can be read on the table, the visitors were not able to flip
through stories. Nevertheless, the video recordings showed that the
interface design and interaction were clear and easy to use.</p>
      <p>Participants: In the testing period, visitors between the ages
of 11 and 25 and between 36 and 50 years attended the exhibition. The
application was used by more women than men.</p>
      <p>Questionnaire: The overall response to the drawing
application was very positive. 48 of 60 visitors rated the drawing
application as positive. Respondents were asked to indicate
whether the tools (brush, text and bubble, undo) were
immediately clear. 41 of 60 participants categorized the tools as very
easy to understand and easy to use.</p>
    </sec>
    <sec id="sec-12">
      <title>V. DISCUSSION</title>
      <p>The findings of the two usability and user-experience studies
provide implications and experiences for the design of
collaborative drawing applications with a focus on
storytelling on multi-touch tables in art museums. The results appear
to be consistent with other research, as shown in the following
sections.</p>
      <p>Concept &amp; Interaction Design: The interface of the
drawing application was designed in analogy to well-known
applications such as Adobe Photoshop or Illustrator. The tool
palette is positioned on the left side of the interface, with the
drawing area next to it. By tapping a tool, a menu opens and
provides the different choices the tool offers. Elements such as
text boxes can be placed on the drawing area by drag
and drop or by tapping on their selection. The video
observations of the field study and the personal observation in
the laboratory show that only a few users prefer the possibility
to drag and drop elements over tapping on an element to
have it appear on the drawing area. The comic reading section
is structured the same way; on the left, one can choose
between the four different stories and next to the sidebar
the presentation area is positioned. Based on the statements
of the conducted questionnaires, and interaction behaviors of
visitors as seen on the videos of both studies, we can state
that this structure of the interface works well for a wide range
of visitors.</p>
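      <p>The two placement interactions described above, drag and drop versus tap-to-place, can be sketched as two entry points to the same placement action. This is a minimal, hypothetical sketch; all names are illustrative and not taken from the exhibit's actual implementation.</p>
      <preformat>
```python
# Hypothetical sketch of the two element-placement interactions.
# Both paths end in the same place_element() call; names are illustrative.
DEFAULT_POSITION = (100, 100)  # where tapped elements appear on the canvas

class DrawingArea:
    def __init__(self):
        self.elements = []  # (element, position) pairs on the canvas

    def place_element(self, element, position):
        self.elements.append((element, position))

    def on_drag_and_drop(self, element, drop_position):
        """Drag and drop: the element lands where the finger lifted."""
        self.place_element(element, drop_position)

    def on_tap(self, element):
        """Tap: the element simply appears at a default position."""
        self.place_element(element, DEFAULT_POSITION)

area = DrawingArea()
area.on_tap("speech_bubble")
area.on_drag_and_drop("text_label", (240, 80))
print(area.elements)
```
      </preformat>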
    </sec>
    <sec id="sec-14">
      <p>The user studies showed that a single brush is sufficient,
as long as its thickness can be adjusted appropriately.</p>
      <p>
        Regarding the text-input method, we can only interpret the
results of the study. 185 sessions were detected during the
field study in the museum, but only 57 of the drawings were
signed by the visitors, and only a few comic panels contain text
elements or speech bubbles. This may result from the text input
method, an on-screen keyboard as known from smartphone
applications. We can thus confirm the finding of Wigdor et
al. [
        <xref ref-type="bibr" rid="ref6">28</xref>
        ] that text input on large multi-touch tables can be
problematic, and that more research beyond Hinrichs et al.’s [24]
study is needed to investigate new methods for textual input.
      </p>
      <p>The concept of collaborative storytelling works well in the
specific context of the Karikaturmuseum Krems. We found some
storytelling aspects in one comic, but there were no completed
comics. As there are four different stories to choose from, it
takes a while until one comic is completed, and the number of
panels required for a comic was set too high. We recommend
approximately ten panels per story. A way to get visitors to
complete comics more quickly, and thus be able to offer the
application’s full functionality, could be to start off with only
one story and have visitors unlock the other three stories by
completing one story after the other, until all four stories are
available. This would also force users to work collaboratively,
which could in turn help museum visitors interact more easily.
It would likewise avoid the problem of there being no comics to
read on the multi-touch table. Until the predefined number of
panels for one story has been reached, it is important to show
at least a message saying that this area is empty.</p>
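      <p>The proposed unlocking mechanic can be sketched minimally as follows. All names (StoryUnlocker, PANELS_PER_STORY) are hypothetical; the exhibit's implementation is not described here, and the ten-panel threshold is the recommendation from the text.</p>
      <preformat>
```python
# Hypothetical sketch of the proposed sequential story unlocking.
# Names are illustrative, not from the exhibit's code.
PANELS_PER_STORY = 10  # recommended panel count per story

class StoryUnlocker:
    def __init__(self, story_titles):
        self.stories = [{"title": t, "panels": 0} for t in story_titles]
        self.unlocked = 1  # start with only the first story available

    def available_stories(self):
        return [s["title"] for s in self.stories[:self.unlocked]]

    def add_panel(self, index):
        """Record a finished panel; unlock the next story on completion."""
        if index >= self.unlocked:
            raise ValueError("story is still locked")
        story = self.stories[index]
        story["panels"] += 1
        if story["panels"] >= PANELS_PER_STORY and len(self.stories) > self.unlocked:
            self.unlocked += 1

u = StoryUnlocker(["A", "B", "C", "D"])
for _ in range(PANELS_PER_STORY):
    u.add_panel(0)
print(u.available_stories())  # ['A', 'B']
```
      </preformat>
      <p>Completing story A makes story B selectable; repeating the process eventually makes all four stories available, at which point there is always at least one finished comic to read.</p>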
      <p>
        Collaboration: Our findings show that users are more likely
to work on a comic alone than in a group. We can thus
confirm the results of Block et al. [
        <xref ref-type="bibr" rid="ref7">29</xref>
        ] and stress the need to
provide a meaningful single-user experience. Surprisingly,
however, this tendency differs depending on the topic of the
story: some topics are more likely to be drawn collaboratively
than others. More research would be necessary to ascertain why
the stories differ so noticeably with regard to visitors working
alone or collaboratively.
      </p>
      <p>
        Besides the known behaviors of social learning and
peripheral interest identified by Hinrichs and Carpendale [
        <xref ref-type="bibr" rid="ref8">30</xref>
        ], we
observed that users help each other by giving hints or by
performing interactions for their partners in their personal
areas.
      </p>
      <p>Research Methods: Both research methods, the user study
in the laboratory setting and the user study in the field, proved
very useful. The early user-centered research in the laboratory
gave important insights into problems of the interface and into
user behavior while interacting with the drawing application
on the multi-touch table. Here, we were able to discern early
in the design process that the concept of collaborative
storytelling while drawing is effective and satisfying for users.</p>
      <p>The second usability study in the field, with video
recordings, logging, and a questionnaire, provided further insights
into the usage of the drawing application. The video recordings
allowed us to identify the overall usage of the application and
gave insights into social interactions in groups and alone. The
logged data complemented this with the exact number of sessions,
the stories and tools used, and the ages of the visitors. We can
therefore recommend the combination of video recordings and
data logging as very useful. In upcoming studies we would again
include a personal questionnaire, asking visitors about their
interaction with the interface to gain more insight into the
needs and wishes of the target group.</p>
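      <p>The kind of aggregation this logging enabled can be sketched as follows. The record fields (story, tools, age) are assumptions for illustration; the actual log format of the application is not specified in the text.</p>
      <preformat>
```python
# Hypothetical sketch of aggregating logged tabletop sessions.
# The record fields are assumed for illustration, not the actual log format.
from collections import Counter

sessions = [
    {"story": "story_1", "tools": ["brush", "text"], "age": 14},
    {"story": "story_2", "tools": ["brush"], "age": 42},
    {"story": "story_1", "tools": ["brush", "bubble"], "age": 23},
]

def summarize(sessions):
    """Count sessions, story and tool usage, and collect visitor ages."""
    return {
        "session_count": len(sessions),
        "stories": Counter(s["story"] for s in sessions),
        "tools": Counter(t for s in sessions for t in s["tools"]),
        "ages": sorted(s["age"] for s in sessions),
    }

summary = summarize(sessions)
print(summary["session_count"])   # 3
print(summary["tools"]["brush"])  # 3
```
      </preformat>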
    </sec>
    <sec id="sec-15">
      <title>VI. CONCLUSION &amp; FUTURE WORK</title>
    </sec>
    <sec id="sec-16">
      <p>The presented study was designed to determine the effects
of integrating an application on a multi-touch table in the
context of the Karikaturmuseum Krems. To this end, a drawing
application based on a collaborative concept was developed
and tested both in the field and in a laboratory environment.
The results show that there is large potential for introducing
this kind of digital technology in a museum context. However,
developing such collaborative drawing applications with a
storytelling aspect for art museums introduces some challenges:</p>
      <p>
        Interplay Between Tabletop and Smart Device: Concepts
for multi-display scenarios that incorporate both large displays
and small personal mobile devices have to be explored in depth
in further studies. We approached this subject by giving visitors
the chance to take elements of the museum’s exhibition home,
thus keeping them connected to the development of the stories
as well as to the museum in general. Calling up a website on
their personal smartphone is a step toward the multi-display
trend described in 2010 by Isenberg et al. [
        <xref ref-type="bibr" rid="ref9">31</xref>
        ].
      </p>
      <p>
        Collaboration: Museum studies have found that people
often visit exhibitions in groups [9], [
        <xref ref-type="bibr" rid="ref10">32</xref>
        ]. Yet many museums
offer elements where visitors work on individual tasks
sequentially or in parallel, but never collaboratively [
        <xref ref-type="bibr" rid="ref9">31</xref>
        ]. Systems
should therefore support collaborative work not only by
integrating large tabletops but also through interaction and game
concepts for working in groups on one task. With our tool, we
introduce a storytelling approach for a drawing application
focused on collaborative drawing, by allowing museum visitors
to draw in their personal working areas while simultaneously
drawing in collaboration with other users.
      </p>
      <p>In future research we plan to focus on identifying which
aspects of the application work well in any museum, which
are specific to a certain type of museum, and which only cover
the particular needs of the Karikaturmuseum Krems.</p>
    </sec>
    <sec id="sec-17">
      <title>ACKNOWLEDGMENTS</title>
      <p>We want to thank Gottfried Gusenbauer, the director of the
Karikaturmuseum Krems, who contributed to the project in
many fruitful discussions. This work was supported by the
Austrian Ministry for Transport, Innovation and Technology
(BMVIT) under the ICT of the future program via the VALiD
project (no. 845598) and by the Austrian Federal Ministry
of Science, Research and Economy under the FFG COIN
program (MEETeUX project, no. 7209770).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] A. M. GmbH.
          <article-title>HAUS des MEERES - aqua terra zoo - audioguide</article-title>
          .
          <year>2017</year>
          . Retrieved August 07, 2017 from https://www.haus-des-meeres.at/de/Besucherinfo/Audioguide.htm.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <article-title>Kunsthistorisches Museum Wien</article-title>
          .
          <year>2017</year>
          . Retrieved August 07, 2017 from https://www.khm.at/erfahren/kunstvermittlung/audioguide/.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rekimoto</surname>
          </string-name>
          , “
          <article-title>Smartskin: An infrastructure for freehand manipulation on interactive surfaces,”</article-title>
          <source>in Proc. of the CHI '02. ACM</source>
          ,
          <year>2002</year>
          , pp.
          <fpage>113</fpage>
          -
          <lpage>120</lpage>
          . [Online]. Available: http://doi.acm.org/10.1145/503376.503397
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [26]
          <string-name>
            <surname>M. W. Van Someren</surname>
            ,
            <given-names>Y. F.</given-names>
          </string-name>
          <string-name>
            <surname>Barnard</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          <string-name>
            <surname>Sandberg</surname>
          </string-name>
          ,
          <article-title>and others, The think aloud method: A practical guide to modelling cognitive processes</article-title>
          . Academic Press London,
          <year>1994</year>
          , vol.
          <volume>2</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [27] iSpy - open
          <source>source camera security software</source>
          .
          <source>2015. Retrieved April 05</source>
          ,
          <year>2016</year>
          from http://www.ispyconnect.com.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>D.</given-names>
            <surname>Wigdor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Penn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ryall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Esenther</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Shen</surname>
          </string-name>
          , “
          <article-title>Living with a tabletop: Analysis and observations of long term office use of a multitouch table</article-title>
          ,” in
          <source>Workshop on TABLETOP '07</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>60</fpage>
          -
          <lpage>67</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>F.</given-names>
            <surname>Block</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hammerman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Horn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Spiegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Christiansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Phillips</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Diamond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Evans</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Shen</surname>
          </string-name>
          , “
          <article-title>Fluid grouping: Quantifying group engagement around interactive tabletop exhibits in the wild</article-title>
          ,”
          <source>in Proc. of the CHI '15. ACM</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>867</fpage>
          -
          <lpage>876</lpage>
          . [Online]. Available: http://doi.acm.org/10.1145/2702123.2702231
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>U.</given-names>
            <surname>Hinrichs</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Carpendale</surname>
          </string-name>
          , “
          <article-title>Gestures in the wild: Studying multi-touch gesture sequences on interactive tabletop exhibits,”</article-title>
          <source>in Proc. of the CHI '11. ACM</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>3023</fpage>
          -
          <lpage>3032</lpage>
          . [Online]. Available: http://doi.acm.org/10.1145/1978942.1979391
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>P.</given-names>
            <surname>Isenberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Hinrichs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hancock</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Carpendale</surname>
          </string-name>
          ,
          <source>Tabletops - Horizontal Interactive Displays</source>
          . Springer London,
          <year>2010</year>
          , ch. Digital Tables for Collaborative Information Exploration
          , pp.
          <fpage>387</fpage>
          -
          <lpage>405</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>U.</given-names>
            <surname>Hinrichs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Carpendale</surname>
          </string-name>
          , “
          <article-title>EMDialog: Bringing Information Visualization into the Museum,”</article-title>
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          , vol.
          <volume>14</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>1181</fpage>
          -
          <lpage>1188</lpage>
          ,
          <year>Nov 2008</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>