       WhoLoDancE: Whole-body Interaction
          Learning for Dance Education
   Anna Rizzo1, Katerina El Raheb2*, Sarah Whatley3, Rosa Maria Cisneros3,
 Massimiliano Zanoni4, Antonio Camurri5, Vladimir Viro6, Jean-Marc Matos8,
Stefano Piana5, Michele Buccoli4, Amalia Markatzi7, Pablo Palacio9, Oshri Even
     Zohar10, Augusto Sarti4, Yannis Ioannidis2 and Edwin Morley-Fletcher1

1 Lynkeus, Rome, Italy
2 Athena Research Centre, Athens, Greece
3 Centre for Dance Research, Coventry University, United Kingdom
4 Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano, Italy
5 Department of Informatics, Bioengineering, Robotics and System Engineering, University of Genova, Genova, Italy
6 Peachnote GmbH, Munich, Germany
7 Lykeion Ton Hellenidon, Athens, Greece
8 K. Danse, Toulouse, France
9 Instituto Stocos, Madrid, Spain
10 Motek Entertainment, Amsterdam, The Netherlands
* kelraheb@di.uoa.gr




Abstract. Dance is among the most ancestral forms of art: a major asset of humanity’s intangible cultural heritage that, at the same time, plays a primary role in contemporary artistic creation. WhoLoDancE, a Research and Innovation Action funded under the European Union’s Horizon 2020 research and innovation programme, pursued the double goal of preserving this heritage and of integrating digital technologies into contemporary dance learning, teaching and choreography. It has done so through the digitalisation of dance movements with motion capture techniques, the creation of a large motion repository - including movements from ballet, contemporary, flamenco and traditional Greek folk dances - and the implementation of breakthrough applications ranging from movement quality annotation and segmentation, similarity search and movement blending to multimodal and virtual reality-based experiences for self-reflection and experimentation. In this paper, we present the prototype tools and results of the project in its concluding phase, highlighting the added value this interdisciplinary approach can bring to dance learning and practice, the main technical, practical and cultural challenges encountered along the way, and the open issues to be addressed in the months to come, providing hints on future research directions.

Keywords: Dance, Education, Learning, Motion Capture, Information
Technology, Machine Learning, Virtual Reality, Intangible Cultural Heritage







  Cultural Informatics 2018, November 3, 2018, Nicosia, Cyprus. Copyright
  held by the author(s).
1       Introduction

WhoLoDancE is a three-year Research and Innovation Action funded under the
European Union’s Horizon 2020 programme (2016-2018), which aimed at developing
and applying breakthrough technologies to dance learning, with the goal of making a
significant impact on dance practitioners, including researchers, professionals and dance students.
   The project builds around five main interconnected objectives:
   1. investigate bodily knowledge by applying similarity search tools, computational
models and techniques for the automated analysis of non-verbal expressive movement
in dance data, to help study movement and learning principles, vocabularies,
mental imagery and simulation connected to dance practices;
   2. preserve the cultural heritage by creating a proof-of-concept motion capture
repository of dance motions, along with built-in methods allowing interpolations,
extrapolations and synthesis among different compositions documenting diverse dance
movement practices and learning approaches;
   3. innovate the teaching of dance by developing multimodal experiences and life-
size volumetric displays that, through immersive and responsive motion capture data,
can identify and respond to collisions between the physical and virtual bodies;
   4. enrich choreography by structuring an interactive repository of motion capture
dance libraries and providing choreographers and dance teachers with a custom dance
data blending engine to assemble a virtually unlimited number of dance compositions;
   5. widen the access and practice of dance by providing access to the created dance
database through commercially available consumer-grade motion capture devices.

2         WhoLoDancE: an Overview

WhoLoDancE has been conceived as an unprecedented approach to dance, leveraging breakthrough technologies to digitalise, preserve and convey the European intangible dance cultural heritage, and to bring profound innovation to the way this long-standing art is traditionally created, learnt and taught [1]. This has entailed the shared definition of twelve dance movement principles [2], set as an open conceptual framework for the subsequent recording of a wide range of dance movements through motion capture techniques, and for their annotation and enrichment. On this basis, the project has developed cutting-edge algorithms to explore, analyse and re-elaborate these data in the service of dance learning and choreography, together with real-time multimodal and virtual body exploration experiences able to offer dance professionals and practitioners a new way of self-reflection-based experimentation.
   To achieve these goals, the project work plan has been articulated in three phases: the first, dedicated to the acquisition of preliminary knowledge and movement data; the second, directed to the definition of expressive movement and music-dance representation models and the preliminary deployment of data-driven and model-driven analysis software; the third, aimed at the final delivery of data-driven tools and visual interactive user interfaces, and their refinement and evaluation by external dance experts.




2.1       Motion Capture, Knowledge Acquisition and Semantic Models
The first project phase has been pivotal for creating the ground knowledge and
database on which to build and train our data-driven tools. The Consortium planned
three motion capture sessions, leading to the production of a substantial volume of
kinetic material representative of the four selected dance genres: ballet, contemporary,
flamenco and Greek folk dances. This has been developed alongside the acquisition of
preliminary knowledge from end-users, obtained through focus groups and interviews
with dance experts inside and outside WhoLoDancE, necessary for a first definition of
different users’ profiles and use case scenarios. Prior to the motion capture acquisition,
dance partners and IT researchers have jointly defined a set of twelve dance movement
principles, as the open conceptual framework of the whole approach.
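To make the outcome of this phase concrete, the sketch below shows how a single motion-capture take and its descriptive metadata, including the movement principles it exemplifies, might be represented. This is a minimal, hypothetical Python data model: the field names and example principle labels are illustrative assumptions, not the actual schema of the WhoLoDancE repository.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical metadata record for one motion-capture take.
# Field names and example values are illustrative only, not the
# actual WhoLoDancE repository schema.
@dataclass
class MocapRecording:
    title: str
    genre: str                      # e.g. "ballet", "flamenco", "greek_folk", "contemporary"
    performer: str
    date: str                       # ISO date of the capture session
    fps: int                        # capture frame rate
    principles: List[str] = field(default_factory=list)  # movement principles the take illustrates

example = MocapRecording(
    title="Arm sweep with turn",
    genre="contemporary",
    performer="Dancer A",
    date="2017-03-15",
    fps=120,
    principles=["weight_transfer", "symmetry"],  # assumed labels, for illustration
)
```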

2.2     Prototype Finalisation, Definition of Representation and Learning Models
The second, intermediate phase of activity has been dedicated to the delineation of models for movement expression and music-dance representation, alongside data-driven and model-driven analysis software. A unique aspect of WhoLoDancE is in fact the variety and diversity of tools that have been created through a cross-disciplinary dialogue, comprising a modular collection of tools for dance practitioners. Among them, the Consortium has finalised its web-based user interface to access the data repository, further supplemented by an annotator [3], which has been employed by dance partners for the manual annotation of motion capture recordings. This process, which consisted of describing dance sequences with respect to a set of movement qualities agreed upon by the team of dance experts, has in turn been essential for the development of a variety of applications, including similarity search, movement sketching and multimodal rendering, and constituted the ground-truth data for training machine learning-based algorithms for automatic annotation and segmentation, to be employed for the enrichment of further movement data. The prototype development adopted a co-design process based on a continuous dialogue with dance professionals through workshops and hands-on sessions. This iterative, user-centred approach provided valuable insights for further improvement and the opportunity to reflect on concrete cases before evolving the tools to their next version, also contributing to the definition of the relevant use case scenarios and outlining innovative approaches to dance teaching, learning and choreographic creation.
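As an illustration of how such manual annotations can serve as training data for automatic annotation, the following sketch fits a simple regressor that predicts a movement-quality score on a 0-10 scale (the rating scale described in Section 6) from per-segment motion features. It is a minimal example using scikit-learn and synthetic data; the features, models and pipelines actually used in the project are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for per-segment motion features (e.g. joint velocity
# statistics) and expert-assigned quality scores on a 0-10 scale.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 12))              # 200 segments, 12 features
scores = np.clip(5 + 2 * features[:, 0] + rng.normal(scale=1.0, size=200), 0, 10)

X_train, X_test, y_train, y_test = train_test_split(features, scores, random_state=0)

# Fit a simple regressor as a stand-in for the project's annotation models.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("MAE on held-out segments:", mean_absolute_error(y_test, model.predict(X_test)))
```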

2.3     Finalisation, Evaluation and Integration
The last phase of the project, currently in progress, sees all partners finalising their efforts towards the refinement, validation and integration of the final version of the developed tools into a single, seamless and comprehensive framework, where users will be able to make the most of what has been achieved within the defined use-case scenarios. Meanwhile, the Consortium is preparing texts, demo videos and tutorials in view of the upcoming final evaluation of the proposed tools by a community of selected dance experts, drawing on the Consortium’s large network of professional
dancers, choreographers, teachers and learners from different dance disciplines. The
evaluation will rely on a combination of qualitative and quantitative methods and
include web questionnaires - for online users - and live evaluation sessions with one-
to-one interviews. These one-day sessions will entail a theoretical presentation of the
tools followed by a hands-on practical workshop, allowing a thorough assessment of the usability, reliability and accuracy of the tools and of their added value for the dance community, for other cultural heritage research on dance, and beyond.


3      A Diversified Consortium: When Art Meets Technology

WhoLoDancE relies on an interdisciplinary team composed of technologists, such as
computer scientists, acoustic engineers, 3D animation specialists, and dance experts,
including dance researchers, teachers and choreographers coming from different dance
specialties. While the former have been responsible for the acquisition and elaboration of motion capture data and for the conception and implementation of analytics, software and technologies, the latter have been fundamental for the practical and theoretical dance knowledge they have shared throughout the project, from the production of motion data, through the segmentation and annotation of dance sequences, to the evaluation of the project tools from the users’ side. Most importantly, the cooperation between technologists and dance experts has been pivotal for the definition of movement principles, along with the elaboration of use case application scenarios for the implemented tools. The interpenetration of art and technology has not been a mere mixture of diverse competences, but rather a constant and enriching dialogue between different methods and practices, which have cross-fertilised, complemented and pushed each other in new directions.

3.1    Computer Science and 3D Animation: The Foundation of WhoLoDancE
Motion capture technology has constituted the very foundation of the whole project, providing the raw data used to model and train machine learning-based algorithms. The key player in this pivotal phase has been Motek Entertainment, a performance capture, 3D animation and VR production studio based in Amsterdam. The studio has also been responsible for the implementation of the dance movement blending engine and the holographic volumetric display technologies.
   Information technology represented the core expertise for the gathering of technical requirements and the development of the data management platform and end-user interfaces. This role has been played by the Athena Research Center in Information, Communication and Knowledge Technologies of Athens, which has also been essential for the development of whole-body interaction technologies.
   Another strategic aspect of the project has been the elaboration of sound, video and movement data into music and dance representation models. This has been achieved through the joint efforts of acoustic and software engineers from the Image and Sound Processing Group of Politecnico di Milano, Casa Paganini-InfoMus at the University of Genova and Peachnote GmbH, Munich. These researchers have worked closely on the development of software libraries for the analysis of kinetic and expressive features of music and movement, and of strategies for mapping movement qualities onto sound and multimodal content; they have also implemented and trained algorithms for leveraging and enriching motion capture data, such as similarity search, automatic annotation and beat tracking. These models have in turn been used as the basis for a variety of applications, ranging from search and similarity systems to real-time motion-based applications.
   These efforts have been coordinated by Lynkeus, a strategy consultancy specialised in the conception and management of EU-funded projects, with specific expertise in ICT and analytics applications. Lynkeus plays the role of Project Coordinator and has been responsible for science communication and dissemination activities, exploitation and IPR management.

3.2     Dance Education and Choreography: Shaping the Project Framework
WhoLoDancE has brought together representatives of Europe’s dance cultural heritage
and the most advanced tendencies in contemporary dance, based on the intersection of
body gesture and digital arts. The former group includes Lykeion ton Hellenidon, a consultative organisation of the UNESCO Intergovernmental Committee for the Safeguarding of the Intangible Cultural Heritage that is active in the preservation of Greek
cultural traditions, a flamenco historian and teacher from the Centre for Dance
Research of Coventry University, and ballet professionals from Instituto Stocos of
Madrid; the latter involves the contemporary dance company K. Danse of Toulouse
and Instituto Stocos. These experienced dancers, teachers and choreographers have
been primarily responsible for the conception of dance movement principles, the
preparation and rehearsal of motion capture sessions and the subsequent annotation
and segmentation of the relevant motion capture data; they have also supported the implementation of the project tools, through insights and feedback from the users’ side, and the development of music and dance representation models.
   As one of Europe’s leading research centres for dance, the Centre for Dance Research of Coventry University has been chiefly responsible for the elaboration of use case scenarios and is leading the evaluation of the project tools, thanks to its broad and varied connections with the dance community at national and international level.


4       The WhoLoDancE Framework

Now in the final year of the project, the Consortium has finalised a wide range of prototype tools, ranging from user interfaces and tools for movement segmentation and annotation to similarity search and multimodal and virtual reality applications (Fig. 1), which are going to be evaluated by a group of selected dance and motion capture experts prior to public release. Most of the tools are web-based, while others, such as Choreomorphy, sonification and the blending engine, are standalone applications, often Unity-based. By the end of the project, the Consortium envisages the harmonisation of all applications into a single solution leveraging all available functionalities. To this aim, partners are currently working towards the integration of the tools into a single, comprehensive framework that will rely on two different access modalities: on one side, a web-based front end dedicated to low-end, distributed services, freely accessible through any personal device; on the other, a remote access modality built around the Unity 3D™ game engine, which will support high-end, interactive and immersive applications, available upon licence. This resource is designed to serve all envisaged application scenarios and to be further expandable to possible additional ones in the future.
   Movement Library and Annotator. The WhoLoDancE movement library (WML) is a web-based interface for navigating the dance motion repository. A user can browse or search recordings by their associated metadata (title, genre, annotations, performer, dance company and date of recording) or create personal playlists. A multimodal player allows users to watch the synchronised playback of the video and the corresponding motion-capture-derived 3D avatar and to interact with it, e.g., by rotating the scene to observe the movement from different perspectives. The embedded annotation tool enables manual annotation of performances with movement qualities, with a tabular and a timeline view.
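As a rough illustration of this kind of metadata-based browsing, the sketch below filters a toy list of library entries by exact-match and membership criteria. The entry fields mirror the metadata listed above, but the data and the search function are hypothetical, not the WML implementation.

```python
# Minimal, hypothetical example of metadata-based browsing, assuming each
# library entry is exposed as a dictionary of metadata fields.
recordings = [
    {"title": "Pirouette study", "genre": "ballet", "performer": "Dancer A", "annotations": ["lightness"]},
    {"title": "Zapateado phrase", "genre": "flamenco", "performer": "Dancer B", "annotations": ["percussiveness"]},
    {"title": "Kalamatianos step", "genre": "greek_folk", "performer": "Dancer C", "annotations": ["bounciness"]},
]

def search(entries, **criteria):
    """Return entries matching every criterion: exact match for scalar
    fields, membership for list-valued fields such as annotations."""
    results = []
    for entry in entries:
        ok = True
        for key, value in criteria.items():
            field = entry.get(key)
            ok = ok and (value in field if isinstance(field, list) else field == value)
        if ok:
            results.append(entry)
    return results

print(search(recordings, genre="flamenco"))
print(search(recordings, annotations="lightness"))
```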
   Segmentation Tool. This tool is designed for manual segmentation of motion
sequences into simpler movement segments. The tool includes a 3D viewer allowing users to rotate the scene, zoom in/out and switch to/from full-screen view, a player to follow the execution frame by frame, and a tabular view.
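A manually produced segmentation can be thought of as an ordered list of non-overlapping, labelled frame ranges; the following minimal sketch shows one such hypothetical representation (the tool’s actual export format is not shown here, and the labels are invented).

```python
from dataclasses import dataclass

# Hypothetical representation of one manually marked segment.
@dataclass
class Segment:
    recording_id: str
    start_frame: int
    end_frame: int
    label: str          # free-text or vocabulary label for the movement unit

segments = [
    Segment("take_012", 0, 240, "preparation"),
    Segment("take_012", 241, 610, "turn"),
    Segment("take_012", 611, 900, "landing"),
]

# Basic sanity check: segments are ordered and do not overlap.
assert all(a.end_frame < b.start_frame for a, b in zip(segments, segments[1:]))
```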
   Blending Engine. The software supports the interactive composition of movement sequences from the mocap data available in the library. Sequences can be assembled in a linear setup, i.e., blending movements consecutively in time to form a longer, seamless sequence, or in parallel, i.e., superposing parts of movements to form new movements, e.g., combining the leg movement of one sequence with the upper body of another.
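The two composition modes can be illustrated with a simple sketch operating on arrays of 3D joint positions: a naive cross-fade for the linear setup and a joint-subset substitution for the parallel one. This is only a toy approximation under simplifying assumptions; the actual blending engine works on richer skeletal data and more sophisticated interpolation.

```python
import numpy as np

def linear_blend(seq_a, seq_b, overlap):
    """Cross-fade the last `overlap` frames of seq_a into the first `overlap`
    frames of seq_b, producing one longer sequence.
    Sequences are arrays of shape (frames, joints, 3)."""
    w = np.linspace(0.0, 1.0, overlap)[:, None, None]
    blended = (1 - w) * seq_a[-overlap:] + w * seq_b[:overlap]
    return np.concatenate([seq_a[:-overlap], blended, seq_b[overlap:]])

def parallel_blend(seq_a, seq_b, joints_from_b):
    """Superpose two equal-length sequences by taking the listed joint
    indices (e.g. the legs) from seq_b and everything else from seq_a."""
    out = seq_a.copy()
    out[:, joints_from_b, :] = seq_b[:, joints_from_b, :]
    return out

# Toy data: 100 and 80 frames, 20 joints, 3D positions.
a = np.random.rand(100, 20, 3)
b = np.random.rand(80, 20, 3)
print(linear_blend(a, b, overlap=20).shape)                      # (160, 20, 3)
print(parallel_blend(a[:80], b, joints_from_b=[0, 1, 2, 3]).shape)
```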
   Similarity Engine. This system analyses the representation of a selected movement recording (the query) and employs a comparison algorithm to identify the most similar movements in the library, according to user-defined criteria (a minimal sketch of such a weighted comparison follows this list). On this basis, the Consortium has built three derived applications, one for each application scenario:
 - in the similarity search web-based desktop application, the user can select a recording of interest in the library and set it as the query to search and retrieve the most similar movements with regard to a user-defined ‘weighted’ template of movement qualities and properties;
 - in the web-based real-time mobile movement search application, the user can record his/her movement through any smartphone camera and use it as the query for similarity search in real time;
 - in the movement sketching tool, the user can capture his/her movement through low-cost sensors (e.g., x-OSC, Notch), analyse it in relation to a selection of expressive movement qualities of choice and search for similar movements, to compare one’s motion with those of professional dancers in the repository.
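The sketch below illustrates the kind of weighted comparison implied by the first scenario: each recording is reduced to a vector of movement-quality values, and a user-supplied weight per quality biases the distance computation. The quality names, values and scoring function are illustrative assumptions, not the project’s actual representation or comparison algorithm.

```python
import numpy as np

def weighted_similarity(query, candidate, weights):
    """Compare two movement-quality vectors (one value per quality) using a
    user-defined weight per quality. Returns a score in (0, 1], higher
    meaning more similar."""
    q, c, w = map(np.asarray, (query, candidate, weights))
    distance = np.sqrt(np.sum(w * (q - c) ** 2) / np.sum(w))
    return 1.0 / (1.0 + distance)

# Hypothetical quality vectors on a 0-10 scale: [fluidity, lightness, sharpness]
library = {
    "ballet_take_03":     [8.0, 9.0, 2.0],
    "flamenco_take_11":   [4.0, 3.0, 9.0],
    "greek_folk_take_07": [6.0, 5.0, 4.0],
}
query = [7.5, 8.0, 3.0]
weights = [1.0, 2.0, 0.5]   # the user emphasises lightness

ranked = sorted(library.items(),
                key=lambda kv: weighted_similarity(query, kv[1], weights),
                reverse=True)
print(ranked[0][0])   # most similar recording under these weights
```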
   Choreomorphy. Choreomorphy is an interactive system that supports reflective
dance improvisation through the use of motion capture technologies. By wearing a
mocap suit, a user can visualise his/her movements in real time through a 3D avatar
and related volumetric trails, with the possibility of switching among different avatars and settings, facilitating self-reflection and experimentation. The application also allows users to load pre-recorded mocap data and watch it with a variety of avatars, environments and effects, or even in augmented reality through the HoloLens.
   Low-end Virtual Reality Platform. This web-based visualisation layer is designed for watching mocap recordings as an immersive VR experience on an ordinary smartphone and can be layered on top of other applications. The platform supports head-orientation tracking and includes a standard avatar for watching mocap recordings, a system for watching videos on virtual walls, and customisable 3D environments.
   Sonification Tool. Sonification provides real-time, responsive feedback on different aspects of a dancer’s movement without causing distraction. This multimodal tool uses different sensors (e.g., Kinect V2 cameras, x-OSC IMU sensors, MYO sensors) to capture the dancer’s movement, while several EyesWeb [4]-based analysis modules examine the dancer’s movement and position on stage and stream the extracted qualities to a sonification environment (e.g., SuperCollider, Max) that maps movement qualities to various elements of the sonification in real time.
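As a simplified illustration of such a mapping layer, the sketch below streams already-extracted movement qualities to a sound engine listening for OSC messages (e.g., SuperCollider on its default port). It assumes the python-osc package and invented OSC addresses, and it stands in for, rather than reproduces, the EyesWeb-based pipeline described above.

```python
import math
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

# Hypothetical mapping layer: send movement-quality values (already extracted
# by an analysis module) to a sound engine listening for OSC, e.g.
# SuperCollider on its default port 57120. Addresses below are invented.
client = SimpleUDPClient("127.0.0.1", 57120)

def quality_to_pitch(fluidity, lo_hz=110.0, hi_hz=880.0):
    """Map a fluidity value in [0, 1] to a pitch on a logarithmic scale."""
    return lo_hz * math.exp(fluidity * math.log(hi_hz / lo_hz))

def send_frame(fluidity, energy):
    """Send one frame of extracted qualities to the sonification engine."""
    client.send_message("/wholodance/pitch", quality_to_pitch(fluidity))
    client.send_message("/wholodance/amplitude", max(0.0, min(1.0, energy)))

send_frame(fluidity=0.7, energy=0.4)
```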




 Fig. 1. WhoLoDancE Framework tools in action: Choreomorphy (top left); low-end virtual reality platform (bottom left); sonification tool (top right); real-time mobile movement search application (centre right); movement sketching tool (bottom right). Credits: Amalia Markatzi.




5      Towards a Multidisciplinary Approach to Dance

Besides delivering tangible technological achievements in the form of software applications and multimodal experiences, the WhoLoDancE approach has helped highlight the added value of bringing information technology into dance and, at the same time, of combining a variety of movement practices from different dance genres.
   Dance is a highly complex human activity, encompassing intangible cultural heritage and contemporary artistic creation, and expressed as an innate and spontaneous social activity as well as a virtuosic art practice and a codified form of academic knowledge. For these reasons, it constituted an excellent testing ground for how technology can provide valuable tools for the preservation and documentation of cultural assets such as dance movements, qualities and practices, while also revealing the challenges and affordances that arise when dance and technology work together to cross-fertilise each other’s practices and methods. In this case, scientific approaches enter the digital medium, working closely with different dance genres to establish a mutually beneficial dialogue and highlighting commonalities within different dance practices, and between dance and other disciplines. These commonalities, which are summarised and reflected in the WhoLoDancE conceptual framework, can provide the basis for further applications, supporting the teaching and learning of other dance genres and movement practices, and offering models that can support engagement with other tangible and intangible cultural forms.
   Moreover, by bringing together very varied dance genres representing very different traditions, such as ballet and flamenco, contemporary and traditional dances, the tools needed to be responsive to disciplinary differences whilst identifying common principles, thereby providing the ideal ground for cross-fertilisation and inspiration, enriching the opportunities for choreographic creation and for cross-pollination across dance genres, and stimulating new methods for teaching, learning and creating dance. Another dimension of the project that has emerged as pivotal is the opportunity to analyse dance movement in depth from a variety of perspectives, as well as from the user’s own position, thereby encouraging self-experimentation and offering alternatives to traditional, imitation-based approaches to dance learning.
For instance, the user can watch a performance simultaneously from different perspectives, such as the frontal video and the avatar obtained from the mocap recording, as provided in the motion repository. This aspect, as highlighted within focus groups with dance experts, emerged as particularly valuable compared with relying only on 2D videos and single-perspective cameras, especially in the field of traditional Greek folk dances, which are performed in a circle, usually with long and heavy costumes that can obscure some of the movement detail. But a major asset, according to dancers from all genres, derives from the possibility of self-reflection, as provided by tools such as Choreomorphy [5], which allows dancers to observe their own movement rendered as different avatar shapes and volumetric trails, or sonification, focused on converting movement position and velocity into sound. These tools
support learners in observing their own movement and the movement of others in
more detail and allow users to play with observing variations in movement qualities,
dynamics and spatial properties by accessing different dancing avatars. Such close




readings - of a dancer's own movement and that of others doing the same, similar or
contrasting movement - may tune perception in new ways, allowing dancers to gain
insights into their own dance development from the 'inside' whilst having the chance to
make connections with other dance practices that may be otherwise unfamiliar to them
[6-7]. The learning thus takes place on a practical level, as well as on a theoretical and
conceptual level, providing knowledge about the history, traditions and contemporary
renditions of cultural (dance) practices and how digital technologies impact on how
these practices are transmitted, learnt and performed.


6       Open Challenges

WhoLoDancE represents a pioneering approach to dance, and from its very outset it has posed several challenges: technical and practical as well as cultural.
   One major issue has been the process of collecting dance annotations from dance experts, which constituted the ground-truth data for the implementation and training of machine learning-based algorithms for movement analysis, above all automatic annotation. This required the a priori definition of a set of movement qualities agreed upon by the entire team of experts, followed by a long process of manual annotation of movements in the form of scores ranging from 0 to 10. These scores, in turn, have been used to construct movement ratings with respect to specific qualities. This procedure presented a primary, technical difficulty, as experts coming from diverse dance specialties showed different - subjective and culturally shaped - perceptions of movement qualities such as fluidity or heaviness. This, together with the limited number of individuals involved in the annotation process, made it very hard to reach the consensus values required for effective algorithm training. To counter this subjectivity bias, one of the further steps agreed upon is to open the annotation procedure to a larger community of dance experts and amateurs on the web. This, however, requires a proper re-design and simplification of the annotation tool and the relevant procedure, so as to be straightforward and easy to use for professionals as well as non-experts.
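One pragmatic way to handle this subjectivity when scaling up the annotation effort is to aggregate multiple annotators’ scores per segment and flag the segments on which they diverge most. The sketch below shows this idea on invented data; the threshold and the aggregation strategy are illustrative assumptions, not a procedure adopted by the project.

```python
import numpy as np

# Hypothetical scores (0-10) from four annotators for five segments on the
# quality "fluidity". Rows: annotators, columns: segments.
scores = np.array([
    [7, 3, 8, 5, 9],
    [6, 4, 9, 2, 8],
    [8, 2, 7, 6, 9],
    [5, 5, 8, 1, 7],
], dtype=float)

mean_score = scores.mean(axis=0)   # candidate consensus value per segment
spread = scores.std(axis=0)        # disagreement per segment

# Flag segments where annotators diverge too much to be used as training
# targets without further discussion (threshold chosen arbitrarily here).
low_consensus = np.where(spread > 1.5)[0]
print("consensus scores:", mean_score.round(2))
print("segments needing re-annotation:", low_consensus.tolist())
```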
   Another important, practical hurdle is linked to the cutting-edge technology and hardware on which the implemented tools are based. This means there is much more to explore in bringing these tools into the everyday life of practitioners, which would require more affordable, portable and less intrusive technologies. This must be done in close connection with the definition of users’ needs in real-life settings, so as to harmonise the tools with existing dance and learning practices.
   Closely related to this point is the major cultural challenge of the project: integrating the use of technology into dance practice, which is still not perceived as a real necessity by most of the dance community. This necessarily involves the active search for opportunities to demonstrate the relevance of the tools through hands-on experiences, such as workshops offered by independent
tools through hands-on experiences, such as workshops offered by independent
companies, integrated classes at dance and music conservatories, special sessions in
dance and technology festivals, dedicated events in art-science conferences and
seminars. These would allow dance professionals time to “play” and engage with the




applications on their own terms, outside of the “contained” environments in which
they have been presented so far. In this sense, previous evaluation sessions indicate that such experiences help shift people’s perspectives and opinions and inspire them with the vast potential offered by these new tools, even in traditional art forms such as flamenco or Greek folk dance. Underpinning the project work is a commitment to
work collaboratively across dance and technology, thereby ensuring that the tools have
‘real world’ benefits to those working in diverse areas of the dance sector.

  Acknowledgments. This project has received funding from the European Union’s
Horizon 2020 research and innovation programme under grant agreement No 688865.


References

1.   Kurin R (2004) Safeguarding Intangible Cultural Heritage in the 2003 UNESCO
     Convention: a critical appraisal. Museum Int 56:66–77. doi: 10.1111/j.1350-
     0775.2004.00459.x
2.   Camurri A, El Raheb K, Even-Zohar O, et al (2016) WhoLoDancE: Towards a
     methodology for selecting Motion Capture Data across different Dance Learning Practice.
     In: Proceedings of the 3rd International Symposium on Movement and Computing. ACM,
     p 43. doi: 10.1145/2948910.2948912
3.   El Raheb K, Kasomoulis A, Katifori A, Rezkalla M, Ioannidis Y (2018) A Web-
     based system for annotation of dance multimodal recordings by dance practitioners and
     experts. In: Proceedings of the 5th International Conference on Movement and Computing.
     ACM, p 8. doi: 10.1145/3212721.3212722
4.   Camurri A, Coletta P, Varni G, Ghisio S (2007) Developing multimodal interactive
     systems with EyesWeb XMI. In: Proceedings of the 7th international conference on New
     interfaces for musical expression. ACM, pp 305–308. doi: 10.1145/1279740.1279806
5.   El Raheb K, Tsampounaris G, Katifori A, Ioannidis Y (2018) Choreomorphy: a whole-
     body interaction experience for dance improvisation and visual experimentation. In:
     Proceedings of the 2018 International Conference on Advanced Visual Interfaces. ACM, p
     27. doi: 10.1145/3206505.3206507
6.   Wood K, Cisneros RE, Whatley S (2017) Motion Capturing Emotions. Open Cult Stud
     1:504–513. doi: 10.1515/culture-2017-0047
7.   Alaoui SF, Bevilacqua F, Jacquemin C (2015) Interactive visuals as metaphors for dance
     movement qualities. ACM Trans Interact Intell Syst 5:13. doi: 10.1145/2738219



