<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Opening musical creativity to non-musicians</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fabio Morreale</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Experiential Music Lab, Department of Information Engineering and Computer Science, University of Trento</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2013</year>
      </pub-date>
      <abstract>
        <p>This paper gives an overview of my PhD research, which aims at contributing toward the definition of a class of interfaces for music creation that target non-musicians. In particular, I am focusing on the differences in design and evaluation procedures with respect to traditional interfaces for music creation, which are usually intended to be used by musicians. Supported by a number of preliminary findings, we developed the first interactive system: The Music Room is an interactive installation which enables people to compose tonal music in pairs by communicating the emotions they intend to express through their movements throughout a room.</p>
      </abstract>
      <kwd-group>
        <kwd>Musical interfaces</kwd>
        <kwd>user-experience</kwd>
        <kwd>performing art</kwd>
        <kwd>active listening</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Our contribution</title>
      <p>The first interface we developed is The Music Room, an installation that provides
a space where people can compose music by expressing their emotions through
movements. Visitors experience the installation in couples, informing the
system of the emotions they intend to elicit. The couple directs the generation
of music by providing information about the emotionality and the intensity of
the music they wish to create. To communicate emotions, an analogy with
love is used: the proximity between the two visitors affects the pleasantness of the music,
while their speed affects its dynamics and intensity. We decided to limit the
interaction dimensions to closeness and speed in order to keep the experience as
simple and intuitive as possible. Proxemic information is acquired by a vision
tracking system, converted into emotional cues, and finally passed to
the musical engine. These intuitive compositional rules provide everybody with
unlimited musical outcomes. As regards the generation of music, we developed
Robin, an algorithmic composer that composes original tonal music for the piano 1.</p>
      <p>1 Examples of pieces generated at The Music Room can be listened to at goo.gl/Ulhgz</p>
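      <p>As a rough illustration of the mapping just described, the following minimal sketch (not the actual implementation; the function name, normalization, and value ranges are assumptions) shows how proximity and speed could be turned into valence and arousal cues for the musical engine.</p>
      <preformat>
# Minimal sketch of the proxemics-to-emotion mapping described above.
# `distance` and `speed` are assumed to be normalized to [0, 1] by the
# vision-tracking layer; names and ranges here are hypothetical.

def proxemics_to_emotion(distance: float, speed: float) -> dict:
    """Map couple proximity and speed to valence/arousal cues."""
    valence = 1.0 - distance  # closer together -> more pleasant music
    arousal = speed           # faster movement -> more dynamic, intense music
    return {"valence": valence, "arousal": arousal}

# Example: a couple standing close and moving slowly yields calm, pleasant cues.
print(proxemics_to_emotion(distance=0.2, speed=0.1))
      </preformat>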
    </sec>
    <sec id="sec-2">
      <title>Related works</title>
      <p>This project spans several research areas. The adoption of the metaphor of
gestures and emotions is partially influenced by previous collaborative interactive
systems for music generation. The rules of the compositional system are founded
on research on music perception, while Robin is inspired by existing approaches for
algorithmic composition.</p>
      <sec id="sec-2-1">
        <title>Interactive Musical System</title>
        <p>
          Research on the design of interactive systems for generative music has been
growing in the last decade. A number of tangible musical interfaces such as the
Reactable [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], Jam-O-Drum [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], and GarageBand for the iPad, target users
with at least a minimum of musical training, as they rely on sonic and musical inputs.
Another category of interfaces encourages users to collaborate. In particular,
several works exploit the concept of active listening, an approach where listeners
can interactively control the music content by modifying it in real time while
listening to it [
          <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
          ]. TouchMeDare aims at encouraging strangers to
collaborate for reaching a common creative goal: pre-composed music samples are
triggered when both simultaneously touch a canvas [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. In the Urban Musical
Game, users manipulate pre-composed music by playing with sensor-equipped
foam balls [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. With Sync'n'Move, music is also experienced by collaborative
means [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. Two users freely move their mobile phones, and the level of music
orchestration depends on the synchronization of their movements. In Mappe per
Affetti Erranti, a group of people can explore pre-composed music by navigating
a physical and emotional space [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. Once again, collaborative situations are
encouraged, as the music can only be listened to in its full complexity if the participants
cooperate.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Eliciting emotions in music</title>
        <p>
          Related works suggest that the perception of emotions in music depends on
compositional parameters (e.g. tempo, melody direction, mode) and performance
behaviors (articulation, timing, dynamics) whose combinations elicit different
emotional responses in the listener [5-7]. Regarding the measurement and classification of
emotions in music, most of the work in the music domain operates on Russell's
Circumplex model [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. This model describes emotions as a continuum along two
dimensions: valence and arousal. In 1937, Hevner identified the most important
compositional factors in terms of emotion elicitation by labelling them according to the
music's expressive character [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. Juslin and Sloboda later reviewed this
categorization by representing the emotions along the valence/arousal dimensions
[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. There is a consensus that, at the compositional level, mode and rhythm are
responsible for valence, while tempo and dynamics are responsible for arousal.
Despite the remarkable amount of work in this area, no significant study has
attempted to understand to what extent expertise plays a role in judging,
appreciating, and perceiving musical pieces. How do non-musicians perceive and
describe music? What are the musical parameters and semantic elements that
are most relevant to them?
        </p>
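        <p>To make the consensus mapping concrete, the following sketch translates valence and arousal values into coarse compositional parameters. It is an illustration only, not Robin's actual rule set; the function name, thresholds, and value ranges are assumptions.</p>
        <preformat>
# Illustrative sketch of the valence/arousal-to-parameters mapping described
# above: mode and rhythmic regularity follow valence, while tempo and
# dynamics follow arousal. Names and numeric ranges are hypothetical.

def emotion_to_parameters(valence: float, arousal: float) -> dict:
    """Translate valence/arousal in [0, 1] into coarse compositional parameters."""
    return {
        "mode": "major" if valence >= 0.5 else "minor",      # valence -> mode
        "rhythm_regularity": valence,                        # valence -> rhythm
        "tempo_bpm": 60 + int(arousal * 120),                # arousal -> tempo (60-180 bpm)
        "dynamics": "forte" if arousal >= 0.5 else "piano",  # arousal -> dynamics
    }

# Example: high valence, low arousal -> major mode, slow tempo, soft dynamics.
print(emotion_to_parameters(valence=0.8, arousal=0.2))
        </preformat>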
      </sec>
      <sec id="sec-2-3">
        <title>Algorithmic Composition</title>
        <p>
          Generative music composition has been widely explored in the last decade. The
most common approaches are rule-based, learning-based, and evolutionary
composition [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. In the rule-based approach, algorithmic rules inspired by music
theory are manually coded into the system [
          <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
          ]. In the learning-based approach, the
system is trained on existing musical excerpts and a number of rules are
automatically derived [
          <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
          ]. Even though this method reduces the
reliance on the designer's knowledge of music theory, the output heavily depends on the
training set. Lastly, evolutionary algorithms allow the creation of original and
complex melodies by means of computational approaches inspired by biological
evolution [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. The generated music is original and unpredictable, but it might
sound unnatural and lack ecological validity compared to rule-based systems,
which are generally superior in the context of tonal music [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ].
        </p>
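        <p>As a toy example of the rule-based approach, the sketch below generates a short melody from two hand-coded music-theory rules (stay in one key, move by small scale steps). It is an illustration under those assumptions, not a reproduction of any cited system.</p>
        <preformat>
# Toy rule-based melody generator: hand-coded constraints (stay in C major,
# prefer small melodic steps) drive the generation, as in the rule-based
# approach described above. All names and rules are illustrative.
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches of one C-major octave

def rule_based_melody(length: int = 8, max_step: int = 2) -> list:
    """Generate a melody that stays in C major and moves by small scale steps."""
    melody = [random.choice(C_MAJOR)]
    for _ in range(length - 1):
        idx = C_MAJOR.index(melody[-1])
        # Rule: the next note is at most `max_step` scale degrees away.
        lo, hi = max(0, idx - max_step), min(len(C_MAJOR) - 1, idx + max_step)
        melody.append(C_MAJOR[random.randint(lo, hi)])
    return melody

print(rule_based_melody())
        </preformat>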
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Results achieved</title>
      <p>
        A number of personal contributions for each of the three research areas were
recently published. At the Interactivity session of CHI 2013, we demoed The
Music Room [
        <xref ref-type="bibr" rid="ref24">24</xref>
          ], whose objectives, development, findings, and evaluation are
discussed in more detail in the forthcoming publication in the Special Issue of Pervasive
and Ubiquitous Computing on Designing Collaborative Interactive Spaces.
      </p>
      <p>
        The role of expertise in the evaluation of induced emotions in music was
analysed in an experiment we conducted in 2012, whose results were recently
published in the Proceedings of ICME3 [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ].
      </p>
      <p>
        The details of the ideation and implementation of Robin, the algorithmic
composer, will be published in the Proceedings of SMC2013 [
        <xref ref-type="bibr" rid="ref26">26</xref>
          ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>Future works</title>
      <p>
        The last year of my PhD will be mainly devoted to a formal definition of
interactive systems for music creation that target non-musicians. The first
objective is to investigate similarities and differences with traditional digital musical
interfaces. To date, only a few studies have highlighted the differences between
interfaces for artistic experience and for musical expression, and these works have not
had a follow-up in the last decade [
        <xref ref-type="bibr" rid="ref27">27</xref>
          ]. However, we believe that a number of
relevant differences exist. By combining personal intuitions with related
literature findings, we propose a list of potential differences between the two sets.
Possibly, the output of this study will consist of a categorization of musical
interfaces. The idea is to exhaustively describe musical interfaces by means of a
model composed of a number of dimensions such as the following (a rough sketch of such
a model as a data structure is given after the list):
      </p>
      <p>- Target user
- Ultimate goal
- Learning curve
- Interaction metaphor
- Level of direction
- Musical possibilities
- Role of the audience</p>
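      <p>The sketch below renders the proposed dimensions as a hypothetical data structure; the field types and the example values used to position The Music Room are assumptions made purely for illustration.</p>
      <preformat>
# Hypothetical sketch of the proposed categorization model as a data structure.
# The dimension names come from the list above; field types and example values
# are illustrative assumptions, not validated categories.
from dataclasses import dataclass

@dataclass
class MusicalInterfaceProfile:
    target_user: str           # e.g. "non-musicians", "trained musicians"
    ultimate_goal: str         # e.g. "artistic experience", "musical expression"
    learning_curve: str        # e.g. "immediate", "hours of practice", "years"
    interaction_metaphor: str  # e.g. "emotions and movement", "virtual instrument"
    level_of_direction: str    # how directly users control the musical output
    musical_possibilities: str # breadth of achievable musical outcomes
    role_of_audience: str      # e.g. "spectators", "co-creators"

# Example (assumed values): positioning The Music Room on the model.
music_room = MusicalInterfaceProfile(
    target_user="non-musicians",
    ultimate_goal="artistic experience",
    learning_curve="immediate",
    interaction_metaphor="emotions and movement",
    level_of_direction="indirect (emotional cues)",
    musical_possibilities="generative tonal music",
    role_of_audience="co-creators (couples)",
)
print(music_room)
      </preformat>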
      <p>The next step would consist of testing the proposed dimensions against a
series of existing interfaces. Once validated, I will elaborate on defining a series
of evaluation principles for each dimension. This will allow interface designers
to position their projects on the model and to evaluate them accordingly.</p>
      <p>I also wish to tackle a number of challenges regarding The Music Room.
Even though the current implementation received a lot of interest, there is room
for several improvements. A number of innovations to the music engine are currently
under development: the quality of the music will be enhanced by introducing new
genres and instruments as well as by teaching Robin new compositional rules. I
also aim at further elaborating on the communication of intended emotions to the
system. Temporal aspects will be taken into consideration in order to determine
a general evolution of the experience, by considering recurrent patterns of
moving closer and further apart. We are also likely to introduce head-pose tracking
in order to determine whether the two people are looking at each other.
This additional information will be used to differentiate the situations in which
the users are facing each other or turned away, and to direct the music generation accordingly.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Jorda</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geiger</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alonso</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kaltenbrunner</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>The reacTable: exploring the synergy between live music performance and tabletop tangible interfaces</article-title>
          .
          <source>Proceedings of the 1st international conference on Tangible and embedded interaction</source>
          (pp.
          <fpage>139</fpage>
          -
          <lpage>146</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Camurri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Volpe</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Poli</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Leman</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Communicating expressiveness and affect in multimodal interactive systems</article-title>
          .
          <source>IEEE Multimedia</source>
          ,
          <volume>12</volume>
          (
          <issue>1</issue>
          ),
          <fpage>43</fpage>
          -
          <lpage>53</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Balkwill</surname>
            ,
            <given-names>L.-L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thompson</surname>
            ,
            <given-names>W. F.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Matsunaga</surname>
            ,
            <given-names>R. I. E.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>Recognition of emotion in Japanese, Western, and Hindustani music by Japanese listeners</article-title>
          .
          <source>Japanese Psychological Research</source>
          ,
          <volume>46</volume>
          (
          <issue>4</issue>
          ),
          <fpage>337</fpage>
          -
          <lpage>349</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Fritz</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jentschke</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gosselin</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sammler</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peretz</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turner</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Friederici</surname>
            ,
            <given-names>A.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koelsch</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Universal Recognition of Three Basic Emotions in Music</article-title>
          .
          <source>Current Biology</source>
          ,
          <volume>19</volume>
          (
          <issue>7</issue>
          ),
          <fpage>573</fpage>
          -
          <lpage>576</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Juslin</surname>
            <given-names>P.N.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sloboda</surname>
            <given-names>J.A.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Handbook of music and emotion: theory, research, applications</article-title>
          . Oxford University Press.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Temperley</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>The Cognition of Basic Musical Structures</article-title>
          . MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Gabrielsson</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lindström</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , (
          <year>2001</year>
          ).
          <article-title>The influence of musical structure on emotional expression</article-title>
          .
          <source>Music and emotion: Theory and research</source>
          . Series in affective science (pp.
          <fpage>223</fpage>
          -
          <lpage>248</lpage>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>1980</year>
          ).
          <article-title>A circumplex model of affect</article-title>
          .
          <source>Journal of personality and social psychology.</source>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Hevner</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          (
          <year>1937</year>
          ).
          <article-title>The Affective Value of Pitch and Tempo in Music</article-title>
          .
          <source>The American Journal of Psychology</source>
          ,
          <volume>49</volume>
          (
          <issue>4</issue>
          ),
          <fpage>621</fpage>
          -
          <lpage>630</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Todd</surname>
            ,
            <given-names>P. M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Werner</surname>
            ,
            <given-names>G. M.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>Frankensteinian methods for evolutionary music composition</article-title>
          .
          <source>Musical networks: Parallel distributed perception and performance</source>
          ,
          <fpage>313</fpage>
          -
          <lpage>339</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Rader</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>1974</year>
          ).
          <article-title>A method for composing simple traditional music by computer</article-title>
          . IT press, Cambridge, MA
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Steedman</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>1984</year>
          ).
          <article-title>A Generative grammar for jazz chord sequences</article-title>
          .
          <source>Music Perception</source>
          Vol.
          <volume>2</volume>
          , No.
          <issue>1</issue>
          , Fall
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Simon</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morris</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Basu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>MySong: automatic accompaniment generation for vocal melodies</article-title>
          .
          <source>Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems</source>
          (pp.
          <fpage>725</fpage>
          -
          <lpage>734</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Hiller</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Isaacson</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>1992</year>
          ).
          <article-title>Musical composition with a high-speed digital computer</article-title>
          ,
          <source>Machine models of music.</source>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Wiggins</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Papadopoulos</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Phon-Amnuaisuk</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tuson</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>1998</year>
          ).
          <article-title>Evolutionary Methods for Musical Composition</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Nierhaus</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <source>Algorithmic Composition: Paradigms of Automated Music Generation</source>
          . Springer
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Blaine</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Perkis</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>The Jam-O-Drum interactive music system: a study in interaction design</article-title>
          .
          <source>In Proceedings of the 3rd conference on Designing interactive systems: processes, practices, methods, and techniques (DIS '00)</source>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Rowe</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>1993</year>
          ).
          <article-title>Interactive music systems: Machine listening and composition</article-title>
          . MIT Press, Cambridge MA,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Camurri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Canepa</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Volpe</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Active listening to a virtual orchestra through an expressive gestural interface: The Orchestra Explorer</article-title>
          .
          <source>Proceedings of the 7th international conference on New interfaces for musical expression</source>
          (pp.
          <fpage>56</fpage>
          -
          <lpage>61</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Camurri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Varni</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Volpe</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>Towards analysis of expressive gesture in groups of users: computational models of expressive social interaction</article-title>
          .
          <source>Gesture in Embodied Communication and Human-Computer Interaction</source>
          ,
          <fpage>122</fpage>
          -
          <lpage>133</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Van Boerdonk</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tieben</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klooster</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Van den Hoven</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Contact through canvas: an entertaining encounter</article-title>
          .
          <source>Personal and Ubiquitous Computing</source>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Rasamimanana</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bevilacqua</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bloit</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schnell</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fléty</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cera</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frechin</surname>
            ,
            <given-names>J.-L.</given-names>
          </string-name>
          , et al. (
          <year>2012</year>
          ).
          <article-title>The urban musical game: using sport balls as musical interfaces</article-title>
          .
          <source>CHI 2012 Proceedings</source>
          ,
          <fpage>1027</fpage>
          -
          <lpage>1030</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Varni</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mancini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Volpe</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Camurri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>Sync'n'Move: social interaction based on music and gesture</article-title>
          .
          <source>User Centric Media</source>
          ,
          <fpage>31</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Morreale</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Masu</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Angeli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rota</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>The Music Room</article-title>
          .
          <source>CHI'13 Extended Abstracts on Human Factors in Computing Systems</source>
          ,
          <fpage>3099</fpage>
          -
          <lpage>3102</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Morreale</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Masu</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Angeli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fava</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>The Effect of Expertise in Evaluating Emotions in Music</article-title>
          .
          <source>Proceedings of the 3rd International Conference on Music &amp; Emotion.</source>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Morreale</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Masu</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Angeli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Robin: An Algorithmic Composer For Interactive Scenarios</article-title>
          .
          <source>Proceedings of Sound And Music Computing.</source>
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Blaine</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fels</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>Contexts of collaborative musical experiences</article-title>
          .
          <source>In Proceedings of the 2003 conference on New interfaces for musical expression (NIME '03).</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>