<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Hyperinstruments as interactive systems of music composition</article-title>
      </title-group>
      <abstract>
        <p>Form shaping has been a principal focus of music composition since the mid-twentieth century, when classical musical structures and listening practices began to be questioned by the avant-gardes. An advanced and pioneering system of composition was Xenakis’s use of computers to elaborate scores from stochastic processes inspired by physical laws and complex mathematical behaviours; through non-linear mass distributions, analogous forms were produced in macro-dimensions (orchestra) and micro-sounds (electronics). Starting in the 1980s, a further development of this approach was based on the mapping of actual human gestures onto pitched and synthetic sound contours [1]. More recently, Godøy and Jensenius have been exploring cognitive and computational correspondences between human gestures and music, rooting their concept of music in the traditional electroacoustic idea of the sound object as a primary building block [2]. A sound object is a gestural, form-bearing perceptual unit, a fragment of concrete sound typically in the range of a few seconds, which can be seen as a structural counterpart of the more traditional element called the “musical note”. The notion of gesture as a sensitive metaphor for the interpretation and analysis of musical forms has become a consolidated topic over recent decades, blurring the boundaries between score-based and electroacoustic composition. The current development of sensing systems, such as real-time sound analysis and motion tracking, is supplying factual means for research in the field of performance-based interactive music. Since their origin, the interactive behaviours of hyperinstruments have been implemented as a means of empowering performers to intentionally influence the electroacoustic outputs of score-based compositions through their performance gestures on stage [3]. Starting from the notion of interactivity, we consider the potential of current sensing systems to be part of complex, digitally formalized compositional networks and processes, in the light of the current emergence of embodied cognition frameworks. This paper explores topics bridging the meanings of music composition and music gesture, presenting in conclusion some hypotheses supporting innovative systems of performance-based real-time digital composition implemented by the author.</p>
      </abstract>
      <kwd-group>
        <kwd>Scores as instruments</kwd>
        <kwd>gesture-based composition</kwd>
        <kwd>physical computing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The concept of a contribution by the performer to the compositional process is
an ancient topic, since music scores have traditionally allowed degrees of freedom for
individual and even on-the-fly performance choices. Pre-classical scores mostly
delineate frameworks organised as pitch/duration note-based “discourses”, expanding
in metric/harmonic sequences within defined macro-forms. The score, as a designed
representation, needs to be completed by live ornamental and polyphonic contributions
from the performer, who knows and shares the relevant compositional technique.</p>
      <p>It should be noted that extra-European traditional practices mostly neglect to
consider composition and performance as distinct roles, and in the case of written
music documents we most often find collections of tunes, patterns, lexicons, symbolic
associations, and congruous behaviours to be mastered and “composed” by the “performer”,
who elaborates original expressions from sets of principles. The Werktreue idea of a
score as an ideally whole connotative entity started to emerge within the Western
Classical and Romantic era, shifting the role of the performer to the more constrained
responsibility of a subjective accomplishment of the Work, taking into account a subset
of implicit meanings whose sonic realisation denotes an art of interpretation. In this
way music scores can be seen as a full symbolic representation of the sounds required
by the composer, in other words as the Text of the composition.</p>
    </sec>
    <sec id="sec-2">
      <title>1.1 Recording technologies</title>
      <p>
        The development of recording technologies during the past century appears to have
caused a dual process: on the one hand dramatically increasing the demand for a perfect
and objective accordance of live performances with the written Classical score, while on
the other hand conferring the status of a corollary textuality on multiple recorded
performances, often quite different from one another. Through recording we can
objectively analyse different performance renderings of the same score; we can also
extract and examine features of non-written compositions, and even textually evaluate
free improvisations, since they are recorded on a support [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        This situation led to a rethinking of some terms of the debate about what
composition is. In addition, recording technologies made it possible to analyse and
formalise the most subtle sound morphologies, allowing timbre to be considered a
principal object of knowledge and compositional treatment. Timbre parameters started
to be structurally relevant, no longer confined to the standardised and ancillary roles
assigned to them by the Western tradition. In this context John Cage’s claim about the
impossibility of an exhaustive textuality of the music score, and the consequent
deduction that every score has clear degrees of indeterminacy, appears significant: the
choice of which parameters should be more precisely defined inside a score is therefore
a social habit, or an individual decision [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>1.2 Unconventional scores</title>
      <p>The mid-twentieth century witnessed wide-ranging experimentation with new scores,
abandoning traditional note-oriented approaches and developing non-connotative
features such as action notations (defining which instrumental gesture is to be
performed irrespective of the resulting sound), free-graphic approaches, timbre- and
process-oriented notations, verbal instructions, combinatorial systems and circuits.</p>
    </sec>
    <sec id="sec-4">
      <title>1.3 Interactivity</title>
      <p>
        The persistence of traditional notation strategies now offers a multilayered landscape
which, in the case of software composition, makes it possible to produce programs
intrinsically intended as both representational and operative, in other words acting as
scores and instruments at the same time [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This assertion can be considered a
kernel of interactivity in composition. The radical thrust of conceiving composition as a
combined action of textual machines treated as instruments was pioneered at the advent
of electroacoustic music through physical manipulations of recording tape, variable
voltage control of the mathematical rules actually synthesising sounds, and algorithmic
systems of note composition through rule-based or data-driven combinatorial processes.
      </p>
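      <p>As a minimal illustration of the last point, the following sketch shows a toy rule-based
combinatorial process of note composition; the pitch set, durations and interval rule are
hypothetical assumptions standing in for the kinds of constraints such systems formalise,
not a reconstruction of any of the historical systems cited here.</p>
      <preformat>
import random

# A toy rule-based combinatorial generator: the pitch lexicon, durations
# and interval rule below are illustrative assumptions only.
PITCHES = list(range(60, 73))      # one chromatic octave of MIDI note numbers
DURATIONS = [0.25, 0.5, 1.0]       # quarter, half and whole beats

def allowed(prev, candidate):
    """Rule: forbid repetition and bound melodic leaps to a perfect fifth."""
    return abs(candidate - prev) in range(1, 8)

def generate(length=16, seed=None):
    rng = random.Random(seed)
    notes = [rng.choice(PITCHES)]
    while len(notes) != length:
        options = [p for p in PITCHES if allowed(notes[-1], p)]
        notes.append(rng.choice(options))
    return [(p, rng.choice(DURATIONS)) for p in notes]

print(generate(8, seed=1))   # one reproducible output of the stochastic rule set
      </preformat>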
      <p>
        Interactivity is underlined by Horacio Vaggione’s action-based approach to composition,
escaping linear formalisations towards multi-syntactical strategies borrowed from
object-oriented programming methods. In this perspective algorithms are not seen as
abstractions allocating mechanisms towards a result to be directly listened to, but
rather as processing tools producing their own rules and encapsulating the listening
action of the composer as part of the operation [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In fact the potential to exploit
computation for analysis, symbolic representation (such as scores and rules) and
sound synthesis, even within a single environment, currently allows networking, contextual
and semantic behaviours previously unpredictable in terms of complexity. In this
direction we could quote, among others, productions and research oriented towards
multi-agent ecosystem methods of real-time composition inscribing human choices and
environmental conditions as part of AI procedures [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <sec id="sec-4-1">
      <title>2. Sound and gesture</title>
      <p>Electroacoustic music is characterised by the direct manipulation of sound on supports
(recording tools, editors or software). In this way so-called sound-based
composition potentially makes it possible to bypass the presence of a traditional performer and a
symbolic representation (score), embedding sound synthesis, transformation,
organisation, storage and diffusion inside a group of machines: sound can thus be
directly shaped without any intermediate layer, by means of a chosen studio machine
acting as an instrument-support tool. In this way it becomes natural to create music
derived from real-life sounds, thereby extending the concept of musical timbre.</p>
      <p>Traditional music theories were grounded on the concept of music notes: discrete
chunks of “ideally pure” sounds, sharing a scalar space of frequencies (pitch) and
durations, functionally organised through standardised or innovative macro-forms often
relating to dance, poetry, mathematics, architecture, or rhetorical figures. In the last
century, the further extension of the notion of musical sound to all possible audible
phenomena, of which traditional instrumental sounds are a special family, produced
new contrasting theories mostly developing Schaeffer’s concept of Musique Concrète.</p>
    </sec>
    <sec id="sec-5">
      <title>2.1 Sound objects and morphologies</title>
      <p>
        Schaeffer's phenomenological approach to music explored the perceptual qualities of
real-world sounds, creating an idea of composition based on sound fragments that exist
in reality, considered as discrete and complete “sound objects”, aiming to remove
music from the idea of structured “sound abstractions” [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The “sound object” is a
fragment of recorded tape, or a continuous sound repetition through a closed groove
(the so-called sillon fermé). Through repetition or de-contextualising manipulation the
“sound object” is abstracted from its reality, becoming an object of musical
contemplation. In the age of analog technologies in the mid-twentieth century, this extraction
of sound objects was a physical action/gesture of composition through cut-and-splice
strategies acting upon the actual recording support. The length of the sound object,
broadly modelled on the archetype of the “note”, shares with the note the potential to
be treated in a phonetic fashion. Forcing the linguistic comparison, we might argue that
the note can be seen as an arbitrary sound potentially part of a pseudo-logical
music organisation, and in this sense many older theories and pedagogic approaches
stress language-based metaphors describing music forms as phrases, periods and
macro-structural abstractions, generally intended as devoid of arbitrary meanings. By
contrast, a sound object retains its concrete overall shape: it is a small perceptual
pattern, a unit of an audible gesture, in a sense a “timbre” block. Schaeffer’s last work
represents a systematic effort to organise a lexicon of typo-morphologies of
sound objects, a Solfège based on the perceptual surface characters of these catalogued
sound units: in other words, on their action/perceptual content. The principal categories of
the inventory relate to iteration, continuity, grain, impact, saturation, allure, profile and
internal dynamics [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        Among the multiple productions and theories developed after Schaeffer,
Spectromorphology is currently considered a principal electroacoustic compositional
framework and subject of reflection. The accent is placed on the temporal and spatial features of
sound in relation to the macro-evolution and dynamic consistencies of the composed
sound, not confining the analysis to object typologies but showing an event-based
constitution of the virtual sound world of electroacoustic music. Framed by the main
categories of gesture and texture, sound movements are catalogued in terms of their
rooted/floating qualities, trajectories, propagations, and the multi-dimensional and behavioural
aspects of sound organisation [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
    </sec>
    <sec id="sec-6">
      <title>2.2 Time Scales of Music</title>
      <p>
        On the other hand, starting from Stockhausen’s pioneering research, and taking into
account the developments of sound science and digital sound processing research,
musical sound categories can be unified in terms of time perception within the so-called
theory of the Time Scales of Music [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In this sense the Macro, Meso and Sound Object
time scales fall within a range consciously detectable and
analysable by humans, and traditionally scored and represented. Sound objects share a
similar time scale (a few seconds) with traditional music notes
(approximately from 200 milliseconds to 3-4 seconds), while the macro and meso levels
can easily be reabsorbed into the terms of traditional macro and intermediate musical
forms. Micro time scales can instead describe and compute events and manipulations
that were difficult to manage logically prior to the advent of digital means.
      </p>
      <p>
        The fastest events perceivable and producible by humans cannot fall below a
threshold of circa 100 milliseconds, and the spontaneous human tendency is to group
very quick events into patterns. Below this threshold we find a blurred
zone of roughness and reverberation, extremely important for detecting the character of
sound attacks and dynamics, linked to a global unconscious identification of the timbral and
emotional qualities of the sound. The time scale roughly between 1 and 20 milliseconds
pertains to the perception of pitch (from 50 to 1000 Hz). A faster time scale, from less
than 1 millisecond up to a few milliseconds, relates to filtering, digital effects and,
interestingly, to the real perception of timbre qualities through unconscious auditory
fusion [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
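      <p>As a compact summary of the thresholds quoted above, the following sketch classifies an
event duration into the time scales discussed in this section; the scale labels follow the text,
while the exact boundary values are indicative assumptions rather than normative limits.</p>
      <preformat>
# Indicative classifier for the time scales discussed above; the boundaries
# (in seconds) paraphrase the perceptual thresholds quoted in the text.
TIME_SCALES = [
    (0.001, "auditory fusion: filtering, digital effects, timbre qualities"),
    (0.02,  "pitch perception (periods of roughly 50-1000 Hz)"),
    (0.1,   "blurred zone: roughness, attacks, reverberation"),
    (4.0,   "sound object / note (circa 0.2 to 3-4 seconds)"),
    (60.0,  "meso level: phrases and intermediate structures"),
]

def classify(duration_s):
    """Return the first time-scale label whose upper bound exceeds the duration."""
    for bound, label in TIME_SCALES:
        if bound > duration_s:
            return label
    return "macro level: whole sections and forms"

for d in (0.0005, 0.01, 0.05, 1.5, 30.0, 600.0):
    print(d, "->", classify(d))
      </preformat>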
    </sec>
    <sec id="sec-7">
      <title>2.3 Digital Composition</title>
      <p>The potential of software to declare, compute and process heterogeneous functions
proceeding through diversified time scales obviously represents a huge advantage.
Musical programming, and formal and/or graphic representations, help to empower complex
kinds of analysis and to frame consistent music structures, which need to be
“performed” by the system (automatically or by human actions) in order to generate a
composition. For this specific purpose, it seems unimportant whether the
“compositional performance” happens in real-time (on stage) as opposed to off-line and
step-by-step (in the studio), or whether the result is intended to produce a notated score rather
than to directly shape sounds.</p>
      <p>
        The relevant fact is that every kind of Computer Aided Composition involves
software enacting processes implied by a final composition, generally too complex to
be fully controlled by a human mind, and requiring a human response
(or evaluation/choice) when faced with the non-deterministic outputs resulting from the initial
conditions set by the composer: obviously algorithms are a huge collection of tools, not
the composition itself. The focus on processes and interactive design whose output cannot be
fully foreseen shows a non-classical attitude of viewing the essence of the composition
as the living dialectic between diverse entities and agents [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]: composers can be
interested in showing the autonomous results of the composed pre-conditions, or be
part of the system in order to live-constrain it, perhaps adding further layers.
If the result is instead to be a fixed score, composers can in any case choose
the most successful final work from among the different outputs generated by
non-deterministic systems, or exploit computers only for local problem solving.
      </p>
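      <p>A minimal sketch of this last scenario, under purely hypothetical initial conditions: a
non-deterministic process is run several times from the same pre-conditions, and the
selection among the divergent outputs remains a human evaluation/choice (the crude
numeric ranking below is only a convenience for inspection, not the decision itself).</p>
      <preformat>
import random

def stochastic_phrase(rng, length=12):
    """One non-deterministic output from fixed initial conditions (a toy random walk)."""
    pitch, phrase = 60, []
    for _ in range(length):
        pitch += rng.choice([-5, -2, -1, 1, 2, 5])
        phrase.append(pitch)
    return phrase

# Same pre-conditions, several divergent outputs; the composer's
# evaluation closes the loop.
candidates = [stochastic_phrase(random.Random(seed)) for seed in range(6)]
for i, c in enumerate(candidates):
    print(i, max(c) - min(c), c)     # inspect range width, then choose by ear
chosen = candidates[3]               # the human decision, not an algorithm
      </preformat>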
    </sec>
    <sec id="sec-8">
      <title>2.4 Notions of Gesture</title>
      <p>
        If traditional scores depend on the performance gesture (at least an imagined one, in the case of
an expert) in order to be realised, and are probably the final fixed result of previous
instrumental/conceptual gestures, new technologies appear to have more intimately
embedded gestural approaches to composition, as previously mentioned while
discussing sound objects and spectromorphology. If gesture appears as a native
rationale in the field of sound-based composition, since a “concrete sound” is
intrinsically a gesture, we notice a growing trend to deploy the category of gesture also
in score-based, even traditional, music. Bierwisch defined music as a gestural form
because of its iconic and combinatorial status, dynamically oriented to shaping surfaces,
contours and irregularities, navigating through structures, in opposition to language,
which is essentially a logical form [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        Gestures denote non-verbal transfers of information through body movements, not
necessarily conveying conventional meaning, and often emphasising emotion and
expression. An interesting isomorphism linking gesture and music regards the joining
of physical motions with human intentions, through a rhetorical attitude calling for feedback
[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Sound and gesture share a physical, dynamic, spatial and semiotic attitude, and in the
case of sound-producing (instrumental) gestures they manifest a joint intention,
semiosis and embodiment. In this sense the action upon a controller cannot be defined
as a gesture. But the trajectories of notes on a score, just like direct sounds on a
support, are indeed considered gestures by virtue of their physical, semiotic or
perceptual consistencies.
      </p>
      <sec id="sec-8-1">
      <title>3. Interactive Music</title>
        <p>Interactive music needs a sensing input coming from the real world, and its factual
status relates to digital processes. Sensing is a kind of physical computing which
exploits audio input (microphones or pickups) and/or motion tracking, mainly in the
form of optical and inertial systems, and can also be supplemented by force detectors and
potentially any other means of body and environment monitoring. What happens
in the world flows as a vector of data acting as a collection of real-time variables,
depending on the quality of the analysis, the kinds of features and trajectories chosen to
be extracted, and the types of interaction wanted by the composer. In other words, the
complexity of this hermeneutic step relies on a transparent transformation of low-level
physical quantities into mid/high-level meaningful features.</p>
        <p>
          In the case of audio input treated as a data collector, interactive artists exploit
objects of analysis relating to acoustic knowledge and music theories. Motion tracking
often involves algebra, geometry and kinaesthetic descriptors, taking into account the
currently consolidating tendency towards a search for corporeal high-level features, often
relevant to embodied cognition theories. In this sense the body is seen as a mediator
between matter and mind, and the search moves to defining the relations between
corporeal articulations (countable patterns of movement) and subjective intentions such as
non-verbal messages, socially shared techniques of movement, functional cues and
behavioural resonances [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. A subset of analysis linking the Schaefferian sound
typo-morphologies to the functional segmentation of music-related actions, such as
sound-producing, excitatory, modulatory or sound-accompanying actions, can be found in the
field of the so-called Music Retrieval Ontologies [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. Machine learning systems are
sometimes applied for the detection of complex gestures such as bow-movements [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
Music Information Retrieval is mostly concerned with the implementation of objects
able to extract information from the raw audio signal by processing its spectrum and the
iterative patterns of its amplitude or brightness contours, in order to return significant
perceptual features through complex reverse engineering, giving rise to high-level
music descriptors.
        </p>
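        <p>A minimal numpy sketch of this low-level/mid-level step, assuming a mono signal array:
RMS amplitude and the spectral centroid (a standard proxy for perceived brightness) are
computed per analysis frame, as a drastically simplified stand-in for the feature extractors
mentioned above.</p>
        <preformat>
import numpy as np

def frame_features(signal, sr, frame=2048, hop=512):
    """Per-frame RMS amplitude and spectral centroid (brightness proxy)."""
    feats = []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame] * np.hanning(frame)
        rms = float(np.sqrt(np.mean(x ** 2)))
        mag = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(frame, 1.0 / sr)
        centroid = float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
        feats.append((rms, centroid))
    return feats

# Hypothetical usage on one second of a 440 Hz test tone:
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
print(frame_features(tone, sr)[0])   # (amplitude, brightness in Hz)
        </preformat>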
    </sec>
    <sec id="sec-9">
      <title>3.1 Composition and instrument</title>
      <p>Interactivity allows a live dialogue between the performance on stage and the electronics,
restoring a consistency that is partially lost when live electronics are controlled by
offstage machines, or even by on-stage controllers.</p>
      <p>If the performer flips a switch on an electric guitar we will hear the sound effects
change, but if the sound effects are variably dependent on the kinds of patterns,
timbres or intensities currently played by the guitar, we notice an increase in complexity
and expectancy. It is self-evident that interactive systems hybridise the concepts of
instrument, performance and composition. Since the performance influences the electronic
sound, playing an instrument also involves playing the electronics and the final relation
between the two: in other words, “live-composing” a multilayered structure. In this case
software composition must be procedural, modular and reactive (in a sense,
“performative”). Interactive software design shows overlapping aspects between the
categories of instrument and composition.</p>
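      <p>The difference can be made concrete with a small sketch: instead of a switch setting a
fixed state, a toy non-linear mapping (all parameter names and coefficients below are
hypothetical) derives the effect parameters from features of what is currently being played.</p>
      <preformat>
def interactive_mapping(rms, centroid, onset_density):
    """A toy mapping: effect parameters depend on the playing itself
    (intensity, brightness, activity), not on a switch position."""
    params = {}
    # Louder playing shortens the delay; brighter playing opens the filter.
    params["delay_time_s"] = max(0.05, 0.6 - rms * 0.5)
    params["filter_cutoff_hz"] = 300.0 + centroid * 2.0
    # Dense onset activity cross-fades towards a granular layer.
    params["granular_mix"] = min(1.0, onset_density / 8.0)
    return params

print(interactive_mapping(rms=0.4, centroid=1800.0, onset_density=5.0))
      </preformat>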
      <p>
        Interactive music is therefore often inscribed in a pre-composed score; in this way
the spread of the interconnections becomes local and is absorbed by the planning
responsibility of the composer. The composition can also leave small or large windows
of free exploration to the performer, offering more elastic results. Many systems are
instead based on improvisation, opening a broad HMI dialogue whose responsibility is
shared by the performer and the composer-programmer. Radical experimental
approaches involve a single performer prototyping his/her own interactive languages and
exploring new musical boundaries [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. It is well known that interactive systems can also
allow the audience to gain channels of influence upon a live performance. A taxonomy
of interactivity can be built on a continuum over the range of complexity of the
systems. When just a few linearly shaped parameters drive the variable machine
response, the system is defined as instrument-like, while greater complexity relates to a
more compositional response [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. Originally complexity was linked to an idea of
unpredictability, sometimes useful for increasing human creativity by enhancing the
sense that a machine interacts instead of simply reacting; the improvement in sensing
tools and high-level descriptors obviously contributes to the perceptiveness of such
systems. We further note the possibility of discriminating between note-based and
sound-based approaches, the latter being more involved in timbral and spatial
electronic treatments. Note-based interactive systems, originally built upon the MIDI
protocol, were able to manage traditional note-oriented “languages” in real-time,
making it possible to implement HMI systems that dialogue in terms of musical symbols and
structures. Current software easily mixes and swaps both approaches.
      </p>
      <p>
        Hyperinstruments (also called digitally augmented instruments) are a special
family of interactive systems implementing an acoustic-digital unity focused on the
typical performance actions of traditional instruments. Through features extracted
by sound analysis and/or motion tracking of the sound-producing gestures, and a network
of digital mappings, they pursue a “chamber music” ideal of continuity from performance
to digital composition [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <sec id="sec-9-1">
      <title>4. Gestural systems of real-time composition</title>
        <p>Hyperinstruments, since they do not physically modify traditional instruments, rely on
acknowledged techniques and expressive rhetorical patterns. The idea of navigating
within virtual worlds is currently quite common, often at the cost of losing continuity
with the real world. Augmented Reality, as a true world filled with data, needs gestures
(non-verbal transfers of meaning) rather than controllers. The goal of my systems is an
intimate sonic re-appropriation of symbolic score-machine flows.</p>
        <p>
          My reference software is the interactive music environment Max/MSP [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ],
through which I collect networks of sensing data coming from minimal equipment
consisting of audio pickups and/or inertial motion units [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ], whose resulting features are
analysed through specialised libraries. Compositions are themes (narratives) upon
which the performer is requested to operate a search, to make choices, to explore the
sounds coming from the electronics, elaborating individual strategies. External verbal
scores tell the performers how to influence the overall sound result and how to guide
the system, which variably develops in part automatically and in part as a consequence
of the performance. The laptop screen acts as a variable animated score, proposing and
responding (sometimes interactively generating common notation, as a result of the
performance gestures, that has to be sight-performed in a loop). In the case of an
ensemble the performers send reciprocal messages and interactive scores, and elaborate
on-the-fly pre-determined collective goals. The performers can gain a detailed knowledge
of the interaction through rehearsal, but they can also interact loosely and intuitively,
discovering step by step. The verbal scores inform the performers of the means
they can interact with, and they can monitor the composition’s behaviour by listening and
through the visual screen. Depending on each single composition, the performers can
communicate and interact through note intervals (onset/pitch detection), instrumental
timbres, rhythmic patterns, contrasting musical sequences (in this case recognised by the
system through machine learning), or pitch ranges. In the case of motion tracking, the
best results have been obtained through bowing-style recognition and sound-accompanying
gestures. Performers learn how to expand their gestures in order to integrate their
acoustic result with the system’s behaviour and sound as a single consistency.
Each interaction is a special software instance focusing on specific techniques,
sound/event search, and performance problem solving according to the benchmark “fiction”
trajectory [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ].
        </p>
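        <p>As a purely hypothetical textual sketch (the systems themselves are Max/MSP patches,
and none of the following names or thresholds are taken from them), one interaction node
of this kind can be pictured as a small state machine: an incoming onset/pitch event
updates the state and may emit a notation fragment back to the performer’s animated score.</p>
        <preformat>
# Hypothetical sketch of a single interaction node, not the author's patches:
# an onset/pitch event updates the node state and may trigger a response.
class InteractionNode:
    def __init__(self, trigger_interval=7):
        self.last_pitch = None
        self.trigger_interval = trigger_interval   # e.g. a perfect fifth

    def on_event(self, pitch, amplitude):
        """Return a notation fragment when the trigger interval is played."""
        response = None
        if self.last_pitch is not None:
            if abs(pitch - self.last_pitch) == self.trigger_interval:
                # A short fragment to be sight-performed in a loop.
                response = [pitch + i for i in (0, 2, -3)]
        self.last_pitch = pitch
        return response

node = InteractionNode()
for p in (60, 67, 64, 71):
    print(p, node.on_event(p, amplitude=0.5))
        </preformat>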
        <p>The systems are intended as gesture-based compositions, since the non-linear nodes
of the local mappings are constrained by input gestures which are physical signals,
intentions, and performance techniques mediated by software symbolic actuators.
The input gestures (timbres, note contours, sound patterns) are intimately complex, and
the performer has to understand how the machine selects their features and modulates
the “socially” goal-oriented tasks. The semiosis between human and system
(and between humans, in the case of an ensemble) operates through scores and
representations. Scores are generated as gestural resonances, local messages and
autonomy/heteronomy negotiations displaying the specific narrative. In this sense
the performer and the pre-programmed system are treated as agents of a single environment for
shared strategies of composition. Improvisation is allowed as an emergent strategy
of contextual adaptivity, but performers need to predetermine fixed individual
strategies, not in order to gain control (since the system is self-regulating) but in order
to gain a maximum of meaning.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>I.</given-names>
            <surname>Xenakis</surname>
          </string-name>
          , Formalized Music, Pendragon Press, New York,
          <year>1992</year>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.I.</given-names>
            <surname>Godøy</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>M.</given-names>
            <surname>Leman</surname>
          </string-name>
          , Musical Gestures: Sound, Movement and Meaning, Routledge, New York,
          <year>2009</year>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T.</given-names>
            <surname>Machover</surname>
          </string-name>
          ,
          <article-title>Hyperinstruments. A Progress Report 1987-1991</article-title>
          , MIT Media Laboratory (
          <year>1992</year>
          ), http://opera.media.mit.edu/publications/ (last accessed 7/17)
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>Caporaletti</surname>
          </string-name>
          ,
          <source>I processi improvvisativi nella musica</source>
          , LMI, Lucca,
          <year>2005</year>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Cage</surname>
          </string-name>
          , Silence, Wesleyan University Press, Middletown,
          <year>1961</year>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>N.</given-names>
            <surname>Schnell</surname>
          </string-name>
          &amp;
          <string-name>
            <surname>M.Battier</surname>
          </string-name>
          , Introducing Composed Instruments,
          <source>Technical and Musicological Implications, Proceedings of the 2002 Conference on New Instruments for Musical Expression</source>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Vaggione</surname>
          </string-name>
          ,
          <article-title>Some ontological remarks about music composition processes</article-title>
          ,
          <source>Computer Music Journal</source>
          <volume>25</volume>
          :
          <issue>1</issue>
          (
          <year>2001</year>
          ),
          <fpage>54</fpage>
          -
          <lpage>61</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Eigenfeldt</surname>
          </string-name>
          ,
          <article-title>Real-time Composition as Performance Ecosystem</article-title>
          ,
          <source>Organised Sound 16:2</source>
          (
          <year>2011</year>
          ),
          <fpage>143</fpage>
          -
          <lpage>153</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Schaeffer</surname>
          </string-name>
          ,
          <source>A la recherche d'une musique concrète</source>
          , Éditions du Seuil, Paris,
          <year>1952</year>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Schaeffer</surname>
          </string-name>
          ,
          <source>Traité des objets musicaux</source>
          , Éditions du Seuil, Paris,
          <year>1966</year>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Smalley</surname>
          </string-name>
          ,
          <article-title>Spectromorphology: explaining sound-shapes</article-title>
          ,
          <source>Organised Sound 2:2</source>
          (
          <year>1997</year>
          ),
          <fpage>107</fpage>
          -
          <lpage>126</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.</given-names>
            <surname>Roads</surname>
          </string-name>
          , Microsound, MIT Press, Cambridge, Mass.,
          <year>2004</year>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wishart</surname>
          </string-name>
          ,
          <source>Audible Design</source>
          , Orpheus the Pantomime Ltd., York,
          <year>1994</year>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Di Scipio</surname>
          </string-name>
          ,
          <article-title>A Constructivist Gesture of Deconstruction. Sound as a Cognitive Medium</article-title>
          ,
          <source>Contemporary Music Review</source>
          ,
          <volume>33</volume>
          :
          <issue>1</issue>
          ,
          <fpage>87</fpage>
          -
          <lpage>102</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bierwisch</surname>
          </string-name>
          ,
          <article-title>Musik und Sprache: überlegungen zu ihrer Struktur und Funktionsweise</article-title>
          , Peters, Leipzig,
          <year>1979</year>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C.</given-names>
            <surname>Cadoz</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>M.W.</given-names>
            <surname>Wanderley</surname>
          </string-name>
          ,
          <article-title>Music-gesture</article-title>
          , in
          <source>Trends in Gestural Control of Music</source>
          , eds. M. Battier &amp; M.W. Wanderley, Ircam Centre Pompidou, Paris,
          <year>2000</year>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Leman</surname>
          </string-name>
          , Embodied Music Cognition and Mediation Technology, MIT Press, Cambridge, Mass.,
          <year>2007</year>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>R.I.</given-names>
            <surname>Godøy</surname>
          </string-name>
          et al.,
          <article-title>Classifying Music-Related Actions</article-title>
          ,
          <source>Proceedings of 12th International Conference on Music Perception and Cognition</source>
          , (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bevilacqua</surname>
          </string-name>
          et al.,
          <article-title>The Augmented String Quartet: Experiments and Gesture Following</article-title>
          ,
          <source>Journal of New Music Research 41:1</source>
          (
          <year>2012</year>
          ),
          <fpage>103</fpage>
          -
          <lpage>119</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>G.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <article-title>Too Many Notes: Computers, Complexity and Culture in Voyager</article-title>
          ,
          <source>Leonardo Music Journal</source>
          <volume>10</volume>
          (
          <year>2000</year>
          ),
          <fpage>33</fpage>
          -
          <lpage>39</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>R.</given-names>
            <surname>Rowe</surname>
          </string-name>
          ,
          <source>Interactive Music Systems</source>
          , MIT Press, Cambridge, Mass.,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] http://cycling74.com/</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] https://sites.google.com/site/speckledcomputing/cello2</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] https://nicolabaroni.com/artworks</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>