<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Hyperinstruments as interactive systems of music composition</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Nicola</forename><surname>Baroni</surname></persName>
							<affiliation>
								<orgName>Istituto di Alta Formazione Musicale, Conservatorio "Claudio Monteverdi", Bolzano</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Hyperinstruments as interactive systems of music composition</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">E136EB373F85E65AA2E383535CC6B585</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:36+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Scores as instruments</term>
					<term>gesture-based composition</term>
					<term>physical computing</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Form shaping has been a principal focus of music composition since the mid XX century, when classical musical structures and listening practices started to be questioned by the avant-gardes. An advanced and pioneering system of composition was represented by Xenakis's use of computers for elaborating scores from stochastic processes inspired by physical laws and complex mathematical behaviours; through non-linear mass distributions, analogous forms were produced in macro-dimensions (orchestra) and micro-sounds (electronics). Starting from the 80s, a further development of this approach was based on the interactive mapping of actual human gestures onto pitched and synthetic sound contours <ref type="bibr" target="#b0">[1]</ref>. More recently Godøy and Jensenius have been exploring cognitive and computational correspondences between human gestures and music, rooting their concept of music in the traditional electroacoustic idea of the sound object as a primary building block <ref type="bibr" target="#b1">[2]</ref>. A sound object is a gestural form-bearing perceptual unit, a fragment of concrete sound typically in the range of a few seconds, which can be seen as a structural counterpart of the more traditional element called the "musical note". The notion of gesture as a sensitive metaphor for the interpretation and analysis of music forms has become a consolidated topic over the last decades, blurring boundaries between score-based and electroacoustic composition. The current development of sensing systems, such as real-time sound analysis and motion tracking, is supplying factual means for research in the field of performance-based interactive music. Since their origin, the interactive behaviours of hyperinstruments have been implemented as a means of empowering performers to intentionally influence the electroacoustic outputs of score-based compositions through their performance gestures on stage <ref type="bibr" target="#b2">[3]</ref>. Starting from the notion of interactivity, we consider the potential of current sensing systems to be part of complex, digitally formalized compositional networks and processes, in the light of the current emergence of embodied cognition frameworks. This paper explores topics bridging the meanings of music composition and music gesture, presenting as a conclusion some hypotheses which support innovative systems of performance-based real-time digital composition implemented by the author.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>1.Composition and scores</head><p>The concept of a contribution by the performer to the compositional music process is an ancient topic, since traditionally music scores allow degrees of freedom to individual and even on-the-fly performance choices. Pre-classical scores mostly delineate frameworks organised as pitch/duration note-based "discourses", expanding in metric/harmonic sequences within defined macro-forms. The score, as a designed representation, needs to be completed by live ornamental and polyphonic contributions from the performer, who shares and knows the relative compositional technique. It should be noted that extra-European traditional practices mostly neglect to consider composition and performance as distinct roles, and in the case of written music documents we most often find collections of tunes, patterns, lexicons, symbolic associations, congruous behaviours to be mastered and "composed" by the "performer", who elaborates original expressions from sets of principles. The Werktreue idea of a score as an ideally whole connotative entity started to emerge within the Western Classical and Romantic era, shifting the role of the performer to the more constrained responsibility of a subjective accomplishment of the Work, taking into account a subset of implicit meanings, whose sonic realisation denotes an art of interpretation. In this way music scores can be seen as a full symbolic representation of the sounds required by the composer, in other words as the Text of the composition.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Recording technologies</head><p>The development of recording technologies during the past century appears to have caused a dual process: on the one hand dramatically increasing the need for a perfect and objective accordance of live performances with the written Classical score, but on the other hand offering the status of a corollary textuality to multiple recorded performances, often quite different from one another. Through recording we can objectively analyse different performance renderings of a same score, we can also extract and examine features of non-written compositions, and even textually evaluate free improvisations, since they are recorded on a support <ref type="bibr" target="#b3">[4]</ref>. This situation led to the rethinking of some terms of the debate about what composition is. In addition, recording technologies made it possible to analyse and formalise the most subtle sound morphologies, allowing timbre to be considered as a principal object of knowledge and compositional treatment. Timbre parameters started to be structurally relevant and no longer confined to the standardised and ancillary roles traditionally given by the Western tradition. In this context John Cage's claim about the impossibility of an exhaustive textuality of the music score, and the deduction that every score has clear degrees of indeterminacy, appears significant: the choice of which parameters should be more precisely defined inside a score is therefore a social habit, or an individual decision <ref type="bibr" target="#b4">[5]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Unconventional scores</head><p>The mid XX century witnessed wide-ranging experimentation with new scores, abandoning traditional note-oriented approaches and developing non-connotative features such as action notations (defining which instrumental gesture is to be performed, irrespective of the resulting sound), free-graphic approaches, timbre and process-oriented notations, verbal instructions, combinatorial systems and circuits.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.3">Interactivity</head><p>The persistence of traditional notation strategies now offers a multilayered landscape which, in the case of software composition, currently allows producing programs intrinsically intended both as representational and operative, in other words acting as scores and instruments at the same time <ref type="bibr" target="#b5">[6]</ref>. This assertion can be considered as a kernel of interactivity in composition. The radical thrust of conceiving composition as a combined action of textual machines treated as instruments was pioneered at the advent of electroacoustic music, through physical manipulations of recording tapes, variable voltage controls of mathematical rules actually synthesising sounds, and algorithmic systems of note composition through rule-based or data-driven combinatorial processes.</p><p>Interactivity is underlined by Horacio Vaggione's action approach to composition, escaping linear formalisations towards multi-syntactical strategies borrowed from object-oriented programming methods. In this perspective algorithms are not seen as abstractions allocating mechanisms towards a result to be directly listened to, but rather as processing tools producing their own rules and encapsulating the listening action of the composer as part of the operation <ref type="bibr" target="#b6">[7]</ref>. In fact the potential to exploit computation for analysis, symbolic representations (such as scores and rules) and sound synthesis, even in one single environment, currently allows networking, contextual and semantic behaviours previously unpredictable in terms of complexity. In this direction we could quote, among others, productions and research oriented to multi-agent ecosystem methods of real-time composition, inscribing human choices and environmental conditions as part of AI procedures <ref type="bibr" target="#b7">[8]</ref>.</p></div>
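<div xmlns="http://www.tei-c.org/ns/1.0"><p>As a minimal illustration of a program acting simultaneously as score and instrument, the following Python sketch (not drawn from the cited works; all names and rules are illustrative assumptions) declares a rule set that is at once a representation and, when executed, a performance, with a listening callback standing in for Vaggione's encapsulated listening action.</p><p><code><![CDATA[
# Illustrative sketch (not from the paper): a program acting at once as
# score (it declares rules) and instrument (running it produces events).
import random

RULES = {                      # the "score": a declarative representation
    "seed": [60],              # starting pitch (MIDI note number)
    "intervals": [-5, -2, 2, 5, 7],
    "durations": [0.25, 0.5, 1.0],
}

def perform(rules, length=16, listen=None):
    """Running the representation *is* the performance."""
    pitch = rules["seed"][-1]
    events = []
    for _ in range(length):
        pitch += random.choice(rules["intervals"])
        pitch = min(max(pitch, 36), 96)        # keep inside a playable range
        event = (pitch, random.choice(rules["durations"]))
        # Vaggione-like loop: a listening action is part of the operation
        # and may rewrite the rules while they are being executed.
        if listen is not None:
            rules = listen(event, rules)
        events.append(event)
    return events

def composer_ear(event, rules):
    # Hypothetical evaluation: after a high-register event, favour descent.
    if event[0] > 84:
        rules = {**rules, "intervals": [-7, -5, -2]}
    return rules

print(perform(RULES, listen=composer_ear))
]]></code></p></div>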
<div xmlns="http://www.tei-c.org/ns/1.0"><head>2.Sound and gesture</head><p>Electroacoustic music is characterised by the direct manipulation of sound on supports (recording tools, editors or software). In this way the so-called sound-based composition potentially allows bypassing the presence of a traditional performer and a symbolic representation (score), embedding sound synthesis, transformations, organisation, storage and diffusion inside a group of machines: sound can thus be directly shaped without any intermediate layer, by means of a chosen studio-machine acting as an instrument-support tool. In this way it becomes natural to create music derived from real-life sounds, thereby extending the concept of music timbre.</p><p>Traditional music theories were grounded on the concept of music notes, discrete chunks of "ideally pure" sounds, sharing a scalar space of frequencies (pitch) and durations, functionally organised through standardised or innovative macro-forms often relating to dance, poetry, mathematics, architecture, or rhetoric figures. In the last century, the further extension of the notion of music sound to all possible audible phenomena, of which traditional instrumental sounds are a special family, produced new contrasting theories, mostly developing Schaeffer's concept of Musique Concrète and its sound objects and morphologies.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Sound objects</head><p>Schaeffer's phenomenological approach to music explored the perceptual qualities of real-world sounds, creating an idea of composition based on sound fragments that exist in reality, considered as discrete and complete "sound objects", aiming to remove music from the idea of structured "sound abstractions" <ref type="bibr" target="#b8">[9]</ref>. The "sound object" is a fragment of recorded tape, or a continuous sound repetition through a closed groove (the so-called sillon fermé). Through repetition or de-contextualising manipulation the "sound object" is abstracted from its reality, becoming an object of music contemplation. In the age of analog technologies in the mid XX century, this extraction of sound objects was a physical action/gesture of composition through slice/paste strategies acting upon the actual recording support. The length of the sound object, broadly modelled on the archetype of the "note", shares with the note the potential to be treated in a phonetic fashion. Forcing the linguistic comparison, we might argue that the note is open to be seen as an arbitrary sound potentially part of a pseudo-logical music organisation, and in this sense many older theories and pedagogic approaches stress language-based metaphors describing music forms as phrases, periods and macro-structural abstractions, generally intended as devoid of arbitrary meanings. Differently, a sound object retains its concrete overall shape: it is a small perceptual pattern, a unit of an audible gesture, in a sense a "timbre" block. The last work of Schaeffer represents a systematic effort to organise a lexicon of typo-morphologies of sound objects, a Solfège based on the perceptual surface characters of these catalogued sound units: in other words on their action/perceptual content. Principal categories of the inventory relate to iteration, continuity, grain, impact, saturation, allure, profile and internal dynamic <ref type="bibr" target="#b9">[10]</ref>.</p><p>Among the multiple productions and theories developed after Schaeffer, Spectromorphology is currently considered as a main frame and subject of electroacoustic compositional reflection. The accent is placed on the time and spatial features of sound in relation to the macro-evolution and dynamic consistencies of the composed sound, not confining the analysis to object typologies, but showing an event-based constitution of the virtual-sound world of electroacoustic music. Framed by the main categories of gesture and texture, sound movements are catalogued in terms of rooted/floating qualities, trajectories, propagations, multi-dimensional and behavioural aspects of sound organisation <ref type="bibr" target="#b10">[11]</ref>.</p></div>
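<div xmlns="http://www.tei-c.org/ns/1.0"><p>By way of illustration only, the Schaefferian categories listed above could be encoded as a feature record for catalogued sound units; the following Python sketch is a hypothetical encoding, and its field names and value ranges are assumptions of this example rather than Schaeffer's own definitions.</p><p><code><![CDATA[
# Hypothetical encoding of a catalogued sound unit: each field mirrors one
# of the Schaefferian perceptual categories named above (iteration,
# continuity, grain, impact, saturation, allure, profile, internal dynamic).
from dataclasses import dataclass

@dataclass
class SoundObject:
    duration_s: float          # typically a few seconds, like a "note"
    iteration: float           # 0 = single event, 1 = dense repetition
    continuity: float          # 0 = impulsive, 1 = fully sustained
    grain: float               # micro-texture roughness, 0..1
    impact: bool               # marked attack transient present?
    saturation: float          # spectral density, 0..1
    allure: float              # characteristic undulation/vibrato rate, Hz
    profile: str               # overall envelope: "rising"|"falling"|"arch"
    internal_dynamic: float    # amount of inner evolution, 0..1

# A bowed crescendo and a struck resonance, described morphologically:
bowed = SoundObject(3.0, 0.0, 0.9, 0.2, False, 0.5, 5.5, "rising", 0.7)
struck = SoundObject(2.0, 0.0, 0.3, 0.1, True, 0.8, 0.0, "falling", 0.4)
print(bowed, struck, sep="\n")
]]></code></p></div>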
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Time Scales</head><p>On the other hand, starting from Stockhausen's pioneering research, and taking into account the developments of sound science and digital sound processing research, music sound categories can be unified in terms of time-perception inside the so-called theory of the Time Scales of Music <ref type="bibr" target="#b11">[12]</ref>. In this sense Macro, Meso and Sound Object time scales appear to be falling within a time range consciously detectable and analysable by humans, and traditionally scored and represented. Sound objects share a similar time scale (a few seconds) with respect to the traditional music notes (approximately from 200 milliseconds until 3-4 seconds), while macro and meso levels can be easily reabsorbed into the terms of traditional macro and intermediate music forms. Micro time scales can instead describe and compute events and manipulations difficult to be logically managed prior to the advent of digital means.</p><p>The fastest events perceivable and producible by humans cannot be below a threshold of 100 milliseconds ca., and the human spontaneous tendency is to group them in patterns when they are very quick. Below this threshold we find a blurring zone of roughness and reverberation, extremely important to detect the character of sound attacks and dynamics, linked to a global unconscious identification of timbre and emotional qualities of the sound. The time scale roughly between 1 and 20 milliseconds pertains to the perception of pitch (from 50 to 1000 Hz). A faster timescale, from less than 1 millisecond until a few milliseconds, relates to filtering, digital effects, and interestingly to the real perception of timbre qualities through unconscious auditory fusion <ref type="bibr" target="#b12">[13]</ref>.</p></div>
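<div xmlns="http://www.tei-c.org/ns/1.0"><p>The thresholds quoted above can be collected into a simple classifier. In the following Python sketch the 100 ms, 20 ms and 1 ms boundaries come from the text (a 1-20 ms period corresponds to a 50-1000 Hz pitch), while the exact meso/macro cut-offs are illustrative assumptions.</p><p><code><![CDATA[
# Sketch: classify a time span into the perceptual zones described above.
# The 100 ms, 20 ms and 1 ms thresholds come from the text; the exact
# meso/macro boundaries are assumptions of this example.
def time_scale(seconds: float) -> str:
    if seconds >= 60.0:
        return "macro (whole forms)"
    if seconds >= 4.0:
        return "meso (phrases, intermediate forms)"
    if seconds >= 0.2:
        return "sound object / note (ca. 200 ms to 3-4 s)"
    if seconds >= 0.1:
        return "fastest performable events (ca. 100 ms)"
    if seconds >= 0.02:
        return "roughness/reverberation zone (20-100 ms)"
    if seconds >= 0.001:
        # a repetition period in this zone is heard as pitch
        return "pitch period zone: %.0f Hz" % (1.0 / seconds)
    return "micro: filtering, effects, unconscious auditory fusion"

for t in (120.0, 2.5, 0.05, 0.01):   # 10 ms period -> 100 Hz pitch
    print(t, "->", time_scale(t))
]]></code></p></div>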
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Digital Composition</head><p>The software potential to declare, compute and process heterogeneous functions proceeding through diversified time scales obviously represents a huge advantage. Musical programming and formal and/or graphic representations help to empower complex kinds of analysis and to frame consistent music structures, which need to be "performed" by the system (automatically or by human actions) in order to generate a composition. For this specific purpose, it seems unimportant whether the "compositional performance" happens in real-time (on stage) as opposed to off-line and step-by-step (in studio), or whether the result is intended for producing a notated score rather than for directly shaping sounds. The relevant fact is that every kind of Computer Aided Composition involves software enacting processes implied by a final composition, generally too complex to be fully controlled by a human mind, and requiring a human response (or evaluation/choice) in front of non-deterministic outputs resulting from the initial conditions set by the composer: obviously algorithms are a huge collection of tools, not the composition. The focus on processes and interactive design whose output cannot be fully foreseen shows a non-classical attitude, viewing the essence of the composition as the living dialectic between diverse entities and agents <ref type="bibr" target="#b13">[14]</ref>: composers can be interested in showing the autonomous results of the composed pre-conditions, or be part of the system in order to live-constrain it, maybe adding further layers.</p><p>If the result has instead to be a fixed score, composers can in any case choose the most successful final work from among different outputs generated by non-deterministic systems, or exploit computers only for local problem solving.</p></div>
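<div xmlns="http://www.tei-c.org/ns/1.0"><p>A minimal sketch of this generate-and-choose loop follows, assuming a toy stochastic generator and a scoring function that merely stands in for the composer's evaluation; both are illustrative inventions, not a method described in the cited literature.</p><p><code><![CDATA[
# Sketch of the generate/choose loop described above: algorithms propose
# non-deterministic outputs from the composer's initial conditions, and a
# human-like selection step picks the preferred result.
import random

def generate(seed: int, length: int = 12) -> list:
    rng = random.Random(seed)       # initial conditions set by the composer
    line, pitch = [], 60
    for _ in range(length):
        pitch += rng.choice([-7, -3, -1, 1, 3, 7])
        line.append(pitch)
    return line

def composer_choice(candidate: list) -> float:
    # Hypothetical preference: wide range, but ending near the start.
    return (max(candidate) - min(candidate)) - abs(candidate[-1] - 60)

candidates = [generate(seed) for seed in range(8)]  # machine proposals
best = max(candidates, key=composer_choice)         # human evaluation step
print(best)
]]></code></p></div>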
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Notions of Gesture</head><p>If traditional scores depend on the performance gesture (at least imagined, in the case of an expert) in order to be realised, and are probably the final fixed result of previous instrumental/conceptual gestures, new technologies appear to have more intimately embedded gestural approaches to composition, as previously mentioned while discussing sound objects and spectromorphology. If gesture appears as a native rationale in the field of sound-based composition, since a "concrete sound" is intrinsically a gesture, we notice a growing trend to deploy the category of gesture also in score-based, even traditional, music. Bierwisch defined music as a gestural form because of its iconic and combinatorial status, dynamically oriented to shape surfaces, contours and irregularities, navigating through structures, in opposition to language, which is essentially a logic form <ref type="bibr" target="#b14">[15]</ref>.</p><p>Gestures denote non-verbal transfers of information through body movements, not necessarily conveying conventional meaning, and often emphasising emotion and expression. An interesting isomorphism linking gesture and music regards the joining of physical motions with human intentions, through a rhetorical attitude calling for feedback <ref type="bibr" target="#b15">[16]</ref>. Sound and gesture share a physical, dynamic, spatial and semiotic attitude, and in the case of sound-producing (instrumental) gestures they manifest a joint intention, semiosis and embodiment. In this sense the mere action upon a controller cannot be defined as a gesture. But the trajectories of notes on a score, just as the direct sounds on a support, are indeed considered as gestures, relying on their physical, semiotic or perceptual consistencies.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3.Interactive Music</head><p>Interactive music needs a sensing input coming from the real world, and its factual status relates to digital processes. Sensing is a kind of physical computing, which exploits audio input (microphones or pickups) and/or motion tracking, mainly in the form of optical and inertial systems, and can also be integrated by force detectors and potentially any other means of body and environment monitoring systems. What happens in the world flows as a vector of data acting as a collection of variables in real-time, depending on the quality of the analysis, the kind of features and trajectories chosen to be extracted, and the types of interaction wanted by the composer. In other words the complexity of this hermeneutic step relies on a transparent transformation of low-level physical quantities into mid/high levels of meaningful features.</p><p>In the case of audio input treated as a data collector, interactive artists exploit objects of analysis relating to acoustic knowledge and music theories. Motion tracking often involves algebra, geometry and kinaesthetic descriptors, taking into account the current consolidating tendency towards a search for corporeal high-level features, often relevant to embodied cognition theories. In this sense the body is seen as a mediator between matter and mind, and the search moves to defining the relations between corporeal articulations (countable patterns of movement) and subjective intentions like non-verbal messages, socially shared techniques of movement, functional cues and behavioural resonances <ref type="bibr" target="#b16">[17]</ref>. A subset of analysis linking the Schaefferian sound typo-morphologies to the functional segmentation of music-related actions, such as sound-producing, excitatory, modulatory or sound-accompanying actions, can be found in the field of the so-called Music Retrieval Ontologies <ref type="bibr" target="#b17">[18]</ref>. Machine learning systems are sometimes applied for the detection of complex gestures such as bow-movements <ref type="bibr" target="#b18">[19]</ref>. Music Information Retrieval is mostly concerned with the implementation of objects able to extract information from the raw audio signal, processing its spectrum and the iterative patterns of amplitude or brightness contours, in order to return significant perceptual features through complex reverse engineering, giving rise to high-level music descriptors.</p></div>
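<div xmlns="http://www.tei-c.org/ns/1.0"><p>A minimal Python sketch of this hermeneutic step, assuming NumPy and two standard low-level descriptors (RMS amplitude and the spectral centroid as a brightness proxy), shows how raw audio frames can be reduced to a small feature vector; the framing parameters are arbitrary choices for the example, not those of any particular system cited above.</p><p><code><![CDATA[
# Sketch of the low-level-to-feature step: a raw audio frame becomes a
# small vector of descriptors. RMS and spectral centroid are standard
# measures; frame size and sample rate are arbitrary example choices.
import numpy as np

def frame_features(frame: np.ndarray, sr: int = 44100) -> dict:
    rms = float(np.sqrt(np.mean(frame ** 2)))            # loudness proxy
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame.size)))
    freqs = np.fft.rfftfreq(frame.size, 1.0 / sr)
    centroid = float((freqs * spectrum).sum() / (spectrum.sum() + 1e-12))
    return {"rms": rms, "brightness_hz": centroid}       # mid-level features

# One 1024-sample frame of a synthetic 440 Hz tone as a stand-in input:
t = np.arange(1024) / 44100.0
print(frame_features(np.sin(2 * np.pi * 440 * t)))
]]></code></p></div>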
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Composition and instrument</head><p>Interactivity allows a live dialogue between performance on stage and electronics, allowing a consistency partially lost when the Live Electronics are controlled by off-stage machines, and even by on-stage controllers.</p><p>If the performer turns a switch on the electric guitar we will hear the sound effects changing, but if the sound effects are variably dependent on the kinds of patterns, timbres or intensities currently played by the guitar, we notice an increase in complexity and expectancy. It is self-evident that interactive systems hybridise the concepts of instrument, performance and composition. Since performance influences the electronic sound, playing an instrument involves also playing the electronics and the final relation between both: in other words, "live-composing" a multilayered structure. In this case software composition must be procedural, modular and reactive (in a sense "performative"). Software interactive design shows overlapping aspects between the categories of instrument and composition. Interactive music is therefore often inscribed in a pre-composed score; in this way the spread of the interconnections becomes local and is absorbed by the planning responsibility of the composer. The composition can also leave small or large windows of free exploration to the performer, offering more elastic results. Many systems are instead based on improvisation, opening a broad HMI dialogue whose responsibility is shared by the performer and the composer-programmer. Radical experimental approaches involve one single performer prototyping his/her interactive languages and exploring new music boundaries <ref type="bibr" target="#b19">[20]</ref>. It is well known that interactive systems can also allow the audience to gain channels of influence upon a live performance. A taxonomy of interactivity can be built on the continuum along the range of complexity of the systems. When just a few linearly shaped parameters drive the variable machine response the system is defined as instrument-like, while greater complexity relates to a more compositional response <ref type="bibr" target="#b20">[21]</ref>. Originally complexity was linked to an idea of unpredictability, sometimes useful for increasing human creativity by enhancing the sense that a machine interacts instead of simply reacting; the improvement in sensing tools and high-level descriptors obviously contributes to the perceptiveness of such systems. We further note the possibility to discriminate between note-based and sound-based approaches, the latter being more involved in timbre and spatial electronic treatments. Note-based interactive systems, originally built upon the MIDI protocol, were able to manage in real-time traditional note-oriented "languages", allowing the implementation of HMI systems dialoguing in terms of music symbols and structures. Current software environments easily mix and swap both approaches.</p><p>Hyperinstruments (also called Digitally Augmented Instruments) are a special family of interactive systems implementing an acoustic-digital unity focused on the typical performance actions of traditional instruments. Through features extracted by sound analysis and/or motion tracking upon the sound-producing gestures, and a net of digital mappings, they follow a "chamber music" ideal of continuity from performance to digital composition <ref type="bibr" target="#b2">[3]</ref>.</p></div>
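<div xmlns="http://www.tei-c.org/ns/1.0"><p>The two poles of this taxonomy can be caricatured in code: an instrument-like mapping drives one parameter linearly, while a composition-like response keeps state and selects among behaviours. The following Python sketch is illustrative only; its parameter names and thresholds are assumptions, not taken from the cited taxonomy.</p><p><code><![CDATA[
# Sketch of the taxonomy endpoints named above: an instrument-like mapping
# responds linearly to one control value, while a composition-like response
# keeps state and decides among behaviours. Both mappings are illustrative.
def instrument_like(loudness: float) -> dict:
    # One low-level feature linearly drives one effect parameter.
    return {"reverb_mix": 0.2 + 0.6 * loudness}

class CompositionLike:
    def __init__(self):
        self.history = []

    def respond(self, loudness: float, brightness_hz: float) -> dict:
        self.history.append(loudness)
        rising = len(self.history) > 4 and self.history[-1] > self.history[-4]
        if rising and brightness_hz > 2000:
            # Context-dependent behaviour: counter the crescendo.
            return {"layer": "sparse_echoes", "transpose": -12}
        return {"layer": "drone", "transpose": 0}

print(instrument_like(0.5))
engine = CompositionLike()
for level in (0.1, 0.2, 0.4, 0.6, 0.8):
    print(engine.respond(level, 2500.0))
]]></code></p></div>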
<div xmlns="http://www.tei-c.org/ns/1.0"><head>4.Gestural systems of real-time composition</head><p>Hyperinstruments, since they do not physically modify traditional instruments, rely on acknowledged techniques and expressive rhetoric patterns. The idea of navigating within virtual worlds is currently quite common, often at the cost of losing continuity with the real world. Augmented Reality, as a true world filled with data, needs gestures (non-verbal transfers of meaning) rather than controllers. The goal of my systems is an intimate sound re-appropriation of symbolic score-machine flows.</p><p>My reference software is the interactive music environment Max/MSP [22], through which I collect networks of sensing data coming from a minimal equipment of audio pickups and/or inertial motion units [23], whose resulting features are analysed through specialised libraries. Compositions are themes (narratives) upon which the performer is requested to operate a search, to make choices, to explore the sounds coming from the electronics, elaborating individual strategies. External verbal scores tell the performers how to influence the overall sound result and how to guide the system, which variably develops in part automatically and in part as a consequence of the performance. The laptop screen acts as a variable animated score, proposing and responding (sometimes interactively generating common notation, as a result of the performance gestures, that has to be sight-performed in a loop). In the case of an ensemble the performers exchange reciprocal messages and interactive scores, and elaborate on-the-fly pre-determined collective goals. The performers can gain a detailed knowledge of the interaction through rehearsal, but they can also interact loosely and intuitively, discovering step-by-step. The verbal scores inform the performers of the means to interact with, and they can monitor the composition behaviour by listening and through the visual screen. Depending on each single composition, the performers can communicate and interact through note intervals (onset/pitch detection), instrumental timbres, rhythmic patterns, contrasting music sequences (in this case recognised by the system through machine learning), or pitch ranges. In the case of motion tracking, the best results have been obtained by bowing-style recognition and sound-accompanying gestures. Performers learn how to expand their gestures in order to integrate their acoustic result with the system's behaviour and sound as a single consistency. Each interaction is a special software instance focusing on specific techniques, sound/event search, and performance problem solving according to the benchmark "fiction" trajectory <ref type="bibr">[24]</ref>.</p><p>The systems are intended as gesture-based compositions since the non-linear nodes of the local mappings are constrained by input gestures, which are physical signals, intentions, and performance techniques mediated by software symbolic actuators. The input gestures (timbres, note contours, sound patterns) are intimately complex, and the performer has to understand how the machine selects their features and modulates the "socially" goal-oriented tasks. The semiosis between human and system (and between humans in the case of an ensemble) operates through scores and representations. Scores are generated as gestural resonances, local messages and autonomy/heteronomy negotiations displaying the specific narrative, as sketched below.</p>
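<p>The systems themselves are implemented in Max/MSP; purely as an analogy, the following Python sketch outlines a mapping node constrained by a recognised input gesture, with a nearest-template comparison standing in for the machine learning stage mentioned above. Every name in it is hypothetical, and it is not the author's implementation.</p><p><code><![CDATA[
# Analogy only: a non-linear mapping node whose response is constrained by
# the recognised class of the input gesture. The nearest-template match
# stands in for the machine learning stage; all names are illustrative.
TEMPLATES = {
    "bowed_swell":  [0.1, 0.3, 0.6, 0.9],   # idealised loudness contours
    "short_attack": [0.9, 0.4, 0.2, 0.1],
}

def recognise(contour):
    def distance(name):
        return sum(abs(a - b) for a, b in zip(contour, TEMPLATES[name]))
    return min(TEMPLATES, key=distance)

def mapping_node(contour, pitch_class):
    gesture = recognise(contour)
    if gesture == "bowed_swell":
        # Resonant response: the system prolongs and harmonises the gesture.
        return {"action": "sustain_layer", "chord_root": pitch_class}
    # Percussive response: generate a rhythmic score fragment to sight-read.
    return {"action": "rhythmic_score", "density": max(contour)}

print(mapping_node([0.2, 0.4, 0.7, 0.9], pitch_class=2))
print(mapping_node([0.8, 0.5, 0.2, 0.1], pitch_class=7))
]]></code></p>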
<p>Performer and pre-programmed system are thus treated as agents of a single environment for shared strategies of composition. Improvisation is allowed as an emergent strategy of contextual adaptivity, but performers need to predetermine fixed individual strategies, not in order to gain control (since the system is self-regulating) but in order to gain a maximum of meaning.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1.</head><label>1</label><figDesc>Figure 1. Example of XVIII century harmonisation and polyphony invention. F.E. Niedt</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2.</head><label>2</label><figDesc>Figure 2. Mix of common and action notations. K. Penderecki, Capriccio per Siegfried Palm, Schott, Mainz, 1968</figDesc><graphic coords="2,122.76,566.16,92.40,110.88" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3.</head><label>3</label><figDesc>Figure 3. Graphic and electroacoustic score. L. Berio, Thema (Omaggio a Joyce), Suvini Zerboni, Milano, 1958</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4.</head><label>4</label><figDesc>Figure 4. Typo-morphologies of sound objects<ref type="bibr" target="#b9">[10]</ref></figDesc><graphic coords="4,121.68,490.44,93.48,206.04" type="bitmap" /><graphic coords="4,215.16,490.44,215.16,174.96" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5.</head><label>5</label><figDesc>Figure 5. Gestural dynamics of spectromorphology events<ref type="bibr" target="#b10">[11]</ref></figDesc><graphic coords="5,157.20,263.52,57.96,101.16" type="bitmap" /><graphic coords="5,215.16,263.52,215.16,101.16" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 6 .</head><label>6</label><figDesc>Figure 6.Plot of gesture segmentations<ref type="bibr" target="#b18">[19]</ref> </figDesc><graphic coords="7,215.16,451.20,206.52,182.40" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Formalized Music</title>
		<author>
			<persName><forename type="first">I</forename><surname>Xenakis</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1992">1992</date>
			<publisher>Pendragon Press</publisher>
			<pubPlace>New York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">I</forename><surname>Godøy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Leman</surname></persName>
		</author>
		<title level="m">Musical Gestures:Sound, Movements and Meaning</title>
				<meeting><address><addrLine>NewYork</addrLine></address></meeting>
		<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Machover</surname></persName>
		</author>
		<ptr target="http://opera.media.mit.edu/publications/" />
		<title level="m">Hyperinstruments. A progress report 1987-1991</title>
				<imprint>
			<date type="published" when="1992-07-17">1992. 7/17</date>
		</imprint>
		<respStmt>
			<orgName>MIT Media Laboratory</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">I processi improvvisativi nella musica</title>
		<author>
			<persName><forename type="first">V</forename><surname>Caporaletti</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
			<publisher>LMI</publisher>
			<pubPlace>Lucca</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Silence</title>
		<author>
			<persName><forename type="first">J</forename><surname>Cage</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1961">1961</date>
			<publisher>Wesleyan University Press Paperback</publisher>
			<pubPlace>Middletown</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Introducing Composed Instruments, Technical and Musicological Implications</title>
		<author>
			<persName><forename type="first">N</forename><surname>Schnell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Battier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2002 Conference on New Instruments for Musical Expression</title>
				<meeting>the 2002 Conference on New Instruments for Musical Expression</meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Some ontological remarks about music composition processes</title>
		<author>
			<persName><forename type="first">H</forename><surname>Vaggione</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer Music Journal</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="54" to="61" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Real-time Composition as Performance Ecosystem</title>
		<author>
			<persName><forename type="first">A</forename><surname>Eigenfeldt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Organised sound</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="143" to="153" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">A la recherche d&apos;une musique concrète</title>
		<author>
			<persName><forename type="first">P</forename><surname>Schaeffer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1952">1952</date>
			<publisher>Éditions du Seuil</publisher>
			<pubPlace>Paris</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Schaeffer</surname></persName>
		</author>
		<title level="m">Traité des objets musicaux</title>
				<meeting><address><addrLine>Paris</addrLine></address></meeting>
		<imprint>
			<publisher>Éditions du Seuil</publisher>
			<date type="published" when="1966">1966</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Spectromorphology: explaining sound-shapes</title>
		<author>
			<persName><forename type="first">D</forename><surname>Smalley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Organised sound</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="107" to="126" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Roads</surname></persName>
		</author>
		<title level="m">Microsound</title>
				<meeting><address><addrLine>Cambridge, Mass</addrLine></address></meeting>
		<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Audible design, Orpheus The Pantomime Ltd</title>
		<author>
			<persName><forename type="first">T</forename><surname>Whishart</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1994">1994</date>
			<pubPlace>York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A Constructivist Gesture of Deconstruction. Sound as a Cognitive Medium</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Di</forename><surname>Scipio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Contemporary Music Review</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="87" to="102" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Bierwisch</surname></persName>
		</author>
		<title level="m">Musik und Sprache: überlegungen zu ihrer Struktur und Funktionsweise</title>
				<meeting><address><addrLine>Leipzig</addrLine></address></meeting>
		<imprint>
			<publisher>Peters</publisher>
			<date type="published" when="1979">1979</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Music-gesture</title>
		<author>
			<persName><forename type="first">C</forename><surname>Cadoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Wanderley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Trends in gestural control of music</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Battier</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Wanderley</surname></persName>
		</editor>
		<meeting><address><addrLine>Paris</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2000">2000</date>
		</imprint>
		<respStmt>
			<orgName>Ircam Centre Pompidou</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Embodied Music Cognition and Mediation Technology</title>
		<author>
			<persName><forename type="first">M</forename><surname>Leman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>MIT Press</publisher>
			<pubPlace>Cambridge, Mass</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Classifying Music-Related Actions</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">I</forename><surname>Godøy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 12th International Conference on Music Perception and Cognition</title>
				<meeting>12th International Conference on Music Perception and Cognition</meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">The Augmented String Quartet: Experiments and Gesture Following</title>
		<author>
			<persName><forename type="first">F</forename><surname>Bevilacqua</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of new Music research</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="103" to="119" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Too Many Notes: Computers, Complexity and Culture in Voyager</title>
		<author>
			<persName><forename type="first">G</forename><surname>Lewis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Music Journal</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="33" to="39" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Interactive Music Systems</title>
		<author>
			<persName><forename type="first">R</forename><surname>Rowe</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1993">1993</date>
			<publisher>MIT Press</publisher>
			<pubPlace>Cambridge, Mass</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
