<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Overviewing a Field of Self-Organising Music Interfaces: Autonomous, Distributed, Environmentally Aware, Feedback Systems</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Phivos-Angelos Kollias, CICM – Centre de recherche en Informatique et Création Musicale, Esthétique, Musicologie, Danse et Création Musicale, Université de Paris VIII, Paris</institution>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <abstract>
        <p>This paper aims to identify and discuss the field of self-organising music: an emerging field based on different forms of self-organising music interfaces, that is to say ‘intelligent’ sound/music systems characterised, among other things, by autonomy, distributed/decentralised feedback processes and environmental awareness. A music field based on systems-oriented concepts (cybernetics, general systems theory, complexity) and which is formed spontaneously by individual cases of composers-researchers with unique yet converging approaches. We describe the general context of self-organising music and present different cases of composers-researchers who deal with the subject from both a technical and a theoretical perspective. We conclude the paper by suggesting the search for a system-oriented shared musical language in order to broaden and evolve the field's musical thought.</p>
      </abstract>
      <kwd-group>
        <kwd>self-organising music</kwd>
        <kwd>systems-oriented music</kwd>
        <kwd>feedback instruments</kwd>
        <kwd>audio feedback systems</kwd>
        <kwd>generative audio systems</kwd>
        <kwd>autonomous music agents</kwd>
        <kwd>artificial music intelligence</kwd>
        <kwd>autonomous instruments</kwd>
        <kwd>feature-feedback systems</kwd>
        <kwd>adaptive synthesis</kwd>
        <kwd>audible eco-systemic interface</kwd>
        <kwd>eco-composition</kwd>
        <kwd>performance ecosystems</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>A music field based on systems-oriented concepts
(cybernetics, general systems theory, complexity) and which
is formed spontaneously by individual cases of
composers-researchers with unique yet converging approaches. A
music field where common concepts such as
self-organisation, emergence, environment and feedback are
applied at different levels, technically or metaphorically. We
aim to connect the dots among individual cases which are fed
conceptually and technically by the same systems-oriented
context, and which result in very similar technological
means.</p>
      <p>By investigating different cases based on similar approaches
to intelligent music interfaces, our aim is to outline the
existence of a common ground; a common ground mainly
shaped by shared technological characteristics which,
consequently, may have aesthetic consequences and
implications.</p>
      <p>Our investigation concerns cases that contribute to the
emerging field of self-organising music with some form of
originality – through an active model or some suggested
advancement. Furthermore, we are interested in approaches
where the technological domain is tightly interconnected
with the compositional material and the conceptual/aesthetic
principles in use. Our investigation is not interested in cases
that produce music through self-organising algorithms
acquired from other researchers, where the algorithms are
used as ‘found objects’ without knowledge of their
conceptual origins.1 We focus on outlining some
representative cases in order to establish a common ground
of self-organising music. The collection of approaches we
expose, even if far from exhaustive, intends to be
representative.</p>
      <p>1 We have to clarify that we do not consider music created
by algorithms/instruments designed by others than the
composer himself/herself less important; nor do we believe
that composers without explicit knowledge of the
algorithms/instruments used create less significant music.
On the contrary, we can imagine several works of great
originality and artistic value expressed by
algorithms/instruments designed by others or without
detailed knowledge of the principles of the
algorithms'/instruments' design – for instance, new works
for orchestra performed by traditional instruments.</p>
      <p>But can we really talk about a consistent music field of
‘self-organising music’? In other words, do the similarities
and convergences among composers allow us to speak of a
musical movement? If so, what are the common
characteristics among the composers-researchers that form a
music movement of this kind? Then again, can we talk about
an aesthetic movement, or are these just conceptual and
technical coincidences that cannot legitimise a general
classification into a movement? Or is it maybe our natural
tendency to look for patterns everywhere that drives us to
project meaning onto something amorphous and arbitrary –
as if we were looking for recognisable patterns in the night
sky by making links between stars? And if we can talk about
a self-organising music, what are these similarities and
convergences between the different composers? What are the
means of expression and the language of this musical
stream? What are its aesthetic characteristics and what are
its limits?</p>
      <p>
        SELF-ORGANISING MUSIC INTERFACES
We can describe self-organising music interfaces, in general
terms, as interfaces composed of generative music processes
directly influenced by their sonic environment, where the
sonic behaviour emerges as a ‘complex adaptive
system’ [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], resulting from numerous interactions at a basic
organisational level.
      </p>
      <p>
        Dynamically controlled audio feedback is an elementary yet
crucial form of self-organisation, as it is found at the basic
organisational level: sound organises sound itself, i.e. sound
self-organises. For that reason, controlled audio feedback is
the most common feature of self-organising approaches.
This is why denominations based on the concept of feedback
are relevant enough to describe our field and give the
concept of feedback the central importance it deserves.
Accordingly, the use of the term feedback is common: audio
feedback systems [
        <xref ref-type="bibr" rid="ref11 ref17">11, 17</xref>
        ], feature-feedback systems [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] or feedback
instruments [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. We are dealing with music interfaces
including at least one feedback function as a vital structural
part. It is important to clarify that the feedback function
should play a vital role for the entire use of the system,
without which the whole system would not be functional.2
The feedback function is not only important for the audio
domain; it is also used as a control signal, giving the
possibility to observe and guide other processes that are
mapped to it. Consequently, in this approach the
composer/sound artist, instead of working only in the audio
domain with DSP, is also ‘composing the
interactions’ with ‘control signal processing’ (CSP) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Although we recognise the central importance of feedback’s
role, we prefer the term ‘self-organising’. That is because the
term ‘feedback’ has been so broadly overused that it no
longer suggests any particular epistemology. Instead,
‘self-organising’ is more specific and is clearly linked to the
systemic epistemology.
      </p>
      <p>2 A counterexample would be a feedback function observing the
sonic environment in order to preserve energy by switching off the
whole system – something important in terms of energy efficiency,
but which does not actively influence the sonic result.</p>
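      <p>To make the idea of sound organising itself concrete, here is a minimal sketch in Python – our own illustration, not code from any of the systems cited: a saturating one-sample feedback loop whose gain is regulated by the loop's own output level, so that an initially unstable loop settles into a bounded, self-sustained state. All constants (target level, adaptation rates, the tiny excitation) are arbitrary choices for the demonstration.

```python
import math

def self_limiting_feedback(n_samples, target_rms=0.3, alpha=0.01, leak=0.01):
    """A one-sample feedback loop that regulates its own loop gain.

    The output level (a leaky power estimate) feeds back negatively into
    the gain, so the loop neither dies out nor blows up: sound, in this
    toy sense, organises sound.
    """
    out = []
    y = 0.0      # previous output sample (the feedback path)
    gain = 2.0   # deliberately unstable initial loop gain
    power = 0.0  # running estimate of output power
    for n in range(n_samples):
        excitation = 0.001 * math.sin(0.0628 * n)   # tiny seed signal
        y = math.tanh(gain * y + excitation)        # saturating feedback node
        power = (1 - leak) * power + leak * (y * y)
        gain += alpha * (target_rms ** 2 - power)   # level regulates the gain
        out.append(y)
    return out, gain

samples, final_gain = self_limiting_feedback(20000)
```

After the transient, the loop settles near the target level with a loop gain just above unity – the regulation itself, not the designer, finds that value.
</p>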
      <p>
        We are interested in the self-organising work from a
compositional perspective. However, if a work is primarily
created in order to be listened to,3 the listener’s perspective
is of equal importance: the self-organising work as a process
of the listener’s cognition; in other terms, the perceptual
manifestation of the work as a self-organising process
between the listener's active listening and sound [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
Here, we aim to outline the field around the concept of a
self-organising music by seeking technological as well as
conceptual similarities and convergences among the
approaches of certain composers-researchers that we believe
to be representative. We also present some surveys dealing
with the subject in similar ways yet under different
denominators.
We have previously suggested an elementary schematic
model of self-organising music [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] (Figure 1). Based
on second-order cybernetics [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], the model describes a
system as a feedback loop with two inputs: the goal-input
and the perturbations-input. Considering the system as an
organised whole, the model describes the entire system as an
emergent function of a feedback loop; a system emerging
from individual interacting feedback functions. The model
describes the whole system’s emerging function; but it can
also describe any organisational level at which
self-organising processes take place, regardless of their
temporal scale.
Each system’s goal can be determined and altered (statically
or dynamically) by an external user. Alternatively, in more
advanced systems, the goal can be self-determined and
self-regulated by the system. In the case of a self-determined
goal, we are talking about a second-order system – in other
terms, a learning system [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] – capable of changing
the way in which it reacts with the environment.
      </p>
      <p>3 Consider the case of music composed purely for the pleasure
of the composing process per se. Even in this case, music is
transmitted and perceived through the sound medium. Therefore,
even if the music will never be heard by an auditor, its creator will
be its unique auditor.</p>
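      <p>The schematic model can also be sketched numerically. The following is a hedged toy reading of the loop (goal-input, perturbations-input, corrective feedback), not the published algorithm; the gain k and the perturbation sequence are arbitrary choices for the illustration:

```python
def run_system(goal, perturbations, k=0.2):
    """One feedback unit: each step compares the observed state with the
    goal-input, applies a corrective action (gain k), and receives an
    external perturbation on its second input."""
    state = 0.0
    trace = []
    for p in perturbations:
        error = goal - state       # observation against the goal-input
        state += k * error + p     # correction plus perturbation-input
        trace.append(state)
    return trace

# 600 steps of a repeating disturbance pattern: the loop holds its state
# near the goal despite never 'knowing' the perturbations in advance.
trace = run_system(goal=1.0, perturbations=[0.05, -0.1, 0.0] * 200)
```

In a second-order (learning) system, as noted above, the goal itself would in turn be adjusted by a further feedback loop observing this one.
</p>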
      <p>
        We have previously described the self-organising work as
manifesting as an emergent complex, resulting from the
interactions between some given structures and a certain
performance/installation context; interactions which are
defined by a model [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In self-organising music, the
element of autonomy offers a certain vitality to the work, an
expressive spontaneity and a direct communication among
the real-time sound production of the work, the acoustic
space and the participant-listeners. The work’s autonomy
may cause the composer to relinquish a great degree of direct
control over the end result. Nevertheless, we have suggested
that it is possible to create a type of intelligent music
interface where a desirable series of behavioural states can
be provoked each time; a series of states which will be
similar even when the circumstances change [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In this
approach, a user/performer listens and changes behavioural
states accordingly; meanwhile, the self-organising music
system responds by continuously adapting to the new
conditions. The sound result is a direct sonification of each
state’s adaptation. This way, there is an intentional control of
the overall sonic properties’ self-organisation. The
user/performer is in direct interaction with the
self-organising music interface, while the user interface is
sound per se.
      </p>
      <p>
        As a case study, we have previously discussed our work
Ephemeron (2008-2018) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]; a self-organising
work with a constantly developing algorithm,4 emerging
each time through systemic interactions among 8-21
speakers, 2-4 microphones and the specific sonic
environment of the performance/installation.
      </p>
      <p>
        Sanfilippo and Valle’s investigation uses the term feedback
systems [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], comparing and presenting eighteen different
approaches – including our Ephemeron interface – with the
use of feedback as common denominator. Their investigation
attempts to expose an analytical framework comprised of six
categories:
      </p>
    </sec>
    <sec id="sec-2">
      <title>1)   information encoding: analogue or digital</title>
      <p>2)   information rate: audio signal or control signal
3)   environmental openness: open or closed
4)   trigger mode: internal or external
5)   adaptability: absent or present
6)   human-machine interaction: absent or present
Similarly to our schematic model described above (Figure 1),
Sanfilippo and Valle present a schematic diagram in order to
visualize their typology (Figure 2). Their diagram’s goal
seems to be a schematic explanation of the different
categories rather than a definition; that is why it may include
features that do not affect the essence of self-organising
music understanding (such as internal or external triggering)
but may be important for understanding the general
classifications.</p>
      <p>4 Before each presentation, the algorithm is updated with technical
improvements in terms of stability and performance, but also with
additional functionalities which expand its capabilities.</p>
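      <p>Sanfilippo and Valle's six binary dimensions lend themselves to a compact machine-readable profile. The following sketch is ours and purely illustrative; the example instance is a hypothetical digital, environmentally open, adaptive system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FeedbackSystemProfile:
    """One value per dimension of the six-category typology."""
    encoding: str            # 'analogue' or 'digital'
    rate: str                # 'audio' or 'control'
    openness: str            # 'open' or 'closed'
    trigger: str             # 'internal' or 'external'
    adaptive: bool           # adaptability present?
    human_interaction: bool  # human-machine interaction present?

    def describe(self):
        return (f"{self.encoding}/{self.rate}, {self.openness}, "
                f"{self.trigger} trigger, "
                f"{'adaptive' if self.adaptive else 'non-adaptive'}, "
                f"{'with' if self.human_interaction else 'without'} human interaction")

profile = FeedbackSystemProfile('digital', 'audio', 'open', 'external', True, True)
```
</p>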
      <p>
        Morris uses the term feedback instruments [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Although his
model is the result of his personal observations, we find it
relevant in describing the essential characteristics of
feedback instruments. Morris’ classification includes four
categories [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]:
1)   the loop, which can be:
a)   electric
b)   electroacoustic
c)   digital
2)   the intervention, i.e. the modifications made to the
feedback sound, which may be, for example:
a)   a delay-period change, which creates a pitch-shift
effect
b)   a phase shift, which changes certain resonant modes,
as when touching a violin string produces a natural
harmonic
c)   a filtering change, which alters the active frequency
range of a feedback or brings out a range of resonant
frequencies
3)   the interruption, the action of stopping the feedback:
a)   manual interruption, for example switching off a
microphone
b)   a shutter, like an envelope that dynamically shapes
the feedback’s amplitude
c)   a pitch shifter, changing the self-amplification of a
frequency range
4)   the excitation, which triggers the feedback resonance:
a)   unintentional sounds – ‘noise’
b)   intentional sounds – ‘played’ sounds
c)   iterative feedback sounds – the use of another
feedback as a sound source for the feedback system
      </p>
    </sec>
    <sec id="sec-3">
      <title/>
      <p>
        Surges, Smyth and Puckette talk about generative audio
systems, i.e. feedback network systems focused on dynamic
filtering [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. In this type of system, the output signal is used
to dynamically control the coefficients of all-pass filters that
are designed to be flexible yet stable. They refer to them as
‘audio systems’ in order to distinguish them from ‘music
systems’: as they explain, in audio systems, there is a strong
coupling between lower-level sound production
and higher-level sound organisation [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
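      <p>The following is our own simplified sketch of that idea, not the authors' published design: a first-order all-pass filter inside a feedback loop, with its coefficient driven by the output's own envelope but squashed through tanh so that its magnitude stays below 0.9 and the filter remains stable however the signal behaves:

```python
import math

def allpass_feedback(n_samples):
    out = []
    x1 = y1 = 0.0   # one-sample filter memories
    y = 0.0         # previous output, fed back to the input
    env = 0.0       # leaky envelope estimate of the output
    for n in range(n_samples):
        x = 0.1 * math.sin(0.05 * n) + 0.5 * y  # external drive plus feedback
        a = 0.9 * math.tanh(env)                # coefficient from the envelope,
                                                # magnitude kept below 0.9
        y = -a * x + x1 + a * y1                # first-order all-pass equation
        x1, y1 = x, y
        env = 0.99 * env + 0.01 * abs(y)
        out.append(y)
    return out

out = allpass_feedback(2000)
```
</p>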
      <p>
        Kim, Wakefield and Nam also talk about audio feedback
systems [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. It is interesting to note their interaction with
our music research, and in particular with the concept of
intentional control of sound properties we have previously
described [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Similarly to what we suggest, Kim et al.
propose a goal-oriented feedback system in which the
intended sound characteristics are specified as
goal-conditions [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. However, in their approach, they replace the
level of self-organisation performed in the system by a
human agent with an additional organisational level
employing machine learning techniques: a process that
observes and guides the parameters to the desirable
goal-state each time.
      </p>
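      <p>The contrast can be made concrete with a toy observer loop – our hypothetical sketch, not Kim et al.'s implementation: a process measures a feature of the output (here simply the square of a gain parameter, standing in for a feature extractor) and nudges the parameter until the goal-condition is met:

```python
def steer_to_goal(goal_feature, steps=500, lr=0.1):
    """Observer-driven parameter steering toward a goal-condition."""
    param = 0.0                      # a synthesis parameter, e.g. a gain
    for _ in range(steps):
        feature = param ** 2         # toy 'feature extraction' on the output
        param += lr * (goal_feature - feature)  # nudge toward the goal
    return param, param ** 2

param, feature = steer_to_goal(goal_feature=0.25)
```

Here the machine observer takes the role that, in our approach, a listening human performer plays.
</p>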
      <p>
        Collins talks about autonomous agents, whose design
responds to questions of musical artificial intelligence [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
His discussion concerns systems with features of machine
learning techniques emulating perceptual abilities. The
machine learning techniques use a simulation of human
perception pertaining to the peripheral and central auditory
system. However, the algorithms’ perceptual abilities can
change or exceed the original human abilities from which
they were modelled. We stress Collins’ remark that the
artificial intelligences of these systems do not have a
physical presence, as is the case with any manifestation of
artificial intelligence techniques [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. We add that
autonomous agents, similarly to any self-organising music
systems, have no embodied intelligence. They are only a
piece of software coded in a piece of hardware [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
Blackwell &amp; Young use the term self-organised music5 [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
Their approach is based on swarm intelligence [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], a case of
distributed self-organisation: the system’s global behaviours
emerge as a complex whole comprised of local agents with
simple behaviours. Blackwell &amp; Young's approach is based
on the original work of Reynolds, who created visual
simulations of bird swarms [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. In Reynolds’ approach,
each unit has a rather simple movement behaviour: each bird
has its own autonomous behaviour, while at the same time,
each bird is a particle of the swarm, interacting with all other
particle-birds. The complexity of the bird cloud’s behaviour
emerges through the local interactions between individual
birds [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Similarly, Blackwell &amp; Young apply the same
principle in the micro-temporal domain by using the
paradigm of granular synthesis: sonic grains take the role of
self-organising particles which form self-organising swarms
of sound, in what they call the swarm granulator [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In this
practice, we find a bottom-up approach in which time scales
emerge – from the micro-structural level to the
meso-structural level – from which consequently larger formal
structures emerge.
      </p>
      <p>5 We note here that our use of the term self-organisation in
music is a direct reference to systems theories (see Kollias 2008 &amp;
2011), independently from Blackwell &amp; Young. For our part, we
use the term self-organising music to describe all cases of music
where the work is self-organising, whereas the case of Blackwell
&amp; Young is a rather special case of music self-organisation.</p>
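      <p>A toy version of the swarm principle – reduced, for brevity, to a single Reynolds-style cohesion rule in one dimension, and so not Blackwell and Young's swarm granulator: each particle steers toward the local centre, and the swarm's emergent centre and spread are the kinds of quantities one might map to grain parameters such as pitch or pan:

```python
def step(positions, velocities, cohesion=0.01, damping=0.95):
    """One update of a 1-D particle swarm with a cohesion rule only."""
    centre = sum(positions) / len(positions)
    new_p, new_v = [], []
    for p, v in zip(positions, velocities):
        v = damping * v + cohesion * (centre - p)  # steer toward the centre
        new_p.append(p + v)
        new_v.append(v)
    return new_p, new_v

positions = [-2.0, -1.0, 0.5, 3.0]
velocities = [0.0] * 4
for _ in range(2000):
    positions, velocities = step(positions, velocities)
spread = max(positions) - min(positions)   # shrinks as the swarm coheres
```
</p>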
      <p>
        Holopainen also uses the terms autonomous (like Collins)
and self-organisation (like Kollias or Blackwell &amp; Young)
to synthesise the term self-organised sound with autonomous
instruments [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. He also uses the terms feature-feedback
systems or adaptive synthesis. Although referring to the same
field, his interest in the subject is non-real-time, unlike the
approaches we have discussed above. Consequently,
self-organisation takes place as a set of non-linear algorithmic
interactions, without a physical environment (acoustic or
social); they are abstract interactions that occur in a virtual
space and time. For that reason, we may consider ‘autonomous
instruments’ (at least according to Holopainen's use) rather
as adaptive effects, including simulated perceptual
characteristics, using feature extraction techniques. As he
says, it is a special case of algorithmic composition, which
resides at the sub-symbolic level [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. We may consider his
approach as a case of non-environmentally-aware
self-organising music – since there is no physical environment.
Di Scipio's approach has played an important leading role in
the field of self-organising music, as one of the first to
contribute theoretically and musically. Di Scipio proposes his
audible eco-systemic interface, in which music emerges as an
ecosystem of interactions between the algorithm, the sound
environment and the resulting sound [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. He talks about an
audible interface because all interactions take place at the
auditory level, avoiding any visual representation. Although
he refers to the term ecosystem, his references are closer to
systems theories (interactions between systems and parts of
systems) than to those of ecology (interactions between
organisms in an environment).
      </p>
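      <p>Holopainen's non-real-time, feature-feedback idea can be sketched as an iterated render-analyse-update cycle in virtual time – our own toy, not his algorithms: each iteration renders a short buffer, extracts a feature (a zero-crossing count), and feeds it back as the next iteration's frequency until the process settles on its own fixed point. The coupling formula is an arbitrary choice for the illustration:

```python
import math

def render(freq, n=1000):
    """Render one buffer of a sine at `freq` cycles per buffer."""
    return [math.sin(2 * math.pi * freq * i / n) for i in range(n)]

def zero_crossing_rate(sig):
    # count sign changes between consecutive samples
    crossings = sum(1 for a, b in zip(sig, sig[1:]) if 0 > a * b)
    return crossings / 2.0   # roughly the number of cycles in the buffer

freq = 3.0
history = []
for _ in range(20):
    sound = render(freq)
    feature = zero_crossing_rate(sound)
    freq = 1.0 + 0.5 * feature   # the feature sets the next frequency
    history.append(freq)
```

No microphone and no room are involved; the 'environment' of each iteration is simply the sound produced by the previous one.
</p>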
      <p>
        Keller seeks to find a common field between different
composers for what he calls eco-composition: as the common
denominator, he defines the integration of natural
phenomena in the compositional process, integrated with the
formal, perceptual and/or social factors in the work’s
material [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. As he says, many composers use
environmental concepts, but with different terminologies
depending on the focus of their interests – just as is the case
of the perspective we suggest through self-organising music.
It is interesting that in Keller’s suggestion, all factors – the
formal, the perceptual, the social – are interconnected,
having an equal importance, without any of them being
considered as an extra-musical factor. In addition, we should
mention his suggestion of the correlation between different
time scales and emerging perceptual scales that pass from the
personal perspective to the social perspective (Figure 3).
Even if we find several system-oriented concepts in his
perspective, Keller tries to establish a field based on
ecological studies.
      </p>
      <p>
        Waters refers to performance ecosystems [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. He describes
music as a complex system from both a sonic and a social
perspective. He distinguishes three parts: the performer and
his ‘corporeality (bodilyness)’, the instrument and its
goal-oriented approach, and finally the environment and its
‘otherness’ in regard to the performer-instrument system. In
his survey, he refers to various approaches that include
ecosystem relationships through technology. According to
Waters, the performance ecosystem is not merely a metaphor
inspired by natural ecosystems. On the contrary, he suggests
that the musical trend is interconnected with our
corporeality, our sensory agility and our interaction with the
environment [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
      </p>
      <p>CONCLUSION
We have investigated and identified a new and active field of
composers-researchers who deal with the subject of what we
can call self-organising music. A music whose means of
expression is the computer; whose tools are microphones,
controllers, sensors and so on; and whose expressive material
is the "live" electroacoustic sound that includes the source of
its production but also the space in which it is expressed.
We can identify a shared tendency, inspired by
system-oriented theories, towards a self-organising music practice.
However, we can find as many different approaches as
composers who practise them. Each composer tends to
choose a perspective according to his/her own priorities and
values, interpreting the systems concepts in a different
manner. In this sense, several authors use the same terms to
explain different things; or, conversely, others may use
different terms to deal with similar themes. Consequently,
the music discussion tends to be conducted in rather vague
terms, dealing with extra-musical subjects such as
metaphors, modelling through visual representation, or
imprecise abstractions.</p>
      <p>However, apart from the more or less vague common
concepts, a field of convergence between different authors
arises from the fact that they publish and discuss their
algorithms’ blueprints (or their circuits) or even the
algorithms’ code. Consequently, this results in a more
concrete source of discussion and an important tool for
technical exchange. Compared with systems terminology
which is a meta-language, and thus highly abstract by
definition, an algorithmic blueprint is a clear and
well-defined reference point: i.e. diagrams with well-defined
symbols and connections representing interconnected DSP
modules.</p>
      <p>Nevertheless, it cannot change the fact that it is a point of
convergence around purely technical characteristics. Thus, it
does not suggest a specific set of aesthetics. We would like
to emphasise that, until now, to the best of our knowledge,
we cannot find a musical language based on systems
epistemology which is really linked with musical material;
that is, a systems’ musical language that deals equally with
the organisation, creation and processing of sound per se,
and not merely with poetic references or connections with
techniques in a vague manner.</p>
      <p>We would like to ask some open questions: would it be
possible to reach a point where we will have and use a
system-oriented shared musical language? A language with
which we could describe, discuss and imagine what we call
self-organising music – as is the case of the conventional
musical language for notated music, or for instance the
spectromorphological terminology, for acousmatic music?
This could be a powerful tool for broadening musical thought
through systemic conceptual and methodological tools,
where music would be genuinely linked with systems
thinking and not just inspired by its concepts.</p>
      <p>
        However, even if it was possible, who would take the
responsibility to ‘impose’ a language with the possibility to
be used by many? Would it be someone able to take the
decisions for everyone, by preparing a language and
exposing it in the form of an ‘aesthetic manifesto’ – as was
the case rather often in past art history? Nonetheless, if a
‘specialist’ proved to be able to do this, from our systemic
viewpoint, this would appear to us as an authoritarian
tendency while imposing itself on the possibility of social
self-organisation. Or otherwise, what if it was a team of peers
– that is to say, respectable colleagues on the field with equal
and similar skills – with its own criteria in determining a
systemic language? As was the case with the Macy
conferences, the very source of systems thinking, organised
in order to construct a shared consensual metalanguage [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]
[
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Once again, from our viewpoint, we can see the danger
of a certain kind of elitism, and again the problem of
dictating and opposing the tendency towards social
self-organisation.
      </p>
      <p>Since the demand for a common language that can be shared
and used by the community cannot be imposed, the only
legitimate way would be again a collaborative project. And
if we are talking about true self-organisation, this project
itself should be equally self-organising. A kind of project that
would determine the conditions under which a common
language could be built or chosen, tested and shared. We
could imagine a form of wiki capable of responding to this
demand, where any choice would be genuinely open, and the
language self-organising. We leave the proposal open.
ACKNOWLEDGMENTS  
We would like to thank Triantafyllos Gkikopoulos for his
thorough remarks and for his precious reflections. In
addition, we acknowledge the important feedback from the
anonymous peer-reviewers of this article. Finally, we would
like to thank Vincent Van Heerden for proofreading the
article.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.   Ludwig von Bertalanffy.
          <year>1968</year>
          .
          <article-title>General System Theory: Foundation, Development, Applications</article-title>
          . Reference to the
          <source>revised edition</source>
          ,
          <year>1969</year>
          . George Braziller, New York.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.   Eric Bonabeau, Marco Dorigo, and
          <string-name>
            <given-names>Guy</given-names>
            <surname>Theraulaz</surname>
          </string-name>
          .
          <year>1999</year>
          .
          <article-title>Swarm Intelligence: From Natural to Artificial Systems</article-title>
          . Oxford University Press, New York, NY/Oxford, UK.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.   Tim Blackwell, and
          <string-name>
            <given-names>Michael</given-names>
            <surname>Young</surname>
          </string-name>
          .
          <year>2004</year>
          .
          <article-title>Selforganised music</article-title>
          .
          <source>Organised Sound</source>
          <volume>9</volume>
          ,
          <issue>2</issue>
          :
          <fpage>123</fpage>
          -
          <lpage>136</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.   Nicholas M. Collins,
          <year>2006</year>
          .
          <article-title>Towards autonomous agents for live computer music: Realtime machine listening and interactive music systems</article-title>
          .
          <source>PhD Dissertation</source>
          . University of Cambridge, Cambridge, UK.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.   Agostino Di Scipio.
          <year>2003</year>
          . '
          <article-title>Sound is the interface': From interactive to ecosystemic signal processing</article-title>
          .
          <source>Organised sound: An international journal of music technology 8</source>
          , 3:
          <fpage>269</fpage>
          -
          <lpage>277</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.   Hugh Dubberly, Paul Pangaro, and
          <string-name>
            <given-names>Usman</given-names>
            <surname>Haque</surname>
          </string-name>
          .
          <year>2009</year>
          .
          <article-title>ON MODELING: What is interaction? Are there different types?</article-title>
          .
          <source>Interactions</source>
          <volume>16</volume>
          ,
          <issue>1</issue>
          :
          <fpage>69</fpage>
          -
          <lpage>75</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.   Francis Heylighen, and
          <string-name>
            <given-names>Cliff</given-names>
            <surname>Joslyn</surname>
          </string-name>
          .
          <year>2001</year>
          .
          <article-title>Cybernetics and Second-Order Cybernetics</article-title>
          .
          <source>Encyclopedia of Physical Science &amp; Technology</source>
          . Academic Press, New York,
          <fpage>155</fpage>
          -
          <lpage>170</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.   John H. Holland.
          <year>2006</year>
          .
          <article-title>Studying Complex Adaptive Systems</article-title>
          .
          <source>Journal of Systems Science and Complexity</source>
          <volume>19</volume>
          ,
          <issue>1</issue>
          :
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.   Risto Holopainen.
          <year>2012</year>
          .
          <article-title>Self-organised sound with autonomous instruments: Aesthetics and experiments</article-title>
          .
          <source>PhD Dissertation</source>
          . University of Oslo, Oslo, Norway.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.   Damián Keller, and
          <string-name>
            <given-names>Ariadna</given-names>
            <surname>Capasso</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>New concepts and techniques in eco-composition</article-title>
          .
          <source>Organised Sound</source>
          <volume>11</volume>
          ,
          <issue>1</issue>
          :
          <fpage>55</fpage>
          -
          <lpage>62</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.   Seunghun Kim, Graham Wakefield, and
          <string-name>
            <given-names>Juhan</given-names>
            <surname>Nam</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Augmenting environmental interaction in audio feedback systems</article-title>
          .
          <source>Applied Sciences</source>
          <volume>6</volume>
          ,
          <issue>5</issue>
          :
          <fpage>125</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.  
          <string-name>
            <given-names>Phivos-Angelos</given-names>
            <surname>Kollias</surname>
          </string-name>
          .
          <year>2008</year>
          .
          <article-title>Ephemeron: Control over Self-Organized Music</article-title>
          .
          <source>In Proceedings of the 5th International Conference of Sound and Music Computing (SMC '08)</source>
          ,
          <fpage>138</fpage>
          -
          <lpage>146</lpage>
          . Revised version published in 2009 in:
          <source>Hz Music Journal</source>
          ,
          <volume>14</volume>
          . Retrieved December 12,
          <year>2017</year>
          from http://www.fylkingen.se/hz/n14/kollias.html
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.  
          <string-name>
            <given-names>Phivos-Angelos</given-names>
            <surname>Kollias</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>The Self-Organising Work of Music</article-title>
          .
          <source>Organised Sound</source>
          ,
          <volume>16</volume>
          ,
          <issue>2</issue>
          :
          <fpage>192</fpage>
          -
          <lpage>199</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.  
          <string-name>
            <given-names>Phivos-Angelos</given-names>
            <surname>Kollias</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Vers une pensée musicale orientée-système : l'œuvre musicale auto-organisante</article-title>
          .
          <source>PhD dissertation</source>
          . University of Paris VIII, Paris, France.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.   Jeffrey Morris.
          <year>2007</year>
          .
          <article-title>Feedback instruments: Generating musical sounds, gestures, and textures in real time with complex feedback systems</article-title>
          .
          <source>In Proceedings of International Computer Music Conference (ICMC '07)</source>
          ,
          <fpage>469</fpage>
          -
          <lpage>476</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.   Craig W. Reynolds.
          <year>1987</year>
          .
          <article-title>Flocks, herds and schools: A distributed behavioral model</article-title>
          .
          <source>ACM SIGGRAPH Computer Graphics</source>
          <volume>21</volume>
          ,
          <issue>4</issue>
          :
          <fpage>25</fpage>
          -
          <lpage>34</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.   Dario Sanfilippo and
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Valle</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Feedback systems: An analytical framework</article-title>
          .
          <source>Computer Music Journal</source>
          <volume>37</volume>
          ,
          <issue>2</issue>
          :
          <fpage>12</fpage>
          -
          <lpage>27</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.   Greg Surges, Tamara Smyth, and
          <string-name>
            <given-names>Miller</given-names>
            <surname>Puckette</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Generative audio systems using power-preserving all-pass filters</article-title>
          .
          <source>Computer Music Journal</source>
          <volume>40</volume>
          ,
          <issue>1</issue>
          :
          <fpage>54</fpage>
          -
          <lpage>69</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.   Francisco J. Varela.
          <year>1988</year>
          .
          <source>Cognitive Science: A Cartography of Current Ideas</source>
          . Reference to the enlarged French version:
          <source>Invitation aux sciences cognitives</source>
          ,
          <year>1996</year>
          . Editions du Seuil, Paris.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.   Simon Waters.
          <year>2007</year>
          .
          <article-title>Performance ecosystems: Ecological approaches to musical interaction</article-title>
          .
          <source>In Proceedings of Electroacoustic Music Studies Network (EMS '07)</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.   Simon Waters.
          <year>2011</year>
          .
          <article-title>Performance ecosystems: Editorial</article-title>
          .
          <source>Organised Sound</source>
          <volume>16</volume>
          ,
          <issue>2</issue>
          :
          <fpage>95</fpage>
          -
          <lpage>96</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.   Norbert Wiener.
          <year>1948</year>
          .
          <article-title>Cybernetics, or control and communication in the animal and the machine</article-title>
          .
          <source>Reference to the 2nd paperback edition</source>
          ,
          <year>1965</year>
          . The MIT Press, Cambridge, MA.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>