       On Human-AI Collaboration in Artistic Performance
Alessandro Saffiotti (1), Peter Fogel (2), Peter Knudsen (3), Luis de Miranda (4) and Oscar Thörn (5)


(1) School of Science and Technology, Örebro University, Örebro, Sweden. Contact email: asaffio@aass.oru.se
(2) University of Skövde, Sweden.
(3) School of Music, Theatre and Art, Örebro University.
(4) School of Humanities, Education and Social Sciences, Örebro University.
(5) School of Science and Technology, Örebro University.

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract. Live artistic performance, like music, dance or acting, provides an excellent domain to observe and analyze the mechanisms of human-human collaboration. In this note, we use this domain to study human-AI collaboration. We propose a model for collaborative artistic performance, in which an AI system mediates the interaction between a human and an artificial performer. We then instantiate this model in three case studies involving different combinations of human musicians, human dancers, robot dancers, and a virtual drummer. All case studies have been demonstrated in public live performances involving improvised artistic creation, with audiences of up to 250 people. We speculate that our model can be used to enable human-AI collaboration beyond the domain of artistic performance.

1 Introduction

Should AI systems augment humans, replace humans, or collaborate with humans? This question is regularly asked by both citizens and policy makers, often amplified by the media, and researchers in all fields are increasingly faced with its many facets.

For the purpose of our discussion, we define the above three models as follows. Consider a task T, traditionally performed by a human. In the augmentation model, T is still performed by the human, who is empowered with new tools and functionalities built through AI. In the replacement model, T is instead performed by an artificial agent built using AI technology. In both these models, the task is performed by a single agent. In the collaboration model, by contrast, T is performed jointly by two independent but collaborating agents: a human agent, and an AI-based artificial agent.

The topic of collaborative AI, or how to make AI systems that collaborate with humans in performing a joint task, is the subject of increasing interest in the AI community. The emphasis on the collaboration model as opposed to the replacement model is also in line with the Ethics Guidelines produced by the European High Level Expert Group [14], and later adopted by the European Commission [10], which insist that humans should maintain agency and oversight with respect to AI systems. Aspects of collaborative AI have been studied in several areas, including human-robot teams [18], shared agency [5], hybrid human-AI intelligence [42], mixed-initiative systems [11] and symbiotic systems [8]. In this note, we contribute to the study of human-AI collaboration in a particularly telling domain: collaborative artistic performance.

Artistic performance, like music, dance or acting, provides an excellent setting to observe and analyze the mechanisms of human-human collaboration, especially in live or improvised performance. It thus seems natural to study human-AI collaboration in this setting. Artistic creativity in general, and in music in particular, has been a topic of interest for computer researchers since the early days of computer science [15] and AI [27]. Ada Lovelace already noted in 1843 that computers "could potentially process not only numbers but any symbolic notations, including musical and artistic ones" [19]. Today, there is a rich literature of computational approaches to music [7, 29], including many AI systems for music composition and improvisation [26]. As pointed out by Thom [38], however, most of these systems focus on the offline creation of music, and not on the online collaborative performance between the human and the AI musicians: the latter is what is usually referred to as co-creativity [21, 22]. Notable exceptions in computational music are the early work on jazz improvisation by Walker [41], and the work on a marimba-playing robot by Hoffman and Weinberg [17]. Co-creativity has also been studied in other artistic areas, like theater [32], as well as in the more general field of human-computer interaction [25].

In this paper, we study AI systems capable of on-line, collaborative interaction with humans in the context of artistic performance, with a specific focus on live music improvisation. The main contribution of this paper is a general model for collaborative artistic performance, in which an AI system mediates the interaction between a human and an artificial performer. We show how this model has been instantiated in three concrete case studies involving different combinations of human musicians, human dancers, robot dancers, and a virtual drummer. We also briefly discuss the complex problem of how to evaluate human-AI collaborative interaction, especially in an artistic context. We hope that our model can contribute to a better understanding of the general mechanisms that enable successful collaboration between AI systems and humans.

2 A model for Human-AI collaborative artistic performance

The model that we propose for Human-AI collaboration in artistic performance is illustrated in Figure 1. In this model, an AI system is used as a mediator to coordinate the performance of two autonomous agents: a human performer, and an artificial performer. This model therefore comprises three elements: two performers and a mediator.

For illustration purposes, Figure 1 shows a guitar player as the human performer and a dancing robot as the artificial performer. We emphasize, however, that we use the term "artificial performer" in a broad sense, to mean any agent that generates physical outcomes: this could be a physical robot producing movements, a virtual instrument producing sounds, or a projector producing artistic visualizations. In the case studies reported below, we use an off-the-shelf commercial virtual drummer and an off-the-shelf humanoid robot as artificial performers.
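Operationally, the mediator can be thought of as a loop that repeatedly estimates the expressive state of the human performance and maps it to execution parameters of the artificial performer. The following Python sketch illustrates this loop in toy form; the data types, the feature set and the mapping are illustrative placeholders, not the components of our actual system (those are described in Section 3).

```python
"""Minimal sketch of the mediation loop of Figure 1.

All names here (NoteEvent, the two toy features, the linear mapping to
intensity and complexity) are illustrative assumptions, not the system
used in our case studies."""
from dataclasses import dataclass
from typing import List


@dataclass
class NoteEvent:
    velocity: int   # MIDI-style key velocity, 0-127
    onset: float    # onset time in seconds


def extract_features(bar: List[NoteEvent]) -> dict:
    """Estimate a (toy) expressive state from one bar of the human performance."""
    if not bar:
        return {"avg_velocity": 0.0, "density": 0.0}
    return {
        "avg_velocity": sum(e.velocity for e in bar) / len(bar),
        "density": float(len(bar)),   # notes per bar
    }


def map_to_parameters(features: dict) -> dict:
    """Map the expressive state to execution parameters of the artificial performer."""
    return {
        "intensity": min(1.0, features["avg_velocity"] / 127.0),
        "complexity": min(1.0, features["density"] / 16.0),
    }


def mediate(bars: List[List[NoteEvent]]) -> List[dict]:
    """One adaptation step per bar: listen, estimate the expression, adapt."""
    return [map_to_parameters(extract_features(bar)) for bar in bars]


if __name__ == "__main__":
    demo_bar = [NoteEvent(velocity=90, onset=0.0), NoteEvent(velocity=100, onset=0.5)]
    print(mediate([demo_bar]))
```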
Figure 1. Black box view of our model for Human-AI collaboration. The parameters of the artificial performer are adjusted to adapt to the human performer, and the human performer naturally adapts to the artificial one.

Our model has the following distinctive features.

• Supervisory. The AI system does not directly generate the artistic output. Instead, we assume that the artificial performer is capable of autonomous artistic performance, whose modalities are controlled by a fixed number of parameters. The parameters of interest here are expressive parameters, which modulate the behavior of the artificial performer to produce different artistic expressions. For example, a robot may be able to autonomously perform a number of dancing patterns, and parameters may make its motions more aggressive or more subtle.

• Reactive. The goal of the AI system is to analyze the artistic expression in the live human performance, and to dynamically adapt the parameters of the artificial performer to match this expression. Thus, the behavior of the artificial performer is influenced by the human performer, but it is not fully determined by it.

• Proactive. The AI system may be creative and proactive in setting the performance parameters. The human performer hears or sees what the artificial performer does, and may adjust to it (the right-to-left arrow in Figure 1). Our hypothesis, which we aim to verify through empirical studies, is that this feedback loop will result in a harmonious joint performance between the human and the autonomous agent.

By combining reactive and proactive behavior, our model implements the two directions in human-AI co-creativity described by Liapis and colleagues [24]: the human guides artificial creativity (reactivity), and the AI system triggers human creativity (proactivity). This also resonates with the idea of "directed improvisation" introduced in the area of multi-agent human-computer interaction [13].

From the three features above, it should be clear that the key move to achieve collaborative artistic performance in our model is to align the artistic expressions of the two performers, each one realized through its specific expressive means. The role of the collaborative AI system is to realize this alignment. This alignment can be seen as an artistic counterpart of inter-modal mapping, that is, the ability of people to associate stimuli received in one modality, e.g., shapes, to stimuli in another modality, e.g., sound [33]. In terms of musical semiology [31], we can also see the function of the collaborative AI system as a mapping from aesthetics (perception) to poietics (production) that preserves artistic meaning.

3 System architecture

We have implemented the above model in a concrete system for collaborative artistic performance, which has been tested in a number of case studies, three of which are reported below. Figure 2 shows the high-level architecture of this system, in the case where the human performer is a jazz pianist and the artificial performer is a parametric virtual drummer (first case study below).

Figure 2. White box view of our model, as implemented in the case studies reported in this paper. The figure refers specifically to the first case study.

The feature extraction module analyzes input from the human performer, in this case in the form of MIDI signals, and estimates the value of a set of variables that represent the musical expression of the performance, like the keypress velocity and the rhythmic density. This module computes an expressive state represented by variables that depend on the past and current input values, as well as variables that predict future states. Examples of the former are the instantaneous velocity v(t), the average velocity v̄(t) over the last bar, and the velocity slope ∆v(t); an example of the latter is a predicted climax ĉ(t+1) at the end of an ongoing crescendo. Variables referring to past, current and predicted states are represented in the figure by x_{t-1}, x_t and x_{t+1}, respectively.

The parameter generation module uses the above variables to decide the values of the execution parameters of the artificial agent, so as to continuously adapt its performance to match the current musical expression of the human performer. In the case of the virtual drummer shown in the figure, these parameters include the intensity I(t) and complexity C(t) of the drumming, the drumming pattern P(t), and the selective muting M(t) of some of the drums.

The above architecture can be interpreted in terms of the semantic perception-production mapping mentioned above: in this view, feature extraction corresponds to aesthetics, parameter generation to poietics, and the expressive state variables represent artistic meaning. The architecture is also reminiscent of the listener-player schema for interactive music systems originally proposed by Rowe [35], and later used in several works [28]. However, the crucial difference is that what our system generates is not performance content (music or movements) but performance parameters.

To implement both feature extraction and parameter generation, we relied on a knowledge-based approach where knowledge from the music experts was manually encoded into the system (the top arrows in Figure 2). Our team includes both computer scientists and musicians: discussions among them revealed that musicians possess heuristic knowledge of how the drummer's parameters depend on the pianist's play, and that this knowledge can be expressed in terms of approximate rules using vague linguistic terms, like:

   If rhythmic complexity on the lower register is high,
   Then rhythmic complexity of drums should increase strongly.

This type of knowledge is suitably encoded in fuzzy logic [23], and consequently we implemented both feature extraction and parameter generation using multiple-input multiple-output Fuzzy Inference Systems (FIS).
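To make this concrete, the sketch below shows how a single rule of this kind can be fuzzified, fired and defuzzified. The membership function and the consequent value are illustrative choices made for this example, not the ones used in our implementation (see [39] for those).

```python
"""Toy encoding of one rule of the kind shown above:
     IF rhythmic complexity (lower register) is HIGH
     THEN drum complexity should INCREASE STRONGLY."""


def mu_high(x: float) -> float:
    """Degree to which a normalized complexity x in [0, 1] counts as 'high'
    (a simple open ramp rising from 0.5 to 0.75; an illustrative choice)."""
    return max(0.0, min(1.0, (x - 0.5) / 0.25))


def drum_complexity_increase(lower_register_complexity: float) -> float:
    """Fuzzify the input, fire the rule, and defuzzify to a crisp increment.

    With a single rule and a singleton consequent at 0.8 ('increase strongly'),
    weighted-average defuzzification reduces to scaling by the firing strength."""
    return mu_high(lower_register_complexity) * 0.8


if __name__ == "__main__":
    for x in (0.3, 0.6, 0.9):
        print(f"complexity {x:.1f} -> suggested increase {drum_complexity_increase(x):.2f}")
```

A full FIS combines many such rules over several inputs and outputs, aggregating their consequents before defuzzification; the principle, however, is the one shown here.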
Case | Input device   | Extracted expressive features                                           | Output device  | Generated expressive parameters
4.1  | MIDI piano     | velocity, rhythmic density, velocity slope, density slope, step change | Strike drummer | intensity, complexity, pattern, fill, selective drum mute
4.2  | Motion tracker | count, distance                                                         | Strike drummer | pattern, selective drum mute
4.3  | MIDI piano     | velocity, rhythmic density, velocity slope, density slope, step change | Pepper robot   | motion type, selective joint mute

Table 1. Configurations used in the three case studies



Each FIS is based on the usual fuzzify-inference-defuzzify pipeline found in classical fuzzy controllers [9]. To take the temporal aspect into account in the feature extraction FIS, we use a recurrent fuzzy system [1] that takes the current estimated state and predictions as input. This solution allows us to capture the knowledge of the musician about temporal patterns, e.g., about what counts as a "sudden drop in intensity", in a way that is both explicit and easy to modify.

The same implementation has been used in all the case studies reported below, with only minor changes to the fuzzy rules and the membership functions. Further details of this implementation can be found in [39]. For the purpose of this paper, we shall not discuss the technical realization of our model in any depth; rather, we want to demonstrate its applicability in breadth across different types of artistic collaboration, and different types of human and artificial players.

4 Case studies

We now give concrete examples of how the above model was used in three different case studies involving human-robot collaborative artistic performance. Each case was implemented using fuzzy rule-based systems as discussed in the previous section. In each case, the features extracted from the input characterize the detected musical expression of the human performer, and the parameters sent as output represent the desired artistic expression of the artificial performer. The input device, the output device, the extracted features and the generated parameters are different for each case study, and they are summarized in Table 1.

4.1 A human pianist and a virtual drummer

The first case study involves collaboration in live jazz performance. The human performer was a pianist performing improvised jazz music, while the artificial performer was the commercial virtual drummer Strike 2.0.7 [2]. The tempo and style were agreed before the performance started, as is commonly done among musicians, and manually set into the virtual drummer. The other parameters of the virtual drummer were decided in real time by the AI system, based on the musical expressive features of the piano performance, using the architecture in Figure 2 as described in the previous section.

The architecture was implemented in Python 3.6.8 with the MIDO library (1.2.9). The input comes from a MIDI piano, or from a MIDI file for debugging purposes. The output was a MIDI signal, encoding the parameters to be sent to the Strike drummer (a minimal sketch of this MIDI plumbing is given at the end of this subsection).

The resulting system was tested in two public concerts given at the Music School of Örebro University, Sweden, in Spring 2019, attended by about 60 and about 100 people, respectively. Video recordings from these concerts are available online at [3]. Figure 3 is a snapshot from the video of the second concert: the background screen shows a visualization of the Strike drummer; the monitor on the right shows the output membership functions generated by our system, from which the system extracts the control parameters sent to Strike. Although we did not collect structured feedback (e.g., questionnaires), informal comments by the audience were very positive, with many people remarking that the drummer appeared to follow (and sometimes anticipate) the pianist in a natural way.

Figure 3. A snapshot from a public concert given on June 12, 2019, during the international symposium "Humans Meet Artificial Intelligence".

In addition to feedback from the audience, we also collected informal feedback from the artists. The pianist at the concerts commented that the AI-controlled drummer was perceived as collaborative and "human like". He also remarked that it was often "surprising" in a way that he did not expect a machine to be, and sometimes more "proactive" than a human drummer might be, leading him to follow what the drummer seemed to suggest, as per the feedback arrow in Figure 1.

It is worth speculating a bit on this last point. We believe that the feeling of proactivity is partly due to the use of expectations (x_{t+1}) in parameter generation, leading to an anticipatory behavior in the drummer [34]. For example, when a step change is predicted, e.g., expecting to go from a forte to a piano, the system first mutes the kick, and if the change is confirmed it then also mutes the snare. The pianist may perceive the absence of the kick as a suggestion for a change in mood, and either follow the suggestion and go to a piano, or not follow it and persist with the forte. In the first case, the drummer will also mute the snare; in the second case, it will unmute the kick. An example of this proactive interaction can be observed in the video recording at [3] around time 22:34.
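The sketch below illustrates the kind of MIDI plumbing mentioned above, using the MIDO library: note-on messages are read from the piano port, a crude per-bar feature (mean key velocity) is computed, and a control-change message is sent towards the drummer. The port names and the controller number are placeholders, and the real system computes its features and parameters with the fuzzy inference described in Section 3 rather than the direct mapping used here.

```python
"""Minimal sketch of the MIDI input/output of the first case study, using mido.
The port names and the controller number are placeholder assumptions; the actual
mapping of expressive features to Strike's parameters is described in [39]."""
import time
import mido

PIANO_PORT = "MIDI Piano"        # placeholder input port name
DRUMMER_PORT = "Strike Drummer"  # placeholder output port name
INTENSITY_CC = 20                # placeholder controller number


def run(bar_seconds: float = 2.0) -> None:
    with mido.open_input(PIANO_PORT) as piano, mido.open_output(DRUMMER_PORT) as drummer:
        bar_start = time.monotonic()
        velocities = []
        while True:
            msg = piano.receive()   # blocking read of one MIDI message
            if msg.type == "note_on" and msg.velocity > 0:
                velocities.append(msg.velocity)
            # Once per bar: crude feature extraction (mean velocity) and
            # crude parameter generation (map the mean to an intensity CC).
            if time.monotonic() - bar_start >= bar_seconds:
                avg = sum(velocities) / len(velocities) if velocities else 0
                drummer.send(mido.Message("control_change",
                                          control=INTENSITY_CC,
                                          value=min(127, int(avg))))
                velocities, bar_start = [], time.monotonic()


if __name__ == "__main__":
    run()
```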
4.2 Two human dancers and a virtual drummer

Our second case study happened by serendipity. In October 2019, the Music Tech Fest (MTF) art festival [30] was hosted at Örebro University. There, our team met the Accents in Motion team, who had previously researched the use of body movements to control sound. We decided to join forces and explore if and how the performance of the virtual drummer could follow two human dancer improvisers, using our model.

To do this, we used the simplified version of our architecture shown in Figure 4. The input to the system was taken from a Vicon tracking system mounted in a large laboratory space. Together with the artists, we decided which features to extract and how the drummer should react to them. We drew an area on the floor to act as a "black box" where dancers would be invisible to the tracking system. We decided to extract two features from the tracking system data: the number of dancers that are visible (i.e., outside the "black box"), and their mutual distance. For the parameter control part, we used two simple principles. The distance among the dancers would influence the pattern of the drummer: the closer the dancers, the more complex the pattern. The number of visible dancers would influence which instruments are played: none with no dancer, only cymbals with one dancer, all cymbals and drums with two dancers.

Figure 4. The variation of our system used for the second case study. The variables x_t depend on the current position and distance of the dancers.

The above system was realized in collaboration with the Accents in Motion team within an MTF lab, during two hectic days of work. A performance was recorded on October 19, 2019, and shown at the MTF closing night. A clip from that recording is available at [3]. Figure 5 shows a snapshot from the clip. The plot at the bottom shows the temporal evolution of the 'number' and 'distance' variables: at the time of the snapshot, both dancers have just jumped out of the "black box" and become visible to the system.

Figure 5. A snapshot from the "Music Tech Fest" lab on October 19, 2019.

4.3 A human pianist and a robot dancer

In our last case study, we used our model to realize a collaborative live performance of a jazz pianist and a robot dancer. As in the first case study, the jazz pianist improvises, but this time the AI system controls the execution parameters of a humanoid robot.

For this experiment, we used the commercial robot Pepper, produced by Softbank Robotics, as the artificial performer. We used the system shown in Figure 6. The collaborative AI system is the same one used in the first case study, but now the performance parameters are sent both to the virtual drummer and to the robot.

Figure 6. The system used for the third case study. The robot has been enriched with control software to perform classical ballet movements.

The robot has been enriched with control software to continuously perform dancing motions, synchronized with the pre-defined beat, and generated from a library of basic motions inspired by classical ballet [12]. Movements may involve one or both arms, the head, or the base, in any combination. The dancing motions are selected, modified and chained depending on the parameters received from the collaborative AI system. Some parameter values are mapped to different combinations of motions, which are chosen randomly in order to produce a more lively performance. Selective muting disables some of the degrees of freedom, like the head or the base, and is typically used in response to quieter passages by the piano. Further details on the implementation of this test case are given in [40]; a sketch of this motion selection scheme is given below.
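The following sketch illustrates how dancing motions could be selected and chained from the received parameters while honoring selective muting. The motion library, its entries and the selection policy are illustrative placeholders; the actual controller runs on Pepper's own motion API and is described in [40].

```python
"""Illustrative sketch of parameter-driven motion selection for the robot dancer.
The library and the selection rule are assumptions, not the controller of [40]."""
import random
from dataclasses import dataclass
from typing import List, Set


@dataclass
class Motion:
    name: str
    groups: Set[str]   # degrees of freedom used: "arms", "head", "base"
    energy: float      # how lively the motion is, in [0, 1]


LIBRARY = [
    Motion("port_de_bras", {"arms"}, 0.4),
    Motion("head_sway", {"head"}, 0.2),
    Motion("pivot", {"base"}, 0.7),
    Motion("grand_gesture", {"arms", "head"}, 0.9),
]


def choose_motions(intensity: float, muted: Set[str], beats: int) -> List[Motion]:
    """Chain one motion per beat, respecting selective muting and intensity."""
    candidates = [m for m in LIBRARY
                  if not (m.groups & muted)              # muted groups are disabled
                  and abs(m.energy - intensity) <= 0.5]  # roughly match the intensity
    if not candidates:
        return []
    return [random.choice(candidates) for _ in range(beats)]


if __name__ == "__main__":
    # Quiet passage: low intensity, base muted.
    print([m.name for m in choose_motions(intensity=0.3, muted={"base"}, beats=4)])
```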
The above case study was demonstrated in a public performance at the official yearly celebration of Örebro University, attended by about 250 people. A video recording is available at [3]; Figure 7 shows a snapshot from that video. The reaction from the audience to the performance was overwhelmingly positive.

Figure 7. A snapshot from the public concert on January 31, 2020.

5 Evaluating collaborative performance

An open question for a human-AI collaborative system is how to evaluate the effectiveness and added value of the collaboration. This question is even more complex in the case of artistic collaboration, where we are faced with the double problem of evaluating the collaborative aspect and the artistic aspect. Recently, some works have been reported on the evaluation of collaborative human-robot systems [16] and of artificial artistic creations [6, 20], but much still needs to be understood in both domains and in their combination.

Bown [4] has suggested that the evaluation of artificial creative systems should look at the subjective experiences of humans. In the case of our model, the informal feedback received at the live events indicated that both the audience and the musicians experienced a feeling of harmonious collaboration between the performers. We then decided to evaluate this feeling in quantitative terms, and we ran an online user study aimed at measuring the subjects' perception of collaborative artistic performance in the third case study above.

The experimental setup was designed to highlight the collaboration aspect rather than the quality of the robot's performance. We created two versions of the system based on Figure 6, a test one and a control one. Both versions used the same artificial performers: the Strike 2 virtual drummer, and the Pepper robot performing dancing movements synchronized with the music beats. However, while the test case used our collaborative AI system to decide the parameters of the robot's performance, in the control case those parameters were selected randomly. (The parameters for the virtual drummer were generated by our system in both cases.)

We recruited 90 subjects using Amazon Mechanical Turk. Subjects were randomly assigned to a test group (58), who were shown videos of performances using the test version of the system, and to a control group (32), who were shown videos of performances using the control version. These videos can be seen at https://tinyurl.com/yyg67eco. Subjects were asked to rate a few statements about the performance, using a 6-step Likert scale. The survey was created and run using PsyToolkit [37].

The results of the experiment are visualized in Figure 8. Subjects in the test group consistently rated the statement "The robot follows the music nicely" higher than those in the control group, showing that our system successfully aligns the artistic performance of the robot to that of the pianist. The test group also gave higher ratings to the statement "The pianist and the robot perform in good harmony", supporting our hypothesis that the loop in Figure 1 leads to a perceived sense of collaborative performance. Finally, the test group consistently rated the statement "I enjoyed the overall performance" higher than the control group, suggesting that our system may result in increased perceived artistic value. Full details of this user study are reported in [40].

Figure 8. Results from the online user study. Error bars represent ±1 standard error of the mean.

6 Conclusions

We have proposed a model for collaboration between human agents and artificial agents in artistic performance. Our model does not focus on the production of behavior in the artificial agent. Instead, we assume that the artificial agent is already capable of autonomous performance, and we focus on how an AI system can be used to modulate this performance through the manipulation of its parameters, to harmoniously adapt to the performance of the human.

Co-existence, co-operation and co-creation between humans and AI systems are today extremely important areas of investigation. Collaborative artistic performance among humans is one of the domains where these phenomena are most visible, and it is therefore an ideal domain in which to study the foundations of human-AI collaboration. We hope that the model, the case studies and the evaluation presented in this note contribute to this study.

The implementation of the model used in our test cases is purely knowledge-based. In this initial stage, this approach was chosen because it afforded us a quick bootstrap using existing music knowledge. The knowledge-based approach also allowed us to go through an open, modular and incremental development loop. Interestingly, the music experts found that the process of eliciting knowledge was rewarding for them. For example, they found that the need to describe music performance in logical terms led them to develop a new analytical perspective on how, when and why different styles are being chosen and used. Notwithstanding the advantages of the knowledge-based approach, we plan in the near future to integrate it with a data-driven approach for the feature extraction part, the parameter generation part, or both. This might help to complete the hand-written rules, or to adapt them to the artistic taste of a given musician. It might also allow us to use sub-symbolic input, like sound, rather than symbolic input, like MIDI [36].

ACKNOWLEDGEMENTS

We are grateful to the Accents in Motion team (Andreas Bergsland, Lilian Jap, Kirsis Mustalahti, Joseph Wilk) and to MD Furkanul Islam for their collaboration in the second case study. Madelene Joelsson kindly acted as pianist in the third case study, and Vipul Vijayan Nair helped us with the survey. Work by A. Saffiotti and L. de Miranda was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement 825619 (AI4EU). The cross-disciplinary perspective was made possible thanks to the CREA initiative (crea.oru.se) of Örebro University.

REFERENCES

[1] Jürgen Adamy and Roland Kempf, 'Regularity and chaos in recurrent fuzzy systems', Fuzzy Sets and Systems, 140(2), 259–284, (2003).
[2] AIR music technology. https://www.airmusictech.com/product/strike-2/.
[3] Alessandro Saffiotti (maintainer). Human meets AI in music. Website: http://crea.oru.se/Music, 2020.
[4] Oliver Bown, 'Empirically grounding the evaluation of creative systems: Incorporating interaction design', in Int Conf on Computational Creativity (ICCC), pp. 112–119, (2014).
[5] Michael E Bratman, Shared Agency: A Planning Theory of Acting Together, Oxford University Press, 2013.
[6] Rebecca Chamberlain, Caitlin Mullin, Bram Scheerlinck, and Johan Wagemans, 'Putting the art in artificial: Aesthetic responses to computer-generated art', Psychology of Aesthetics, Creativity, and the Arts, 12(2), 177, (2018).
[7] David Cope, Computer Models of Musical Creativity, MIT Press, Cambridge, 2005.
[8] Silvia Coradeschi and Alessandro Saffiotti, 'Symbiotic robotic systems: Humans, robots, and smart environments', IEEE Intelligent Systems, 21(3), 82–84, (2006).
[9] Dimiter Driankov, 'A reminder on fuzzy logic', in Fuzzy Logic Techniques for Autonomous Vehicle Navigation, eds., D Driankov and A Saffiotti, chapter 2, Springer, (2001).
[10] European Commission. White paper on artificial intelligence: A European approach to excellence and trust. https://ec.europa.eu/commission/presscorner/detail/en/ip_20_273, 2020.
[11] George Ferguson and James Allen, 'Mixed-initiative systems for collaborative problem solving', AI Magazine, 28(2), 23–23, (2007).
[12] Ann Hutchinson Guest, Choreo-graphics: A Comparison of Dance Notation Systems from the Fifteenth Century to the Present, Psychology Press, 1998.
[13] Barbara Hayes-Roth, Lee Brownston, and Robert van Gent, 'Multiagent collaboration in directed improvisation', in Proc of the First Int Conf on Multiagent Systems, pp. 148–154, (1995).
[14] High Level Expert Group on AI. Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence, 2019.
[15] Lejaren A Hiller Jr and Leonard M Isaacson, 'Musical composition with a high-speed digital computer', J. of the Audio Engineering Society, 6(3), 154–160, (1958).
[16] Guy Hoffman, 'Evaluating fluency in human-robot collaboration', IEEE Transactions on Human-Machine Systems, 49(3), 209–218, (2019).
[17] Guy Hoffman and Gil Weinberg, 'Interactive improvisation with a robotic marimba player', Autonomous Robots, 31(2-3), 133–153, (2011).
[18] Tariq Iqbal and Laurel D Riek, 'Human-robot teaming: Approaches from joint action and dynamical systems', in Humanoid Robotics: A Reference, eds., A. Goswami and P. Vadakkepat, 2293–2312, Springer, (2017).
[19] Walter Isaacson, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution, Simon & Schuster, 2014.
[20] Anna Jordanous, 'Evaluating evaluation: Assessing progress and practices in computational creativity research', in Computational Creativity: The Philosophy and Engineering of Autonomously Creative Systems, eds., Tony Veale and Amílcar F. Cardoso, 211–236, Springer, (2019).
[21] Anna Kantosalo and Hannu Toivonen, 'Modes for creative human-computer collaboration: Alternating and task-divided co-creativity', in Proc of the Int Conference on Computational Creativity, pp. 77–84, (2016).
[22] Pegah Karimi, Jeba Rezwana, Safat Siddiqui, Mary Lou Maher, and Nasrin Dehbozorgi, 'Creative sketching partner: An analysis of human-AI co-creativity', in Proc of the Int Conf on Intelligent User Interfaces, pp. 221–230, (2020).
[23] George J. Klir and Tina A. Folger, Fuzzy Sets, Uncertainty, and Information, Prentice Hall, 1988.
[24] Antonios Liapis, Georgios N. Yannakakis, Constantine Alexopoulos, and Phil Lopes, 'Can computers foster human users' creativity? Theory and praxis of mixed-initiative co-creativity', Digital Culture & Education, 8(2), 136–153, (2016).
[25] Todd Lubart, 'How can computers be partners in the creative process: Classification and commentary on the special issue', International Journal of Human-Computer Studies, 63(4-5), 365–369, (2005).
[26] Jon McCormack, Toby Gifford, Patrick Hutchings, Maria Teresa Llano Rodriguez, Matthew Yee-King, and Mark d'Inverno, 'In a silent way: Communication between AI and improvising musicians beyond sound', in Proc of the CHI Conf on Human Factors in Computing Systems, pp. 38:1–38:11, (2019).
[27] Marvin Minsky, 'Music, mind, and meaning', in Music, Mind, and Brain: The Neuropsychology of Music, ed., Manfred Clynes, 1–19, Springer, (1982).
[28] René Mogensen, 'Swarm algorithm as an improvising accompanist: An experiment in using transformed analysis of George E. Lewis's "Voyager"', in Proc of the 1st Conf on Computer Simulation of Musical Creativity, (2016).
[29] Bhavya Mor, Sunita Garhwal, and Ajay Kumar, 'A systematic literature review on computational musicology', Archives of Comp. Methods in Engineering, 1–15, (2019).
[30] Music Tech Fest. https://musictechfest.net, 2019.
[31] Jean-Jacques Nattiez, Music and Discourse: Toward a Semiology of Music, Princeton University Press, 1990.
[32] Brian O'Neill, Andreya Piplica, Daniel Fuller, and Brian Magerko, 'A knowledge-based framework for the collaborative improvisation of scene introductions', in Int Conf on Interactive Digital Storytelling, pp. 85–96, (2011).
[33] Vilayanur S. Ramachandran and Edward M. Hubbard, 'Hearing colors, tasting shapes', Scientific American, 288(5), 53–59, (2003).
[34] Robert Rosen, Anticipatory Systems: Philosophical, Mathematical & Methodological Foundations, Pergamon Press, 1985.
[35] Robert Rowe, Interactive Music Systems: Machine Listening and Composing, The MIT Press, Cambridge, MA, 1993.
[36] Robert Rowe, 'Split levels: Symbolic to sub-symbolic interactive music systems', Contemporary Music Rev., 28(1), 31–42, (2009).
[37] Gijsbert Stoet, 'PsyToolkit: A software package for programming psychological experiments using Linux', Behavior Research Methods, 42(4), 1096–1104, (2010).
[38] Belinda Thom, 'Interactive improvisational music companionship: A user-modeling approach', User Modeling and User-Adapted Interaction, 13(1-2), 133–177, (2003).
[39] Oscar Thörn, Peter Fogel, Peter Knudsen, Luis de Miranda, and Alessandro Saffiotti, 'Anticipation in collaborative music performance using fuzzy systems: A case study', in Proc of the 31st Swedish AI Society Workshop, (2019).
[40] Oscar Thörn, Peter Knudsen, and Alessandro Saffiotti, 'Human-robot artistic co-creation: A study in improvised robot dance', in Int Symposium on Robot and Human Interactive Communication, (2020).
[41] William F Walker, 'A computer participant in musical improvisation', in Proc of the ACM SIGCHI Conf on Human Factors in Computing Systems, pp. 123–130, (1997).
[42] Nan-ning Zheng, Zi-yi Liu, Peng-ju Ren, Yong-qiang Ma, Shi-tao Chen, Si-yu Yu, Jian-ru Xue, Ba-dong Chen, and Fei-yue Wang, 'Hybrid-augmented intelligence: Collaboration and cognition', Frontiers of Information Technology & Electronic Engineering, 18(2), 153–179, (2017).