              Freedom of Movement: Generative Responses to Motion Control

                                              Kate Compton, Michael Mateas
                                                  Expressive Intelligence Studio
                                               University of California, Santa Cruz
                                             kecompto@ucsc.edu, mmateas@ucsc.edu




                          Abstract

Generative methods provide rich, emergent ways to deal with many kinds of data. In this paper, we explore projects that listen to human motion and respond through emergent generative art in ways that are inspired by dance and puppetry.


Figure 1: Squishy Touchscreen. Rainbow-ified impressions of hands formed by remapping the Kinect's greyscale depth data to a more interesting colorspace.


                        Introduction

How do we respond to a body in motion? There are many things in the world that respond to a body in motion, for example, a dancer's physical motion:

• dance costumes, or dance toys like fire poi, physically moved by or attached to the dancer, and subject to forces like drag, momentum, and centripetal force, depending on their materials.

• fields around the performer, as when the dancer wades through water, smoke, or tall grass, or disturbs curtains as they move.

• a human partner, moving their body in response to their perception of their partner's movement. An audience can sense tension, force, and connection, even if the two bodies never touch.

• physically unattached collaborators who, like the human dance partner, "listen" to the movement and respond. These may be musicians or lighting directors who react collaboratively with the movements, or non-human works like interactive projections.

All of these phenomena "listen" to the movement of the dancer and respond in some way. As designers of generative systems, we can build systems that operate like any of these real-world responsive systems: our systems can be costumes, fields, physically-connected agents, or expressively-connected agents, or they can combine the responsive properties of any of these examples. Observing the range of reactive systems that occur in dance practice reminds us not to limit ourselves to only one kind of "dance partner". In this paper, I reflect on some works where insights from dance (and other movement arts, like puppetry) inform how I can use computers to listen to movement, and to respond to, collaborate with, or amplify that movement.

This paper is emphatically not about discrete detection and categorization of gesture. Though we have now spent most of a decade with moderately effective motion tracking (Kinect, Wii, Leapmotion, Oculus Touch), none of these devices have sparked the motion-control revolution that each one seemed to promise. In previous work (Compton and Mateas 2017), I explored how this is driven by a common inability to deal computationally with an input stream that is not a sequence of discretely occurring (and discretely valued) events. There is a broad range of research on performing discrete gesture detection with devices like the Leapmotion (Marin, Dominio, and Zanuttigh 2014; Potter, Araullo, and Carter 2013), because we can imagine it being used to create the sequence-of-events input that we so often use in our interactive experiences (especially games). However, with a continuous multi-dimensional stream of motion data, discrete techniques like if-statements and categorizations compress the data and lose the continuous fluid quality of the original motion.

Instead, this paper is about how we can use a variety of algorithms (some "artificial intelligence", some not; this paper won't quibble about the definition) to listen and respond to continuous body movement.

                     Listening to Motion
                                                                   How can we listen to a body in motion? As mentioned ear-
                                                                   lier, artists have many sensors from which to choose. Some
are relatively basic, like accelerometers, gyroscopes, distance sensors, and bend sensors. Some sensors operate by processing image data, often via machine learning or various statistical methods, from either a single camera or multiple cameras (often assisted by invisible infrared projections), and several achieve "dead reckoning" by combining camera, GPS, and accelerometer data.

It is easy to list the ways that we can listen to motion. But let us instead examine what motion we listen to, and why. Lived human experience informs us that some forms of motion feel better than others. For example, holding the arms extended and still is wearying (Nielsen et al. 2003). Yet many Kinect experiences used that pose as a UI technique to simulate a button press. Dynamic, loosely-controlled swinging of the arms feels better than stiff precision, but it was underutilized in Kinect games because it couldn't be used to drive traditional UI elements.

One of my first Kinect projects (presented at SF Bay Area 3D Vision and Kinect Hacking, 2/1/2012) took advantage of this. In Kinect Poi, the player used their arms to swing digital fire poi, which left trails of sparks and stars as they swung. They could then retrace their trails to collect the stars left behind on a previous swing. This had several advantages. The poi were simulated as particles with continuous acceleration forces, so even when the Kinect sensing momentarily dropped (a frequent occurrence with older models), the particle continued to move smoothly, without any of the glitching of one-to-one control. Using force-based control, rather than position-based control, created a natural "anti-aliasing" effect for the motion input. Finally, the perceived weight of a player's hand increases as they swing their arm, creating the weighted, force-based feedback that was missing from most Kinect experiences. The motion that this art used was the type of motion that felt best for an interactor, and the interaction/"game" was built around that, rather than the other way around.
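A minimal sketch of this force-based control (in TypeScript; the spring and drag constants are illustrative tuning values, not the project's actual code):

    // Force-based poi control: the sensed hand position steers the particle
    // through a spring force rather than setting its position directly, so a
    // dropped tracking frame (target === null) just lets the poi coast on its
    // momentum.
    type Vec2 = { x: number; y: number };

    class PoiParticle {
      pos: Vec2 = { x: 0, y: 0 };
      vel: Vec2 = { x: 0, y: 0 };

      update(target: Vec2 | null, dt: number) {
        if (target !== null) {
          const stiffness = 8.0; // how hard the poi is pulled toward the hand
          this.vel.x += stiffness * (target.x - this.pos.x) * dt;
          this.vel.y += stiffness * (target.y - this.pos.y) * dt;
        }
        const drag = 0.98; // damping that smooths sensor jitter ("anti-aliasing")
        this.vel.x *= drag;
        this.vel.y *= drag;
        this.pos.x += this.vel.x * dt;
        this.pos.y += this.vel.y * dt;
      }
    }

Because the hand only ever applies a force, noisy or missing samples are integrated away rather than rendered directly.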
Lack of haptic feedback or tactile resistance is common in motion control experiences, but it is not unavoidable. In Squishy Touchscreen (2010, https://vimeo.com/217033311), a user interacts with a soft spandex membrane stretched over a wooden frame. A laser projector backprojects an image onto the membrane, and a Kinect, placed under the projector, maps the deformation of the membrane into a grayscale image. This project was inspired by Kinect musical instruments (like Tim Thompson's Space Palette, https://spacepalette.com/) in which the user waves their hands through the air. Few instruments, the theremin being a notable exception, lack tactile resistance feedback in this way, so I wanted to create an instrument that you could feel pushing back. Spandex acts as a spring, with resistance that increases as you press harder against it. It felt good to press against it, to stroke the screen and feel the drag against your fingers. Additionally, the Kinect could see any deformation of the surface, so the user could press their palm, fingers, face, or any object into the screen, and it would change the character of the deformation.

In another early prototype touch-"screen" (circa 2005), users dragged their fingers through a tabletop full of black sesame seeds (a webcam could see fingertips through the glass bottom of the table). The resistance and physical properties of the seeds provided haptic feedback, and also produced a very satisfying sound and smell when disturbed. Pressing a finger harder into the tabletop made a bigger "blob" for the image detection to track, and the size of the area of contact could also be felt by the user through the texture contrast between seed and glass.
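A minimal sketch of that pressure-from-blob-area trick, assuming a greyscale webcam frame in which finger contact against the glass shows up bright; the window size and threshold here are hypothetical:

    // Count bright pixels near a tracked fingertip: a harder press flattens
    // more of the finger against the glass, so a larger blob area serves as
    // a rough pressure signal.
    function blobAreaAt(
      frame: Uint8Array, width: number, height: number,
      cx: number, cy: number, radius = 20, threshold = 128
    ): number {
      let area = 0;
      for (let y = cy - radius; y <= cy + radius; y++) {
        for (let x = cx - radius; x <= cx + radius; x++) {
          if (x < 0 || y < 0 || x >= width || y >= height) continue;
          if (frame[y * width + x] > threshold) area++; // bright pixel = contact
        }
      }
      return area; // grows roughly with finger pressure
    }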
In Kinect Poi, Squishy Touchscreen, and the Black Sesame Table, the "sensors" themselves are dance partners. Their physical properties (resistance, centrifugal force, inertia, sound, even scent) and the way they encourage interaction (through softness, texture, the pleasure of inertial movement) form a connection with the interactor even before we consider how the digital components of the systems will respond to that input.

                     Responding to Motion

In "Movement-Based Game Guidelines", Mueller and Isbister encourage motion control game designers not to focus intently on game-style interaction: "Start by providing feedback on the movement itself, without too much worrying about scores, multipliers etc. [..] Provide several forms of feedback, but do not require players to engage all of them: better to let players choose which ones to engage based on their cognitive abilities, and shift their attention as mastery grows." (Mueller and Isbister 2014). It can be hard to structure a game with win-conditions (or even resource-logic) around continuous playful motion control, so the fun of these experiences must often come from emergence and surprise rather than control or competition.

Fortunately, one of the major advantages (and disadvantages) of a thick stream of continuous motion data is that while it cannot be handled by the if-statements of traditional game logic, it does provide an excellent seed for generative methods. Often these methods need not even be complex to be engaging: they merely have to be responsive. The most successful "app" on the Squishy Touchscreen was a rainbow-remapping of the depth field, which I had made as a debug utility. As one pressed harder into the screen, the colors changed around the press, like reaching one's hand into a rainbow-colored geode. The stretchiness of the spandex also deformed around whatever was pressed into it, so a hand would become outlined in rings of hand-shaped color. The material was "responding" to the interaction, even before the algorithm got to it.
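The remapping itself can be sketched in a few lines. This is an illustrative reconstruction rather than the original debug utility, assuming the Kinect depth image arrives as one 0-255 greyscale value per pixel:

    // Illustrative depth-to-rainbow remap: each greyscale depth value picks a
    // hue, so pressing deeper into the spandex sweeps the colors around the
    // hue wheel. "depth" is one 0-255 value per pixel; "out" is RGBA.
    function depthToRainbow(depth: Uint8Array, out: Uint8ClampedArray): void {
      for (let i = 0; i < depth.length; i++) {
        const hue = (depth[i] / 255) * 360;
        const [r, g, b] = hslToRgb(hue, 1.0, 0.5); // fully saturated rainbow colors
        out[i * 4 + 0] = r;
        out[i * 4 + 1] = g;
        out[i * 4 + 2] = b;
        out[i * 4 + 3] = 255; // opaque
      }
    }

    // Standard HSL-to-RGB conversion, returning 0-255 channel values.
    function hslToRgb(h: number, s: number, l: number): [number, number, number] {
      const c = (1 - Math.abs(2 * l - 1)) * s;
      const x = c * (1 - Math.abs(((h / 60) % 2) - 1));
      const m = l - c / 2;
      const [r, g, b] =
        h < 60 ? [c, x, 0] : h < 120 ? [x, c, 0] : h < 180 ? [0, c, x] :
        h < 240 ? [0, x, c] : h < 300 ? [x, 0, c] : [c, 0, x];
      return [(r + m) * 255, (g + m) * 255, (b + m) * 255];
    }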
More complex responses can be designed by passing the continuous motion stream into a pipeline of generative methods (see (Compton and Mateas 2017) for a catalog of the range of generative methods and how they can be used to compose such a pipeline). Idle Hands (http://galaxykate.com/apps/idlehands/) was designed as an installation at an art festival, projected on a wall, which users control via a Leapmotion. Giant hands (the projection was about 10 feet across) clenched and unclenched even when the controller was idle. When controlled by the user, the hands mostly faithfully reflected their hand gestures. The Leapmotion's data stream was a continuous (estimated 40fps) feed of 3D vector positions for all finger joints, which was compressed to 2D points, used to construct a Voronoi diagram of regions, and colored as shaded fragments. A few flocks of particles were gravitationally attracted to the fingertips to further accentuate the user's motion. The response to the user data was straightforward, but the directness made the experience rather visceral (many reported a vividly tactile sensation of "crinkling" the background, without touching anything).


Figure 2: Idle Hands, a Voronoi diagram generated from Leapmotion 3D finger-joint points, with particle-system stars.
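A minimal sketch of the joints-to-Voronoi step, using the d3-delaunay library as one possible implementation (the installation's actual Voronoi code is unspecified):

    // Illustrative joints-to-Voronoi step.
    import { Delaunay } from "d3-delaunay";

    type Joint3D = [number, number, number];

    function voronoiCells(joints: Joint3D[], width: number, height: number) {
      // Compress the 3D joint positions to 2D screen points (drop depth).
      const points = joints.map(([x, y]): [number, number] => [x, y]);
      // Build the Delaunay triangulation, then clip its Voronoi dual to the screen.
      const voronoi = Delaunay.from(points).voronoi([0, 0, width, height]);
      // One polygon per joint, each ready to be colored as a shaded fragment.
      return points.map((_, i) => voronoi.cellPolygon(i));
    }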
One interesting pattern that I discovered with Idle Hands was the importance of flexibility of control. Like the Kinect-controlled poi, any motion control system has moments where tracking drops frames, or the interactor walks away. In these moments, a virtual agent can take over for the interactor. This can be done to patch or smooth the motion, but it can also be used to playfully resist the user's control. Is this a direct mirror, or an intelligent partner mimicking your movements, only to break free with some improvisation? Previous projects (Long et al. 2017) have experimented with the dance partner as an autonomous agent. In my most recent work, I experiment with using the autonomy of the dance agent as a continuous slider.

My most recent motion-reactive art is a set of dance-reactive puppets (http://www.galaxykate.com/apps/puppet/). This project was funded by the Google Creative Lab as an experiment in using their Posenet Tensorflow detection algorithm (Oved). This algorithm produces skeleton data similar to the Kinect's, only instead of using infrared dots and multiple cameras, it uses machine learning on normal RGB webcam data, potentially reaching a vastly larger audience than the Kinect ever has. This project was inspired by Nick Cave's Sound Suits (Cave et al. 2010), dance costumes which distort the body into strange shapes and become partners to the dancers, and by the Muppets, whose responsive materials (Kermit's flailing arms, Animal's chicken feathers, Janice's satin hair) become part of their character and movement. The idea was to create generative dance suits whose animation would respond to, exaggerate, and reinterpret the movement of a user (as detected through Posenet), just as the physical forms of the Sound Suits and the Muppets do with their dancers and puppeteers. I adopted some ideas from the Spore creature creator (Hecker), making the bodies based on tubes, but created more emergent and surprising forms from those tubes (super-ellipse cross-sections, wrinkles or oscillations along the length of the tube). I also used Spore's wiggles-and-jiggles system of secondary motion (and past work on secondary motion in generative animation (Compton and Mateas 2015)) as inspiration to create a variety of motion-controlled "parts": yoyos, bobbling balloon spheres, fringe, and luxuriantly flowing feathers. Each kind of dance accessory "listens" and "responds" in different ways (to fast acceleration or slow), depending on where it occurs (head or hands or legs) and on its physical properties.


Figure 3: Generative dance puppets with a variety of secondary-motion gesture-enhancing dance accessories. When animated, the feathers and orbs emphasize and elaborate on the human user's motion like a dancer's costume.
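A damped spring is one simple way to get this kind of secondary motion. The sketch below, with hypothetical tuning constants, shows a single accessory point lagging behind and overshooting its attachment joint:

    // One accessory point (an orb, a feather tip) tethered to its attachment
    // joint by a damped spring. Fast accelerations make it overshoot and
    // oscillate; slow movement lets it settle, so each accessory "responds"
    // differently depending on the motion and its constants.
    type Point = { x: number; y: number };

    class Accessory {
      pos: Point;
      vel: Point = { x: 0, y: 0 };
      constructor(anchor: Point, public stiffness = 20, public damping = 4) {
        this.pos = { x: anchor.x, y: anchor.y };
      }
      // Called each frame with the joint position from the skeleton stream.
      update(anchor: Point, dt: number) {
        const ax = this.stiffness * (anchor.x - this.pos.x) - this.damping * this.vel.x;
        const ay = this.stiffness * (anchor.y - this.pos.y) - this.damping * this.vel.y;
        this.vel.x += ax * dt;
        this.vel.y += ay * dt;
        this.pos.x += this.vel.x * dt;
        this.pos.y += this.vel.y * dt;
      }
    }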
At the time of development, I did not have access to the live stream of data from the webcam (that part of the technology was unreleased), so I had to create synthetic data from a dancing virtual forward-kinematics-animated body in 3.JS, which could generate the data that we anticipated receiving from the machine-learned component. I set up the data-generating virtual body so that it could be driven via a Leapmotion (translating finger movement into joint movement), the potential future Posenet data, music data, or some combination of all three. It also had a slider that controlled the blend between independent noise-controlled data (autonomy) and user-provided data (mirror mode). One could imagine this slider being driven by anything, including the agent's "boredom" with the player's lack of movement.
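A minimal sketch of that slider, assuming a smooth noise function (noise1D below stands in for, e.g., Perlin noise) and a pose represented as joint angles:

    // The autonomy slider as a per-joint blend between noise-driven movement
    // and the user's movement. noise1D is assumed here, not implemented.
    declare function noise1D(t: number): number; // smooth noise, roughly in [-1, 1]

    function blendPose(userAngles: number[], time: number, autonomy: number): number[] {
      // autonomy = 0: pure mirror of the user; autonomy = 1: independent dancer.
      return userAngles.map((userAngle, i) => {
        const autoAngle = noise1D(time * 0.5 + i * 100) * Math.PI; // own noise channel per joint
        return (1 - autonomy) * userAngle + autonomy * autoAngle;
      });
    }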
The released version of Posenet yields only 2D point data, not the 3D of the Kinect, so I developed a very rudimentary system to jiggle the 3D synthetic body until it matches the 2D detected points (http://www.galaxykate.com/apps/puppet/posematch). It is far from accurate, yet as with much of the work discussed here, it seems that accurate movement is far less important than continuous, reactive, responsive, and emergent movement, and it is an enjoyable "presence" to interact with.
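One way to read "jiggle until it matches" is as random-perturbation hill climbing over the skeleton's joint angles. The sketch below assumes a project() function (forward kinematics plus camera projection); the actual posematch code may differ:

    // Perturb the joint angles at random; keep a candidate only if its
    // projected joints land closer to the detected 2D points.
    type Point2D = [number, number];

    declare function project(angles: number[]): Point2D[]; // FK + projection, assumed

    function matchError(angles: number[], detected: Point2D[]): number {
      // Sum of squared distances between projected joints and detected points.
      return project(angles).reduce((sum, [px, py], i) => {
        const [dx, dy] = detected[i];
        return sum + (px - dx) ** 2 + (py - dy) ** 2;
      }, 0);
    }

    function jiggleToMatch(angles: number[], detected: Point2D[], tries = 50): number[] {
      let best = angles.slice();
      let bestErr = matchError(best, detected);
      for (let t = 0; t < tries; t++) {
        const candidate = best.map(a => a + (Math.random() - 0.5) * 0.1); // small random jiggle
        const err = matchError(candidate, detected);
        if (err < bestErr) { best = candidate; bestErr = err; } // keep improvements only
      }
      return best;
    }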
                         Conclusion

Dance (and movement arts like puppetry) have a long and developed history of turning human movement into something pleasurable, alien, expressive, or transcendent. Movement augmentation both listens to and responds to user movement. Some patterns of listening/responding are costumes, fields, physically-connected agents, and expressively-connected agents.

Both Squishy Touchscreen and the Black Sesame Table were fields that the user disturbed with their motion, creating eddies and deformations in the physical interface and also in the digital response. Idle Hands is a field which the user manipulates with their fingers; while it lacks the physically responsive interface, the seamlessly responsive interaction created an impression of physical touch. The Kinect Poi and the dance puppets are costumes: they are linked to the user's movement, but have secondary motion that amplifies and elaborates on that motion. Like the virtual partner of the Lumen.AI project, the puppet is an autonomous agent, but it can move continuously between being an autonomous partner and a costume as its agency is dialed up or down. My projects do not have a strong expressively-connected agent component (I prefer more directly-reactive agent action), but this would be an avenue for exploration for these projects or any other generative movement-reactive system, such as a musical or visual background improvisation based on some generative interpretation of user movements. These categories only begin to outline the range of ways that interaction in real-world dance/movement arts can inspire and inform digital systems; much more exploration in the vast world of dance culture is possible.


Figure 4: Data flow diagram of the puppet project. Blue outlines are sensors. Pink outlines are processed input streams. Cyan outlines are output graphics. Green outlines are autonomous or puppeted control.


                         References

Cave, N.; Cameron, D.; Eilertsen, K.; and McClusky, P. 2010. Nick Cave: Meet Me at the Center of the Earth. Yerba Buena Center for the Arts. Exhibition at the Seattle Art Museum 2011.

Compton, K., and Mateas, M. 2015. A Different Kind of Physics: Interactive evolution of expressive dancers and choreography. In Computational Creativity in Games Workshop, ICCC.

Compton, K., and Mateas, M. 2017. A generative framework of generativity. In Experimental AI in Games Workshop, AIIDE.

Hecker, C. My Liner Notes for Spore. http://chrishecker.com/My_liner_notes_for_spore. Accessed: 2018-08-10.

Long, D.; Jacob, M.; Davis, N.; and Magerko, B. 2017. Designing for socially interactive systems. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition, 39–50. ACM.

Marin, G.; Dominio, F.; and Zanuttigh, P. 2014. Hand gesture recognition with leap motion and kinect devices. In Image Processing (ICIP), 2014 IEEE International Conference on, 1565–1569. IEEE.

Mueller, F., and Isbister, K. 2014. Movement-based game guidelines. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2191–2200. ACM.

Nielsen, M.; Störring, M.; Moeslund, T. B.; and Granum, E. 2003. A procedure for developing intuitive and ergonomic gesture interfaces for HCI. In International Gesture Workshop, 409–420. Springer.

Oved, D. Real-time Human Pose Estimation in the Browser with TensorFlow.js. https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5. Accessed: 2018-08-10.

Potter, L. E.; Araullo, J.; and Carter, L. 2013. The leap motion controller: a view on sign language. In Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration, 175–178. ACM.