<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Deep Sound of a Global Tweet: Sonic Window #1</article-title>
        <subtitle>(a Real Time Sonification)</subtitle>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Como Conservatory, Electronic Music Composition Department</institution>
        </aff>
      </contrib-group>
      <fpage>2</fpage>
      <lpage>8</lpage>
      <abstract>
        <p>People listen to music and then share their emotions by writing about that music on Twitter; a piece of software analyses the tweets that take music as their subject and reports information about these written emotions. I wrote a Max/MSP patch that sonifies in real time the global emotion experienced by the Twitter users who write about music. The patch produces new music, and this new music produces emotions in turn; if you want, you can write about that on Twitter too, so that the social network produces new emotions from its previous emotions: an AI-generated emotion.</p>
      </abstract>
      <kwd-group>
        <kwd>Sonification</kwd>
        <kwd>twitter</kwd>
        <kwd>emotion</kwd>
        <kwd>code</kwd>
        <kwd>data network</kwd>
        <kwd>real time</kwd>
        <kwd>electronic music</kwd>
        <kwd>installation</kwd>
        <kwd>interactive</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>This is a real-time audio installation in Max/MSP. It is a sonification of an abstract
process: people around the world writing on Twitter about their music listening experiences on
the web. My purpose is not to sonify the effects of this process on the
musical structure of the songs listened to, like a real-time-echo-web-mix or a new version of
John Cage’s “Imaginary Landscape No. 4”, but to sonify the structure of the process itself, with
its language transducers, its media and its rules. For this purpose, I created a musical
instrument played by the data, like a wind chime, but here all the sounds are created by
the web data itself, as if the material of a wind chime were the wind itself. It is like an
open window on the web listeners, where you can observe the action of listening to and
talking about music, but you do not hear the music being listened to; instead you search for
connections, reactions and interactions among the listeners, the transmission media and the code
language.</p>
    </sec>
    <sec id="sec-2">
      <title>Listeners – Writers</title>
      <p>First of all, there is the listening process and the tweeting process of the Twitter users: people
listen to music and then write tweets about it. This is a human thought about listening to
music, expressed in a verbal language and syntax. People think, listen and interact with
the process and the media through a GUI that translates an information flux. This
translation is from a human thought (with its specific language and syntax) to a universal
ASCII number code, or numeric streams; the characters are the same, but the syntax changes
(ASCII numbers are the common atoms [letters] among different languages) according
to an internet data code: language and syntax change, but the information does not change
(Fig 1.).</p>
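      <p>As an illustration of this first translation step, the following sketch (Python, not part of the
original Max/MSP patch; the tweet text is an invented example) shows how a written thought becomes
the ASCII number stream that the rest of the system works on.</p>
      <preformat>
# Minimal sketch: a tweet's characters become a stream of ASCII/Unicode code points.
tweet = "Listening to the Beatles right now"   # invented example text
ascii_stream = [ord(c) for c in tweet]
print(ascii_stream[:7])   # [76, 105, 115, 116, 101, 110, 105]
</preformat>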
      <p>At this point of the process (the point that I want to sonify), there is a transduction of the
language: the code data from Twitter is analysed and the information flux changes.
The language and syntax (the code) stay the same, but the information changes: the information is now
about the process itself; not the original information thought and posted on the web by the
Twitter users, but a new thought about that first action: the new information is always a
consequence of the previous thoughts (Fig 2.).</p>
    </sec>
    <sec id="sec-3">
      <title>Information Used</title>
      <sec id="sec-3-1">
        <title>For this sonification I used only one kind of information:</title>
      </sec>
      <sec id="sec-3-2">
        <title>1) the Artist Name ;</title>
        <p>2) the last 10 Twitter IDs that wrote about the artist (names translation in a code
language).</p>
        <p>In this way (fig. 3), I have a list of 11 names in two different languages (spoken and
codified), and these names are connected by a common thought in different ways: the 10
ID names write about the musical actions created by the Artist Name. The names change, but
the process is always the same, like the musical language… these data become, in
different ways, both the sound itself and the score.</p>
        <p>I used the “last” ten ID numbers, scaled from -1 to 1, as the amplitudes of a wave-table
(each ID = 18 digits, so 10 IDs = 180 values; each value held for 5 samples, giving 900 samples
stored in the wave-table) (Fig 4).</p>
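        <p>A minimal sketch of this wave-table construction follows (Python rather than Max/MSP; the
digit-to-amplitude mapping and the factor-5 expansion are my assumptions, chosen only to match the
figures quoted above: 18 digits per ID, 10 IDs, 900 samples).</p>
        <preformat>
def ids_to_wavetable(twitter_ids):
    """Map the last ten 18-digit Twitter IDs to 900 amplitudes in [-1, 1]."""
    assert len(twitter_ids) == 10
    table = []
    for tid in twitter_ids:
        for digit in str(tid).zfill(18):           # 18 digits per ID
            amp = (int(digit) / 9.0) * 2.0 - 1.0   # scale 0..9 to -1..1
            table.extend([amp] * 5)                # hold each value for 5 samples: 180 values become 900
    return table

wavetable = ids_to_wavetable(["150096854900678656"] * 10)   # example ID repeated 10 times
print(len(wavetable))   # 900
</preformat>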
        <p>They are updated every 2 seconds, according to a choice made by the Social Genius
programmers, so I programmed a linear interpolation of the ID values between the
update triggers, to make the process appear continuous.
The wave-table is then played back in a loop at a frequency that varies cyclically from
0.1 to 1.5 Hz; it is a musical representation of the Twitter code web rhythm (a
background noise from a portion of the web) morphed by the Twitter users almost in real
time. At the end of the process, I use a cyclic stereo pan and a cyclic fade-in/fade-out to
give more of a sense of “web data waves”, as if the web data were a living entity with its
own cycles of life (Fig 5).</p>
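        <p>The two behaviours just described (ramping the ID values between 2-second updates and looping
the wave-table at a rate that sweeps between 0.1 and 1.5 Hz) can be sketched as follows; the 60-second
sweep cycle is an assumption, since the text does not state the period of the rate variation.</p>
        <preformat>
import math

def interpolate_tables(old, new, t):
    """Linearly interpolate each sample from the old table to the new one, t in 0..1."""
    return [a + (b - a) * t for a, b in zip(old, new)]

def loop_frequency(seconds, cycle_s=60.0):
    """Loop playback rate sweeping cyclically between 0.1 and 1.5 Hz (cycle length assumed)."""
    return 0.8 + 0.7 * math.sin(2 * math.pi * seconds / cycle_s)

old_table, new_table = [0.0] * 900, [1.0] * 900      # placeholder wave-tables
elapsed = 0.5                                        # seconds since the last Twitter update (example)
current = interpolate_tables(old_table, new_table, min(elapsed / 2.0, 1.0))
print(current[0], loop_frequency(elapsed))           # interpolated sample and current loop rate
</preformat>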
        <p>1) The Artist Name is read aloud by the computer's speech synthesis software (at each new name,
the voice that reads the name changes randomly, depending on the speech
software); the speech signal then passes into a granular synthesis module with a buffer
of 10 seconds:</p>
      </sec>
      <sec id="sec-3-3">
        <title>Twitter IDs control in real time:</title>
        <p>• grain duration (Min/Max),
• rests between grains (Min/Max-voice numbers),
• grain amplitudes and
• grain pan-pot (MIDI).
In this way, the multitude of Twitter users' voices listening to the artists, and also the
translation process, are represented; at the beginning of the process, the spoken words
are translated into ASCII numbers and these numbers are the code “letters-phonemes”; at
that point, with granular synthesis, I deconstruct the spoken languages (English,
French, Italian, etc.) into phonemes (a musical language).</p>
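        <p>A sketch of this mapping is given below; the scaling ranges are illustrative assumptions,
since the text lists only which parameters the Twitter IDs control, not the exact ranges used in
the patch.</p>
        <preformat>
def id_to_grain_params(twitter_id):
    """Map the first digits of an 18-digit ID to granular-synthesis parameters (illustrative scaling)."""
    d = [int(c) for c in str(twitter_id).zfill(18)]
    dur_min = 10 + d[0] * 5                                   # ms
    rest_min = 5 + d[2] * 2                                   # ms
    return {
        "grain_dur_ms": (dur_min, dur_min + 10 + d[1] * 10),  # grain duration (min, max)
        "rest_ms": (rest_min, rest_min + 5 + d[3] * 5),       # rest between grains (min, max)
        "amplitude": d[4] / 9.0,                              # grain amplitude, 0..1
        "pan": d[5] / 9.0 * 2.0 - 1.0,                        # grain pan-pot, -1 (left) .. 1 (right)
    }

print(id_to_grain_params("150096854900678656"))
</preformat>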
        <p>Language conversions:
• thoughts (spoken language) → words written on the keyboard → ASCII code → web code
data
• web code data → ASCII code → spoken language → phonemes (musical language)</p>
        <p>2) The previously obtained “twitter ID background noise” is then filtered by the “last
artist name”, as if the name could sculpt its profile into the noise: the noise passes through a
bank of up to 18 band-pass filters, and the centre frequency of each filter is given by a
conversion of ASCII numbers into frequencies.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Example: Beatles = 66 101 97 116 108 101 115 (ASCII-Midi Pitches) = 369 2793 2217 6644 4186 2793 6271 Hz (Filter bank center frequencies)</title>
        <p>The bandwidths of the filters are given by one of the Twitter IDs (scaled from 0.1 to 4
Hz) that is listening to the Beatles:</p>
        <p>Twitter IDs: 1 5 0 0 9 6 8 5 4 9 0 0 6 7 8 6 5 6</p>
        <p>Bandwidths: 0.8 2.4 0.4 0.4 4. 2.8 3.6 2.4 2. 4. 0.4 0.4 2.8 3.2 3.6 2.8 2.4 2.8 Hz</p>
        <p>Each Artist Name is updated every 2 seconds, so the timbre changes, without
interpolation, every 2 seconds, like a “bell signal”, and this gives a regular beat to the time (Fig 6).</p>
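        <p>The centre frequencies in this example correspond to the standard MIDI-to-frequency (mtof)
conversion, with A4 = MIDI note 69 = 440 Hz; the bandwidth formula in the sketch below is an
assumption inferred from the quoted values (digit 0 gives 0.4 Hz, digit 9 gives 4.0 Hz) and is shown
only to reproduce the example.</p>
        <preformat>
def mtof(midi_note):
    """Standard MIDI note number to frequency in Hz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

ascii_codes = [ord(c) for c in "Beatles"]           # [66, 101, 97, 116, 108, 101, 115]
print([int(mtof(n)) for n in ascii_codes])          # [369, 2793, 2217, 6644, 4186, 2793, 6271] Hz

twitter_id = "150096854900678656"
print([0.4 * (int(d) + 1) for d in twitter_id])     # 0.8, 2.4, 0.4, 0.4, 4.0, 2.8, ... Hz (assumed formula)
</preformat>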
        <p>One of the last ID listeners provides a small number of samples, stored in a wave-table
and played immediately; the amplitudes, which are not scaled and range from 0 to 9, are
afterwards clipped to 1 (wave-shaping), with a linear interpolation between samples.
The signal is then passed through a resonant band-pass filter with a central frequency
of 2000 Hz, a bandwidth of 23 Hz and a resonance factor of 3; this gives a “percussive
mallet” sound (see the sketch after this paragraph). A quartic envelope is applied to the signal, which has been extracted
from the artist name, and the resulting signal enters a variable delay with a feedback
of 1%. This is because “the latest artist” scrolls back in position over time… and 2 seconds
later it is no longer “the latest one”, but it is still being listened to on Twitter; in this case, it
does not disappear but becomes a kind of “aura”, which gives this sense of slowing down and
fading, passing through a granular synthesis.
• Frequencies of each partial
• Detuning factor of each partial
• Relative amplitudes of each partial
• Relative durations of each partial
• Relative attack times of each partial</p>
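        <p>A simplified sketch of the band-pass “mallet” filtering and the quartic envelope follows
(Python rather than MSP; the routing is condensed into a single chain here, and the sample rate,
excitation length and envelope duration are assumptions, while the 2000 Hz centre frequency, 23 Hz
bandwidth, clipping to 1 and quartic shape come from the description above).</p>
        <preformat>
import math

SR = 44100   # assumed audio sample rate

def bandpass_coeffs(fc, bw, sr=SR):
    """Band-pass biquad coefficients (RBJ cookbook, constant skirt gain)."""
    w0 = 2 * math.pi * fc / sr
    q = fc / bw
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [q * alpha / a0, 0.0, -q * alpha / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def mallet(excitation, dur_s=0.25):
    """Clip the excitation to 1, band-pass it at 2000 Hz / 23 Hz, apply a quartic decay."""
    b, a = bandpass_coeffs(2000.0, 23.0)
    n = int(SR * dur_s)
    x = [min(float(v), 1.0) for v in excitation] + [0.0] * max(0, n - len(excitation))
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for i, xi in enumerate(x[:n]):
        yi = b[0] * xi + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        out.append(yi * (1.0 - i / n) ** 4)        # quartic decay envelope
        x2, x1, y2, y1 = x1, xi, y1, yi
    return out

burst = mallet([int(d) for d in "15009685"])       # a short "percussive mallet" burst
print(len(burst))
</preformat>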
        <p>As the IDs come from different people, I applied a granular synthesis to simulate the
simultaneous presence of 5 different people (the IDs) producing the same
sound together.</p>
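        <p>As a rough illustration of this idea, the sketch below builds five slightly different grain
streams from the same underlying rhythm, one per ID; all timing and amplitude ranges are invented
for the example.</p>
        <preformat>
import random

def grain_stream(voice_seed, n_grains=20):
    """One voice: a list of (onset_s, duration_s, amplitude) grains with per-voice jitter."""
    rng = random.Random(voice_seed)
    return [(i * 0.1 + rng.uniform(0.0, 0.02),   # slightly shifted onsets
             rng.uniform(0.02, 0.08),            # grain duration
             rng.uniform(0.5, 1.0))              # grain amplitude
            for i in range(n_grains)]

voices = [grain_stream(seed) for seed in range(5)]   # five simultaneous "listeners"
print(len(voices), "voices,", len(voices[0]), "grains each")
</preformat>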
        <p>It is possible to listen to this audio installation from different computers and
headphones, or to diffuse the sound over several loudspeakers, obtaining a double
interaction: on the other side of the web the listeners create the sounds, and on this side
other people diffuse this sound in a room, and it may be that Twitter users who are
present in the room can change the sound itself…</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Technical Details</title>
      <p>This software is a Max/MSP patch; depending on the externals used in the patch so far,
you can launch it either as a standalone application or inside Max/MSP, and at the moment it
runs only on Apple computers. If you listen to it directly from your computer's audio
device, an internal audio routing is necessary; in fact, the audio from the system speech
player is not sent directly to the output, but only after being processed by Max/MSP.
The routing can be done internally with software such as "Soundflower" (from Cycling '74) or
"Jack", or externally with a sound card.</p>
      <p>Fig. 10 shows the main block diagram.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Puckette</surname>
          </string-name>
          , Miller.
          <source>Theory and Techniques of Electronic Music</source>
          . San Diego: World Scientific Press,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Roads</given-names>
            <surname>Curtis</surname>
          </string-name>
          . The Computer Music Tutorial. Cambridge: MIT Press,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3. Hermann Thomas,
          <string-name>
            <given-names>Hunt</given-names>
            <surname>Andy</surname>
          </string-name>
          ,
          <string-name>
            <surname>John G. Neuhof</surname>
          </string-name>
          <article-title>The Sonification Handbook</article-title>
          . Berlin: Logos Publishing House,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>