A Hyperstitional Machine Appropriating Human Culture
in an Evolutionary Fashion
Marinos Koutsomichalis
Department of Multimedia and Graphic Arts, Cyprus University of Technology, 30 Arch. Kyprianou, 3036, Limassol, Cyprus


Abstract
This paper presents an account of an ecosystemic evolutionary pipeline for the generation of original multimedia content without employing fitness functions or other evaluation schemata. It examines an ongoing art/research endeavour concerning an experimental creative machine that is to be ‘plugged in’ to human culture through the WWW in order to produce its own multimedia content autonomously and unattendedly. The machine employs natural language graphs, as well as intelligent Comprehenders that analyse the retrieved media to further the evolutionary cycle with new queries. It also features a series of algorithmic Composers that mash up and manipulate the retrieved media in various fashions. The overall system is being designed to empirically probe the hypothesis of genuine nonhuman creativity that is built computationally upon the re-synthesis and the re-appropriation of human culture (through its WWW footprint). The project and the underlying method are announced herein, and a series of technical idiosyncrasies are examined in some detail. Theoretical considerations regarding the overall approach are further drawn with reference to critical post-humanism.

                                          Keywords
                                          Ecosystemic Evolution, Nonhuman Creativity, Hyperstition, Creative Machine, Multimedia




1. Introduction
Recent literature features numerous resources discussing algorithmic systems for the unattended
composition of multimedia content. These range from art/creative endeavors that may or may not
involve interaction [1] [2] [3] [4], to bioinformatics [5] and robotics research [6]. The question of
unattended and self-generative evolutionary composers has been researched in various contexts such
as genre-specific music composition [7], or evolutionary painters [8]. Literature and creative practice
are also abundant in algorithmic pipelines synthesising or reappropriating existent (third-party) me-
dia content. These range from (historical) examples of music composition employing prepared/found
melodies and/or audio snippets [9] to image mashups [10] and multimedia meta-creative systems [11].
Evolutionary creative systems of this sort traditionally involve a fitness function or some evaluation
schema. They are typically dealt with as meta-heuristics optimization systems, i.e., systems meant to
discover those heuristics that are necessary for another subsystem to solve an optimization problem.
As discussed in [12], most evolutionary algorithms (EA) for art still follow this approach despite it
often being hard, irrelevant, or altogether impossible to define meaningful fitness/evaluation func-
tions in such cases. In genuine artistic contexts, the goal is, more often than not, to generate new and
original (or otherwise aesthetically or poetically intriguing) content for the sake of it. In this vein,
while ‘selection of the fittest’ approaches do provide valid and readily available means to implement
or evaluate art-related EAs, it remains debatable whether they eventually succeed in generating gen-
uine artistic value in real-life contexts. This is further discussed in [13] and [14], where fitness-based
Joint Proceedings of the ICCC 2020 Workshops (ICCC-WS 2020), September 7-11 2020, Coimbra (PT) / Online
" m.koutsomichalis@cut.ac.cy (M. Koutsomichalis)
 0000-0002-3876-9064 (M. Koutsomichalis)
                                       © 2020 Copyright for this paper by its authors.
                                       Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
EAs are generally shown to concern imitation rather than original creative behaviour.
While evaluation and selection still govern the study of evolutionary systems, biological evolution is not
exhausted by Darwinian/Lamarckian processes of selection or mutation—see, e.g., [15]—and not all
cultural phenomena can always be understood, or described, in terms of meta-heuristics optimisation
or problem solving. Insofar as art EAs are concerned, there are some documented cases that eschew
or undermine the idea of fitness altogether. Consider, e.g., Biles’ jazz melody composer that pivots on
an intelligent crossover operator [16], or Dorin’s ‘interactive’ approach that relies on human-driven
selection [17]. Another trend is to rely instead on ‘endogenous’ fitness functions—that is, ones that
are defined, and that operate, in some local context rather than with respect to the aesthetic outcome.
Even if fitness is still sustained here—both as a concept and as a technical means—a system of this sort can
no longer be thought of as evolving towards ‘fittest’—that is, ‘better’ in any subjective sense—works
of art. A relevant example is Bird’s drawing robot where a fitness function rewards local behaviour
with respect to pen position [18].
   Most importantly, an entirely new paradigm has emerged over the last couple of decades: that of
‘ecosystemic’ evolution where the focus shifts to the design of an environment, an array of compo-
nents therein, and carefully designed interactions between the former and latter as well as in-between
the components. Components within an ecosystem are typically interconnected so that they can
change their environment in some fashion. To give an example, in the Audible Eco-Systemic In-
terface (AESI) project a network of interdependencies is enacted among individual sound-synthesis
subsystems and the external physical space hosting the artistic performance [19]. Accounts of several
other art/creative ecosystemic evolution systems can be found in [14] and [20].
All the above-mentioned approaches, and in general any algorithmic system for art that is intended
as genuinely creative, are still largely thought of with respect to human creativity (even if there is still
no consensus on what exactly the latter may stand for, or consist of). The question of a genuinely ‘non-
human’ computational creativity—i.e., one that dismisses human notions of creativity altogether—has
not been a major research concern hitherto and still lacks integrated treatment. This is, nevertheless,
the research focus of this endeavour: to investigate (through design) the hypothesis of a machine that
draws upon human culture in order to generate ‘nonhuman’ art of its own. Hyperstition Bot is an ex-
perimental system that ever-crawls the WWW in order to produce its own digital content autonomously
and unattendedly. It loosely draws inspiration from the concept of ‘hyperstition’, brought forth by
CCRU’s Nick Land and referring to “narratives able to effectuate their own reality through the work-
ings of feedback loops, generating new sociopolitical attractors” (Williams, 2013 as quoted in [21])
or “[. . . ] as ideas [that] function causally to bring about their own reality [. . . ] transmuting fictions
into truths”1 . There have been some other attempts to creatively explore this concept in various
fashions—not always artistic, however. Examples are discussed throughout [22].
Hyperstition Bot aspires to mash up, transfigure, re-synthesise, remediate, and re-appropriate—
that is, utilise for a different purpose than the intended one—human cultural content with respect
to emergent cybernetic orderings and in a hyperstitional fashion. From a technical perspective, the
system is a complex multi-modal ecosystemic EA. It employs neither a fitness function nor any
evaluation schemata. It rather comprises several hardware and software components that intertwine
and cross-interact with one another to further the evolution cycle while simultaneously generating
multimedia content of various kinds. Through an artistic lens, the process is envisioned as speculative
(being a hypothesis for what nonhuman creativity could look like), meta-phenomenological (it cannot
be reduced merely to phenomenological experiences thereof), and post-geographical (since content
utilised may be of all possible geographical origins).

    1 Nick Land in an interview by Delphi Carstens, retrieved December 15, 2019 from http://xenopraxis.net/readings/carstens_hyperstition.pdf

Figure 1: Submodules and their interplay within the evolutionary process.
   Having contextualised the project, the following section outlines the machine in question, overviews
the specifics of its implementation, and presents the first incarnation of the work. A Discussion sec-
tion follows. This treatise sums up with concluding remarks and notes on future work.


2. Method
The multimedia output of Hyperstition Bot is the emergent outcome of a complex cybernetic ecosystem
that is distributed over several hardware and software modules. Fig. 1 illustrates the main software
submodules that have been implemented in a series of programming languages (Python, SuperCol-
lider, Bash). The overall architecture draws on a complex evolutionary database management sys-
tem that is discussed in great depth in [23]. It additionally features several submodules to perform
multimedia synthesis and maintenance. The evolutionary cycle is as follows: a series of Crawlers
iterate a genome to retrieve natural language queries and use them to download digital media from
User-Generated-Content (UGC) repositories of interest; then, a series of Comprehender submodules
analyse the retrieved media to generate a new generation of genotypes, while a series of Composer
modules process and mash-up the former to generate multimedia output unattendedly.
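Rendered as a minimal Python sketch, the cycle looks roughly as follows; `crawlers`, `comprehenders`, and `merge` are hypothetical stand-ins for the actual submodules, which the following paragraphs formalise:

```python
# Minimal sketch of the evolutionary cycle; `crawlers`, `comprehenders`, and
# `merge` are hypothetical stand-ins for the actual submodules.

def evolution_cycle(seed_genome, crawlers, comprehenders, merge):
    genome = seed_genome                      # G_1 := <S>, the user-defined seed
    while True:
        phenotype = []                        # P_n: media retrieved this cycle
        for crawl in crawlers:                # Crawlers query UGC repositories
            phenotype.extend(crawl(genome))   # with tokens drawn from the genome
        # Comprehenders map each retrieved file back to a genomic graph
        # (in the actual system each Comprehender handles its own media type)
        genomes = [comprehend(item) for item in phenotype
                   for comprehend in comprehenders]
        genome = merge(genomes)               # uniform merger: the next G_n
        yield phenotype, genome
```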
In formal terms, the 𝑛-th generation genotypic population 𝐺𝑛 is

$$G_n := \begin{cases} \bigcup \Phi_\Lambda(P_{n-1}) & n \in \mathbb{N}_{>1} \\ \langle S \rangle & n = 1 \end{cases} \tag{1}$$
where ⟨𝑆⟩ indicates the seed—the very first user-defined genome—𝑃𝑛−1 the phenotypic population of
the previous generation (the digital files retrieved over the WWW in the previous evolution cycle), and
ΦΛ ∶ Λ+ → 𝐺 + a Comprehender submodule mapping phenotypic content of type Λ to new genomes.
Comprehenders are of varying complexity/intelligence with respect to the media type Λ they are
designed to understand. Three modules of this sort are already implemented: (1) Φ𝑖𝑚𝑎𝑔𝑒 employing the
Inception-v3 Deep Convolutional Network [24] and trained on the ImageNet LSVRC-2012 challenge
data set [25], (2) Φ𝑡𝑒𝑥𝑡 relying on the Rapid Automatic Keyword Extraction (RAKE) algorithm [26] to
‘understand’ and summarise natural language text, and (3) Φ𝑡𝑎𝑔 that just converts tags/keywords into
genomes.
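To illustrate the kind of mapping such a Comprehender performs, the sketch below approximates Φ𝑡𝑒𝑥𝑡 using the third-party rake_nltk package (an assumption; the paper specifies only the RAKE algorithm itself), reducing a document to weighted natural-language tokens:

```python
# Sketch of a Phi_text-style Comprehender: RAKE keyword extraction mapped to
# (token, weight) pairs, with RAKE scores normalised into [0, 1] weights.
# rake_nltk is an assumed stand-in for the actual implementation.
from rake_nltk import Rake

def phi_text(document: str, max_tokens: int = 10):
    rake = Rake()
    rake.extract_keywords_from_text(document)
    scored = rake.get_ranked_phrases_with_scores()[:max_tokens]
    if not scored:
        return []
    top = scored[0][0]  # highest RAKE score, used to normalise the weights
    return [(phrase, score / top) for score, phrase in scored]
```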
   Individual genomes—as well as populations thereof—are weighted undirected graphs comprising
natural language tokens. They have the form 𝐺 ≡ (𝑉, 𝑊, 𝐸), where 𝑉 is a set of vertices {𝑣1, 𝑣2, …, 𝑣𝑘}
with 𝑘 ∈ ℕ>0 and 𝑣𝑖 ∈ 𝑈∗ ∧ 𝑣𝑖 ≠ ∅ for all 𝑖 (𝑈∗ denoting all finite (sequences of) words over the unicode
character set), 𝑊 is the set of their scalar weight attributes {𝑤1, 𝑤2, …, 𝑤𝑘} with 𝑤𝑖 ∈ [0, 1], and 𝐸 is a
(possibly empty) set of pairs {𝑣𝑝, 𝑣𝜓} for some 𝑣𝑝, 𝑣𝜓 ∈ 𝑉 and 𝑣𝑝 ≠ 𝑣𝜓. All ΦΛ generate genomic graphs
of this sort for each of the files they visit. As explained in detail in [23], at the end of each cycle all
available individual genomes are combined into a uniform merger thereof (𝐺𝑛) so that any cross-associations
between their individual edges and weights are resolved. Such an architecture makes it possible to
retrieve and manipulate content in many different native languages.
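The merging step can be sketched with networkx (an assumed stand-in; [23] details the actual database-backed scheme): vertices recurring across individual genomes have their weights averaged, while their edge sets are unioned:

```python
# Sketch of merging individual genomic graphs G = (V, W, E) into one uniform
# genome: recurring vertices have their [0, 1] weights averaged, edges unioned.
import networkx as nx

def merge_genomes(genomes):
    merged, counts = nx.Graph(), {}
    for g in genomes:
        for v, data in g.nodes(data=True):
            counts[v] = counts.get(v, 0) + 1
            if v in merged:
                merged.nodes[v]["weight"] += data["weight"]
            else:
                merged.add_node(v, weight=data["weight"])
        merged.add_edges_from(g.edges())
    for v, n in counts.items():               # turn running sums into means
        merged.nodes[v]["weight"] /= n
    return merged
```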
   While ΦΛ are responsible for generating a new genomic population from a given phenotype 𝑃𝑖 , a
series of Crawler submodules 𝜆0 , 𝜆1 , … , 𝜆𝑗

$$\lambda_j : G^+ \times W \to \bigcup \Lambda^+, \qquad (Q, R) \mapsto P \tag{2}$$
are responsible for producing the latter. They do so employing natural language queries 𝑄 retrieved
over a given genome 𝐺 to download digital media from the WWW (𝑊), so that the resulting 𝑛-th
generation phenotype 𝑃 is 𝑃𝑛 ∶= ⋃𝑗 𝜆𝑗(𝐺𝑛−1). Note that an individual Crawler 𝜆𝑗 may retrieve digital
content of several different types (thus mapping content to ⋃ Λ+ rather than Λ+), so that, e.g., 𝜆YouTube
retrieves audio, video, and text (user-comments and meta-data). As of writing the system comprises
𝜆YouTube, 𝜆Flickr, 𝜆Shootka, 𝜆FreeSound, 𝜆SoundCloud, 𝜆MLDb, 𝜆Thingiverse, 𝜆Wikipedia, 𝜆ConceptNet, and 𝜆WordNet,
that download audio, video, images, music, prose, lyrics, tags, lemmas, and 3D models from those
repositories. It should be noted that Eq. 2 is time-dependent. UGC repositories are volatile so that 𝜆𝑗
will most likely return different results for the same input 𝑄 if called at different times.
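By way of example, a crawler in the spirit of 𝜆Wikipedia can be sketched over the public MediaWiki API; the endpoint and parameters below are standard MediaWiki ones, while the surrounding logic is an assumption:

```python
# Sketch of a lambda_Wikipedia-style Crawler: turn a query string into plain
# text documents fetched over the public MediaWiki API.
import requests

API = "https://en.wikipedia.org/w/api.php"

def crawl_wikipedia(query: str, limit: int = 3):
    hits = requests.get(API, params={
        "action": "query", "list": "search", "srsearch": query,
        "srlimit": limit, "format": "json",
    }).json()["query"]["search"]
    docs = []
    for hit in hits:
        pages = requests.get(API, params={
            "action": "query", "prop": "extracts", "explaintext": 1,
            "titles": hit["title"], "format": "json",
        }).json()["query"]["pages"]
        docs.extend(p.get("extract", "") for p in pages.values())
    return docs  # phenotypic content of type Lambda = text
```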
   The above described evolutionary process is implemented in a local network comprising four micro-
computers. One of them is responsible for retrieving digital content over WWW, ‘comprehending’ it,
renewing the genome population 𝐺 for each generation 𝑛, and distributing the resulting phenotype 𝑃𝑛
among all four. Each of the latter features a series of local helper submodules that handle I/O operations
and disk maintenance as needed. Multimedia synthesis is then carried out by a series of Composer
modules 𝑇𝑗 ∶ 𝑃Λ+ → Ψ that manipulate Λ type content to generate new original Ψ type content. As of
writing, in all implemented 𝑇𝑗 , Λ ≡ Ψ; there are, however, concrete plans for multi-modal composers.
𝑇𝑗 typically process all the available digital files and not just those of the latest (𝑛-th) generation. While
new content is pushed to the various hardware nodes, older generation 𝑃Λ files are eventually deleted
by local maintenance routines. Nevertheless, once the machine has been online for a few evolution
cycles, some 𝑇𝑗 will almost certainly work on a local pool 𝐿Λ ∶= ⋃𝑛𝑖=𝑛−𝑘 𝑃Λ𝑖 with 𝑘, 𝑛 ∈ ℤ+ ∧ 𝑘 < 𝑛. 𝐿Λ
comprises content from all [𝑛 − 𝑘, 𝑛] phenotypic populations. The left part of Alg. 1 is an overview of
the evolution cycle in pseudo-code.
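Such a maintenance routine might look roughly as follows; the one-directory-per-generation layout is an illustrative assumption:

```python
# Sketch of a local maintenance routine: keep only the last k phenotypic
# generations on disk, assuming one sub-directory per generation index.
from pathlib import Path
import shutil

def prune_pool(pool_dir: str, k: int):
    generations = sorted((d for d in Path(pool_dir).iterdir() if d.is_dir()),
                         key=lambda d: int(d.name))
    for old in generations[:-k]:              # everything before the last k
        shutil.rmtree(old)
    return generations[-k:]                   # the surviving local pool L_Lambda
```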
While the genome mutates in this fashion, a number of Composer submodules mash up or other-
wise manipulate the retrieved media files to generate new content. The implementation features four
such submodules, namely 𝑇𝑣𝑖𝑑𝑒𝑜, 𝑇𝑡𝑒𝑥𝑡, 𝑇𝑎𝑢𝑑𝑖𝑜, and 𝑇3𝐷. If Σ is a stochastic operation to select
an element 𝜎𝑖 from 𝐿, a simplified model for the video Composer is 𝑇𝑣𝑖𝑑𝑒𝑜(𝜏) ∶= 𝜎𝑖(𝜏 + 𝜌), where
𝜎𝑖 ∶= Σ(⋃𝑛𝑛−𝑘 𝑃𝑣𝑖𝑑𝑒𝑜, 𝜏), 𝜏 ∈ ℤ+ denotes discrete time, 𝜌 ∈ ℤ is a random discrete offset (so that
video content starts playing back at frame 𝜏 + 𝜌), and 𝜚 ∈ ℤ+ is a random discrete duration after which a
new 𝜎𝑖 is to be selected for playback—it should also hold that 𝜏 + 𝜌 + 𝜚 ≤ ‖𝜎𝑖‖ (the length of 𝜎𝑖). The upper
right half of Alg. 1 describes this simple mash-up process in pseudo-code. In the actual implemen-
tation, Σ is of some complexity, combining chance operations with some hard-coded synthesis rules.
The lower right half of Alg. 1 presents 𝑇3𝐷 —an experimental Composer for solid 3D models drawing
on the synthesis pipeline described in [27]. A simplified formal model is 𝑇3𝐷 ∶= ⋃𝑛𝑗=0 (𝑍 ∘ 𝑋 ∘ 𝑅)(𝑂𝑗),
with 𝑍, 𝑋, 𝑅 being linear transformations in 3D space that randomly translate, scale, and rotate (re-
spectively) a random selection of individual solid models {𝑂0, 𝑂1, …, 𝑂𝑛} ⊆ ⋃𝑛𝑛−𝑘 𝑃3𝐷.

Algorithm 1 Evolution cycle, 𝑇𝑣𝑖𝑑𝑒𝑜, and 𝑇3𝐷 in pseudo-code

Evolution cycle:
    𝐺 ← ⟨𝑆⟩
    𝜆[] ← {𝜆YouTube, 𝜆FreeSound, …}
    Φ[] ← {Φtext, Φimage, Φtag}
    loop
        𝑃 ← []
        for 𝑖 = 0 to ‖𝜆‖ do
            append 𝜆𝑖(𝐺) to 𝑃
        end for
        𝐺′ ← []
        for 𝑖 = 0 to ‖𝑃‖ do
            append ΦΛ(𝑃[𝑖]) to 𝐺′
        end for
        𝐺 ← ⋃ 𝐺′
    end loop

𝑇𝑣𝑖𝑑𝑒𝑜:
    Σ ← a complex stochastic operation
    loop
        𝜎 ← Σ(⋃𝑛𝑛−𝑘 𝑃𝑣𝑖𝑑𝑒𝑜, 𝜏)
        𝜚 ← a random number in (0, ‖𝜎‖)
        𝜌 ← a random number in (𝜚, ‖𝜎‖)
        playback 𝜎 from 𝜌 to 𝜌 + 𝜚
    end loop

𝑇3𝐷:
    𝑂 ← a random subset of 𝐿3𝐷
    𝑅 ← []
    for 𝑖 = 0 to ‖𝑂‖ do
        𝐴 ← 𝑂[𝑖]
        randomly translate 𝐴
        randomly scale 𝐴
        randomly rotate 𝐴
        append 𝐴 to 𝑅
    end for
    return ⋃ 𝑅
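The 𝑇3𝐷 listing above translates almost directly into Python. The sketch below uses the trimesh package as an assumed stand-in for the actual pipeline of [27], applying random rotation (𝑅), scaling (𝑋), and translation (𝑍) to a random selection of solids before merging them:

```python
# Sketch of T_3D := union of (Z . X . R)(O_j): randomly rotate, scale, and
# translate a random subset of solid models, then merge the results.
import random
import numpy as np
import trimesh

def t_3d(model_paths, max_models: int = 5):
    chosen = random.sample(model_paths, min(max_models, len(model_paths)))
    parts = []
    for path in chosen:
        mesh = trimesh.load(path, force="mesh")
        rot = trimesh.transformations.rotation_matrix(
            angle=random.uniform(0, 2 * np.pi), direction=np.random.rand(3))
        mesh.apply_transform(rot)                           # R: random rotation
        mesh.apply_scale(random.uniform(0.5, 2.0))          # X: random scaling
        mesh.apply_translation(np.random.uniform(-10, 10, 3))  # Z: translation
        parts.append(mesh)
    # merge by concatenation; a true boolean union would need a CAD backend
    return trimesh.util.concatenate(parts)
```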

   𝑇𝑎𝑢𝑑𝑖𝑜 is an experimental adaptation of a rather complex system for algorithmic mashups that is
described in great detail in [28]. The architecture, inter alia, comprises a non-real-time machine lis-
tening pipeline performing onset detection and spectral feature extraction on all available content.
For each audio file 𝑙 ∈ 𝐿𝑎𝑢𝑑𝑖𝑜, it generates a vector 𝑑⃗𝑙 registering the particular moments of some no-
table change (in pitch, rhythm, or timbre), and a feature matrix 𝐷⃗ ≡ [𝑐⃗, 𝑢⃗, 𝑠⃗]ᵀ with weighted mean
frequency 𝑐⃗ ≡ [𝑐1, 𝑐2, …, 𝑐𝑘], magnitude-weighted variance 𝑢⃗ ≡ [𝑢1, 𝑢2, …, 𝑢𝑘], and spectral complex-
ity 𝑠⃗ ≡ [𝑠1, 𝑠2, …, 𝑠𝑘] (𝑘 ∈ ℤ+) per some regular time interval. Individual generative ‘sonic events’ 𝐸𝑘
of various different kinds (e.g., (non-)deterministic sequences of shorter sounds, or ‘sustained sonic
atmospheres’) are defined employing audio file fragments, with respect to their associated break-
points in 𝑑⃗𝑙 and to their origin (the UGC repository they were downloaded from). While 𝐸𝑘 are dynamically
added to a scheduling queue, an intelligent composition submodule juxtaposes them in real-time and
with respect to the feature matrices associated with the audio snippets in use. In this fashion, the
particular patterns governing temporal appearance, repetition, duration, and acoustic
localisation are all configured employing features from 𝐷⃗. Some example output (stereo versions)
from this experimental Audio Composer can be listened to at https://tinyurl.com/t-audio-examples.
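The listening stage can be approximated as follows with librosa (an assumption; [28] details the actual machine-listening pipeline), using onset detection for the breakpoint vector 𝑑⃗ and spectral centroid, bandwidth, and flatness as rough analogues of the three feature rows of 𝐷⃗:

```python
# Sketch of the machine-listening stage: onset breakpoints d and a feature
# matrix D ~ [centroid, bandwidth, flatness]^T per regular analysis frame.
# librosa is an assumed stand-in for the actual analysis pipeline of [28].
import librosa
import numpy as np

def analyse(path: str):
    y, sr = librosa.load(path, sr=None, mono=True)
    # d: moments of notable change, approximated here by onset times (seconds)
    d = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    # D: one feature vector per frame, stacked into a 3 x k matrix
    D = np.vstack([
        librosa.feature.spectral_centroid(y=y, sr=sr),   # ~ weighted mean freq.
        librosa.feature.spectral_bandwidth(y=y, sr=sr),  # ~ magnitude-wtd. var.
        librosa.feature.spectral_flatness(y=y),          # ~ spectral complexity
    ])
    return d, D
```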
Figure 2: The bot. Figure 3: Hardware Submodules. Figure 4: Installation.


𝑇𝑡𝑒𝑥𝑡 utilises the ‘textgenrnn’2 system for intelligent character-level text synthesis that is based on
a Multiplicative Recurrent Neural Network (MRNN) topology. This method is described in great detail
in [29]. Given a sequence of input vectors (𝑥⃗1, 𝑥⃗2, …, 𝑥⃗𝑇), a sequence of predictive softmax distributions
$P(x_{t+1} \mid x_{\le t})$ is obtained at the output vectors (𝑜⃗1, 𝑜⃗2, …, 𝑜⃗𝑇). The language modelling objective is
to maximise the total log probability of the training sequence, $\sum_{t=0}^{T-1} \log P(x_{t+1} \mid x_{\le t})$. This MRNN
topology is continually re-trained, every 𝑛 iterations, on some small 𝑙 ⊆ 𝐿𝑡𝑒𝑥𝑡. 𝑇𝑡𝑒𝑥𝑡 is then scheduled to
generate new strings of original text at irregular time intervals.

    2 Retrieved December 15, 2019 from http://github.com/minimaxir/textgenrnn
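In terms of the textgenrnn API, the re-train/generate loop reduces to roughly the following sketch; sample size, epoch count, and temperature are illustrative assumptions:

```python
# Sketch of T_text: periodically re-train a character-level textgenrnn model
# on a small sample of the local text pool, then emit new strings.
import random
from textgenrnn import textgenrnn

def t_text(text_pool, sample_size: int = 50):
    model = textgenrnn()                       # starts from pre-trained weights
    sample = random.sample(text_pool, min(sample_size, len(text_pool)))
    model.train_on_texts(sample, num_epochs=2, gen_epochs=0)
    return model.generate(n=3, temperature=0.5, return_as_list=True)
```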
   Fig. 2 illustrates the machine in its eventual realisation, with the various hardware submodules
hosted in a block of concrete, with several cables to interface with the WWW, monitor screens, loud-
speakers, and other terminals. The machine also features a built-in thermal printer. The overall design
is hybrid and rough-hewn, also embodying a certain kind of ‘material dialectics’—such an approach
towards interface design is further discussed in [30]. Fig. 4 illustrates the machine in its first public
showcase in the context of the Children of Prometheus international group exhibition that took place in
NEME Gallery (Limassol, CY) in 2019. In this particular incarnation multimedia synthesis is carried out
by just three distinct 𝑇𝑣𝑖𝑑𝑒𝑜 instances and a 𝑇𝑡𝑒𝑥𝑡 printing out algorithmically generated text every few minutes.
Seen through an ecosystemic lens, Hyperstition Bot comprises several individual software compo-
nents of varying complexity that interact with one another and an external environment. This exter-
nal environment is, in reality, three different overlaid ones: (1) a local network comprising various in-
tertwined hardware and software submodules, (2) the WWW, and (3) the physical space accommodat-
ing Hyperstition Bot and its generated multimedia content. Accordingly, the proposed architecture
is principally grounded on a complex cybernetic network of cross-interactions, inter-dependencies
and intertwinments so that its output is emergent, hybrid and distributed—the system’s operation
cannot be traced, or reduced, to the specifics of its software or hardware modules alone.


3. Discussion
Hyperstition Bot has been designed to interrogate and appropriate (i.e., to use otherwise than intended)
human culture’s WWW footprint in a computational and creative fashion. It is envisioned as a cre-
ative machine that may be plugged in to a largely human-oriented WWW and creatively re-synthesise
media content to bring forth its own alternate, non-human and ‘hyperstitional’ one. UGC reposito-
ries are an excellent way to account both for human culture in its immense trans-geographical and
trans-socio-political contingency, as well as for the ways it may be cybernetically ‘comprehended’
and re-appropriated by machines. YouTube features billions of videos on all possible subjects,
themes, and genres uploaded for all possible kinds of purposes by all possible kinds of individuals.
Maybe more importantly, it also features meta-data and long threads of structured user comments
that further articulate the numerous possible cultural connotations and ramifications of the featured
content. Wikipedia is an immense codified and structured database of user-contributed knowledge
that covers pretty much all aspects of human existence, from science to popular culture, from history
to poetry, and from esoteric religions to design. SoundCloud comprises millions of music works of all
possible styles and by all professional, semi-professional, and amateur creators. Music content is fur-
ther embellished with meta-data and (timeline-defined) user comments. UGC of this sort uncondition-
ally represent human culture in its sheer eclecticism as well as in the particular ways in which humans
themselves interpret, understand, and reflect upon it in all (in)formal, (non)casual and (un)structured
fashions—still, they far exceed our capacity to engage with any significant dimension of them. Without
the aid of machines and sophisticated algorithmic techniques it is largely impossible for humans alone
to ever make sense of such complex/broad cybernetic phenomena, while it is, of course, debatable to
what extent we can do so even with the aid of the former.
   The creative machine described here is destined to manoeuvre and to appropriate human-oriented
cultural content the way it resonates over the WWW, yet in ways humans alone would not be able
to pursue. It does so in an ecosystemic fashion, establishing the necessary conditions for emergent
cybernetic behaviour to arise. In this vein, Hyperstition Bot is not meant to imitate human creativity
(even if it may accidentally do so at times). It rather celebrates a certain approach towards ‘com-
putational poetics’—i.e., an inherently deliberate computational and nonhuman take on multimedia
synthesis. But if this is so, what would originality and creativity mean in such a context? And how
would they relate to human notions thereof? While such affairs cannot be substantially elaborated
upon here, there are two important concerns that ought to be immediately outlined.
Firstly, the cybernetic method described hereinbefore is primarily meant as an experiment ques-
tioning human authority/exclusivity both in establishing an own media culture and in building upon
it. As such, Hypersition Bot stands together with several other efforts of sorts, that range from like-
minded artistic endeavours to the entire philosophical project of critical post-humanism in its various
manifestations. Despite their breadth and disparity, the cornerstone of nearly all flavours of critical
post-humanism is that humans are ever prosthetic, distributed, and ever produced by, and in relation
to, social, environmental, technological, and other nonhuman traits [31, 19–22]. It follows that to
further trust/allow machines to re-purpose our own cultural production in computational and non-
human fashions is actually a very ‘human’ thing to do—through a post-human lens, it is what casts us
as humans in the first place. The machine’s attempt to establish an oeuvre of its own can be said
to aid the formulation of a broader and more critical way to understand our species. In the case
of Hyperstition Bot such a stance is deeply echoed in the algorithmic design and its operation, which
functionally embed the post-human thesis. It is argued that this is only possible through a decen-
tralised ecosystemic design approach that accelerates cross-interactions in between several different
‘species’, modalities, and (sub)domains. This trait reverberates throughout the machine described herein
in all bottom-up and top-down fashions: it is made of hybrid and intertwined algorithmic, electronic, and
physical components in a way that brings forth ‘material dialectics’ of some sort, and its operation
is emergent and contingent, resonating across digital, analogue, and physical domains and through
acoustic, visual, haptic, semantic, and other modalities.
The second concern to delineate relates to the notion of hyperstition. Land’s original inspiration
traces to Dawkins and his acolytes who popularised the idea that ‘memes’—i.e. mental elements—
control a carrier’s thought and behaviour in an ontogenetic fashion and much like genes do to biolog-
ical bodies [32]. The validity of such a hypothesis is, notwithstanding, questionable in the first place.
For instance, Ingold shows that the very claim that some genealogical/inheritance-based mechanism
(exclusively) governs the development of a (biological) organism to some significant extent is feeble: it
succumbs altogether under closer examination in that it requires suitable environmental conditions
in the first place [33, 1–17]. Accordingly, he suggests that it is the latter we should primarily
focus upon when it comes to understanding, or controlling, how organisms end up being what they are.
It is beyond the scope of this paper to delve into such a debate, of course. Yet, Hyperstition Bot’s op-
eration is ascribed an additional dimension if seen through a genealogical-contra-ecosystemic prism.
The system is specifically designed to fuel a hyperstitional mode of operation: it probes retrieved
content to identify and isolate semantic/symbolic associations that would link it to other content and so
forth, until a hitherto latent (or merely fictional) narrative concretely emerges. The overall system
can be said to succeed in such a hyperstitional expedition—at least to some plausible extent—in that
it does pursue the cybernetic bearings of our WWW presence and in that it does produce original
content re-synthesising them in a generative fashion. However, at the very same time and in tandem
with this, it explores the hybrid environmental conditions that make such a hyperstitional excursion
possible: these are its very own design, the WWW and certain UGC topologies within it, the particular
cybernetic infrastructures it relies upon to access and to retrieve content of interest, and so on. The
machine simultaneously pursues arbitrary congenital bearings waiting to be unfolded, and investi-
gates the conditions that make such an unfolding possible. It can then be said that it is made to operate
at the crux where ontogenesis and ecosystemic conditioning meet.


4. Conclusion
Hyperstition Bot is a creative multimedia system distributed over a complex hybrid network of soft-
ware/hardware subsystems that creatively explores and re-appropriates the digital footprint of human
culture over the WWW in a ‘hyperstitional’ manner. It autonomously and unattendedly synthesises
original multimedia content in a generative fashion and showcases it in-situ. ‘Original’ here stands
for appropriated, manipulated and remixed content that attains agency based on how the machine
re-purposes it in a cybernetic fashion. The system’s architecture is ecosystemic so that its multi-
media output is emergent and contingent; it cannot be explained merely in terms of the constituent
subsystems. It pivots on technology that is largely designed to ‘defy’ human creativity in pursuit of
experimental and nonhuman ‘computational poetics’—even if it is yet unclear what it means to be
creative in a nonhuman fashion.
   The specifics of the various comprising submodules as well as of their interplays and intertwine-
ments are discussed in some detail hereinbefore. The overall operation is shown to pivot on an ecosys-
temic evolutionary paradigm that does not employ a fitness function or other evaluation schemata of
sorts. A couple of concerns surface upon critical reflection on the process and the particular poetics at
play: (a) the question of challenging human authority/exclusivity in building upon human culture,
and (b) the question of ‘hyperstitional’ behaviour at the crux of ontogenetic and ecosystemic tactics.
Affairs of this sort are open research questions that call for thorough investigation in both critical and
analytical fashions, and, maybe most importantly, speculatively and through the design of relevant
creative pipelines. Hyperstition Bot is such an experimental endeavour, being designed to fumble about
the hypothesis of genuine nonhuman creativity.
Considering (a), while it is indeed suggested that Hyperstition Bot challenges human authority/ex-
clusivity in accessing and building upon human culture in a straightforward manner, and while such
a claim is to some extent supported pragmatically by means of the machine’s overall archi-
tecture and multimedia output, this is still a rather bold claim to make and should be taken with a
grain of salt. What exactly human authority/exclusivity may stand for in an advanced digital age
is still rather vague—if not ill-formulated. Important investigations in this vein are still an ongoing
affair in several subdisciplines such as computational aesthetics, or critical post-humanism. Imple-
menting bidirectional functionality so that Hyperstition Bot may contribute back content of its own
(rather than merely ‘consume’ human culture) is an important future step towards better formulating
the question. So is the design of more complex creative machines of this sort. The working hypothesis is
that of a machine that (following a long tradition of autonomous algorithmic/generative art) would
eventually succeed in transcending human-specific notions of art/creativity altogether, setting out
a counter-culture of its own species. In principle, this is the scope of this endeavour: to speculate-
through-design on this hypothesis.
Considering (b), it is herein argued that Hyperstition Bot pivots on the inter-dependency between
the exploration of congenital ‘genotypic’ bearings that wait to be unfolded (the ontogenetic dimension),
the environmental conditions that make such an unfolding possible, and the particular ways in which
they presuppose, appropriate and establish one another. The overall endeavour could, therefore, be
thought of as a structured experiment that both relies upon, and at the same time interrogates, the
very conditions that make ‘hyperstitional’ creativity possible. Still, what exactly such a creativity is
and how it relates to known paradigms of human creativity cannot be answered now—not even
properly speculated upon. It remains an open research question that needs to be properly formulated
and treated in an integrated fashion, both theoretically and empirically.


5. Future Work
Future research primarily zooms in on the implementation of additional Composer, Comprehender, and,
to a lesser extent, Crawler submodules. 𝑇𝑎𝑢𝑑𝑖𝑜 and 𝑇3𝐷 are still in development and largely experi-
mental. There are concrete plans for a more intelligent 𝑇3𝐷 pivoting on AI and point-cloud represen-
tations of solid geometry that would be inspired by the method described in [34]. There are also plans
for cross-modal Composers, e.g. 𝑇𝑡𝑒𝑥𝑡→𝑖𝑚𝑎𝑔𝑒 or 𝑇𝑖𝑚𝑎𝑔𝑒→3𝐷. Comprehender submodules Φ𝑎𝑢𝑑𝑖𝑜 and
Φ𝑣𝑖𝑑𝑒𝑜 are also needed—even if the latter are rather complex and involved to design. A few
additional Crawler modules, e.g., 𝜆𝐼𝑛𝑠𝑡𝑎𝑔𝑟𝑎𝑚 or 𝜆𝑇𝑖𝑘𝑇𝑜𝑘, would also be nice additions. Most importantly,
future research zooms in on an entirely new class of submodules, namely Uploaders 𝑈𝑗, that would make
bidirectional interaction with selected UGC repositories possible. First priority is for 𝑈𝑌𝑜𝑢𝑇𝑢𝑏𝑒 and
𝑈𝑇𝑤𝑖𝑡𝑡𝑒𝑟, with more to follow. When this feature is implemented, Hyperstition Bot will be granted
the right to claim its place in our world, pollinating it with cultural content of its own species.


References
 [1] M. Cook, S. Colton, Redesigning computationally creative systems for continuous creation, in:
     Proceedings of the Ninth International Conference on Computational Creativity (Salamanca,
     Spain), 2018, pp. 32–39.
 [2] A. Das, B. Gambäck, Poetic machine: Computational creativity for automatic poetry generation
     in bengali., in: Proceedings of The Fifth International Conference on Computational Creativity
     (Ljubljana, Slovenia), 2014, pp. 230–238.
 [3] S. Colton, The painting fool: Stories from building an automated painter, in: Com-
     puters and creativity, Springer, Berlin/Heidelberg, Germany, 2012, pp. 3–38. doi:10.1007/
     978-3-642-31727-9\_1.
 [4] M. Koutsomichalis, A. Psarra, Computer-aided weaving: From numerical data to generative tex-
     tiles, in: Proceedings of the Conference on Electronic Visualisation and the Arts, BCS Learning
     & Development Ltd., Swindon, UK, 2015, pp. 122–123. doi:10.1145/2641248.2641281.
 [5] J. Hartler, M. Trötzmüller, C. Chitraju, F. Spener, H. C. Köfeler, G. G. Thallinger, Lipid Data
     Analyzer: unattended identification and quantitation of lipids in LC-MS data, Bioinformatics 27
     (2010) 572–577. doi:10.1093/bioinformatics/btq699.
 [6] M.-J. Kim, T.-H. Song, S.-H. Jin, S.-M. Jung, G.-H. Go, K.-H. Kwon, J.-W. Jeon, Automatically
     available photographer robot for controlling composition and taking pictures, in: 2010 IEEE/RSJ
     International Conference on Intelligent Robots and Systems, 2010, pp. 6010–6015. doi:10.1109/
     IROS.2010.5650341.
 [7] N. Collins, Musical form and algorithmic composition, Contemporary Music Review 28 (2009)
     103–114. doi:10.1080/07494460802664064.
 [8] G. Greenfield, Evolutionary methods for ant colony paintings, in: Workshops on Applica-
tions of Evolutionary Computation (Lausanne, Switzerland), 2005, pp. 478–487. doi:10.1007/
     978-3-540-32003-6\_48.
 [9] M. Koutsomichalis, Catalogue æsthetics: Database in and as music, in: Trends in Music Informa-
     tion Seeking, Behavior, and Retrieval for Creativity, IGI Global, Hersey, PA, 2016, pp. 258–277.
     doi:10.4018/978-1-5225-0270-8.ch012.
[10] M. Cook, S. Colton, Automated collage generation-with more intent., in: Proceedings of the
     Second International Conference on Computational Creativity (Mexico City, Mexico), 2011, pp.
     1–3.
[11] A. Eigenfeldt, M. Thorogood, J. Bizzocchi, P. Pasquier, Mediascape: Towards a video, music,
     and sound metacreation, Journal of Science and Technology of the Arts 6 (2014) 61–73. doi:10.
     7559/citarj.v6i1.129.
[12] C. Johnson, Fitness in evolutionary art and music: what has been used and what could be used?,
     in: Evolutionary and Biologically Inspired Music, Sound, Art and Design, Springer, Berlin/Hei-
     delberg, Germany, 2012, pp. 129–140. doi:10.1007/978-3-642-29142-5\_12.
[13] J. McCormack, Open problems in evolutionary music and art, in: EvoWorkshops 2005: Appli-
     cations of Evolutionary Computing, Springer, Berlin/Heidelberg, Germany, 2005, pp. 428–436.
     doi:10.1007/978-3-540-32003-6\_43.
[14] O. Bown, J. McCormack, Taming nature: tapping the creative potential of ecosystem models in
     the arts, Digital Creativity 21 (2010) 215–231. doi:10.1080/14626268.2011.550029.
[15] K. N. Laland, J. Odling-Smee, M. W. Feldman, Niche construction, biological evolution,
     and cultural change,        Behavioral and brain sciences 23 (2000) 131–146. doi:10.1017/
     S0140525X00002417.
[16] J. A. Biles, Autonomous GenJam: eliminating the fitness bottleneck by eliminating fitness, in:
     The 2001 GECCO Workshop on Non-Routine Design with Evolutionary Systems, San Francisco,
     CA, 2001, p. Paper 4.
[17] A. Dorin, Aesthetic fitness and artificial evolution for the selection of imagery from the mythical
     infinite library, in: European Conference on Artificial Life (Prague, Czech Republic), 2001, pp.
     659–668. doi:10.1007/3-540-44811-X\_76.
[18] J. Bird, P. Husbands, M. Perris, B. Bigge, P. Brown, Implicit fitness functions for evolving a
     drawing robot, in: Applications of Evolutionary Computation: EvoWorkshops 2008, Springer,
     Berlin/Heidelberg, Germany, 2008, pp. 473–478. doi:10.1007/978-3-540-78761-7\_50.
[19] A. Di Scipio, ‘sound is the interface’: from interactive to ecosystemic signal processing, Organ-
     ised Sound 8 (2003) 269–277.
[20] A. Dorin, A survey of virtual ecosystems in generative electronic art, in: The Art of
     Artificial Evolution, Springer, Berlin/Heidelberg, Germany, 2008, pp. 289–309. doi:10.1007/
     978-3-540-72877-1\_14.
[21] S. O’Sullivan, Accelerationism, hyperstition and myth-science, Cyclops 2 (2017) 11–44.
[22] CCRU, CCRU Writings 1997-2003, Urbanomic, Falmouth, U.K., 2017.
[23] M. Koutsomichalis, B. Gambäck, Evolvable media repositories: An evolutionary system to re-
     trieve and ever-renovate related media web content, in: Intelligent Computing-Proceedings
     of the Computing Conference, Springer, Berlin/Heidelberg, Germany, 2019, pp. 76–92. doi:10.
     1007/978-3-030-22868-2\_6.
[24] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, Rethinking the Inception architecture
     for computer vision, in: Proceedings of the IEEE Conference on Computer Vision and Pattern
     Recognition (Las Vegas, NV), 2016, pp. 2818–2826.
[25] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, ImageNet: a large-scale hierarchical image
     database, in: 2009 IEEE conference on computer vision and pattern recognition, 2009, pp. 248–
     255. doi:10.1109/CVPR.2009.5206848.
[26] S. Rose, D. Engel, N. Cramer, W. Cowley, Automatic keyword extraction from individual docu-
     ments, Text Mining: Applications and Theory 1 (2010) 1–20.
[27] M. Koutsomichalis, B. Gambäck, Generative solid modelling employing natural language un-
     derstanding and 3d data, in: International Conference on Computational Intelligence in Music,
     Sound, Art and Design, Springer, Berlin/Heidelberg, Germany, 2018, pp. 95–111. doi:10.1007/
     978-3-319-77583-8\_7.
[28] M. Koutsomichalis, B. Gambäck, Algorithmic audio mashups and synthetic soundscapes em-
     ploying evolvable media repositories, in: 6th International Workshop on Musical Metacreation
     (Salamanca, Spain), 2018, pp. 3318–3325.
[29] I. Sutskever, J. Martens, G. E. Hinton, Generating text with recurrent neural networks, in:
     Proceedings of the 28th International Conference on Machine Learning (ICML-11) (Bellevue,
     WA), 2011, pp. 1017–1024.
[30] M. Koutsomichalis, Rough-hewn hertzian multimedia instruments, in: International Conference
     on New Interfaces for Musical Expression (Birmingham, U.K.), 2020-to appear.
[31] P. K. Nayar, Posthumanism, John Wiley & Sons, Hoboken, NJ, 2018.
[32] S. Blackmore, The meme machine, Oxford, Oxford, U.K., 2000.
[33] T. Ingold, Anthropology and/as Education, Routledge, London, U.K., 2017.
[34] S. Ge, A. Dill, E. Kang, C.-L. Li, L. Zhang, M. Zaheer, B. Poczos, Developing creative AI to
     generate sculptural objects, in: Proccedings of the 25th International Symposium on Electronic
     Art (Gwangju, Korea), 2019, pp. 225–232.