=Paper=
{{Paper
|id=Vol-2827/CAC-Paper_1
|storemode=property
|title=Towards Co-Build: An Architecture Machine for Co-Creative Form-Making
|pdfUrl=https://ceur-ws.org/Vol-2827/CAC-Paper_1.pdf
|volume=Vol-2827
|authors=Manoj Deshpande,Eric Sauda,Mary Lou Maher
}}
==Towards Co-Build: An Architecture Machine for Co-Creative Form-Making==
Manoj Deshpande (a, b), Eric Sauda (a) and Mary Lou Maher (b)
(a) School of Architecture, University of North Carolina at Charlotte, Charlotte, NC 28262 USA
(b) Department of Software and Information Systems, University of North Carolina at Charlotte, Charlotte, NC 28262 USA
Emails: manojdeshpande@gatech.edu (M. Deshpande); ericsauda@uncc.edu (E. Sauda); m.maher@uncc.edu (M. L. Maher)
Websites: https://www.manoj-deshpande.com/ (M. Deshpande); https://coaa.uncc.edu/people/eric-sauda (E. Sauda); http://maryloumaher.net/ (M. L. Maher)
Joint Proceedings of the ICCC 2020 Workshops (ICCC-WS 2020), September 7-11 2020, Coimbra (PT) / Online
Abstract
Based on Negroponte's idea of man-machine symbiosis, this paper proposes Co-Build, a real-time
web-based collaborative 3D modeling platform with a co-creative agent (machine). The study aims
to extract different circumstances under which the co-creative agent’s contribution appears to make
sense. The study identifies varied aspects of human-human collaboration applicable to human-machine
collaboration. These research objectives are set in order to understand the “symbiosis”. The behavior of
the machine in Co-Build is based on the enactive model of co-creativity. For the purpose of this study,
machine intelligence is emulated using the Wizard of Oz technique, and the machine's action is restricted to
mimicking. To simplify the architectural design process, the study focuses on additive massing models in
a concept design game’s theoretical context, namely, the silent game. Two variations of the silent game,
namely, switch silent game and simultaneous silent game, are proposed to test two kinds of interactions
between the collaborators: turn-taking and simultaneous interactions. This paper reports the results
of an online user study with 20 participants. The user study involves participants playing both
variations of the silent game, first with a human and then with the Wizard of Oz 'machine'. Retrospective video
walk-throughs and post-task interviews are the methods used to collect data for evaluation.
Keywords
co-creative system, participatory sense making, enactive model of creativity, concept design games,
architecture machine
1. Introduction
Nicholas Negroponte is one of the early pioneers of introducing computation into architectural
design. In his 1969 article “Towards a Humanism Through Machines”, he introduced the term
“Architecture Machine”, which referred to turning the design process into a dialogue that would
alter the man-machine dynamic. Negroponte envisioned architecture machines to be symbiotic.
He defined the symbiotic relationship between man and architecture machine as “the intimate
association of two dissimilar species (man and machine), two dissimilar processes (design and
computation) and two intelligent systems (the architect and the architecture machine)” [1]. By
attributing intelligence to the architecture machine, Negroponte envisioned the relationship
between the architect and the architecture machine not as a master (smarter) and a slave
(dumber), but as a partnership of two associates, each having the potential for self-improvement.
However, even though designers today can easily create and modify a CAD model, the CAD
software primarily functions as an input device. Furthermore, while the current prototyping
and fabrication machines have led to a wealth of techniques to create physical artifacts from
virtual objects, they primarily function as output devices [2]. As a result, machines are detached
from the conception of design and have not achieved Negroponte’s man-machine symbiosis. To
address this, we propose Co-Build, an architecture machine that is a partner to the designer.
The notion of machines/computers as intelligent, creative partners has been studied in the
emerging field of computational co-creativity. Accordingly, computational co-creativity is
defined as computers and humans collaborating to build a shared creative artifact [3]. For
the definition to be applied to this research, terms like “collaborate”, “shared”, “creative” and
“artifact” need further contextual clarification. For this research, we utilize the enactive model
of creativity [4] to emulate creativity in Co-Build: the architecture machine. Within the theory
of enaction, we utilize the conceptual framework of participatory sense-making to understand
collaboration. This research follows the design and evaluation frameworks and methodologies
employed in the co-creative application - the ‘Drawing Apprentice’ [3].
The research goals for this project are:
• RG1: To understand different conditions under which contributions from the machine
appear to make sense.
• RG2: To understand which interaction method (simultaneous/turn-taking) promotes a
good co-creative experience.
• RG3: To identify aspects of human-human collaboration that can be applied to the
human-machine collaboration.
Based on these goals, the research questions are:
• RQ1: To what degree was participatory sense-making present during the collaboration?
• RQ2: What metrics and features did users employ to determine whether contributions
from the machine ‘made sense’?
• RQ3: Is the machine considered as a collaborator or a tool?
2. Related Work
In this section, we describe the architectural design context in which Co-Build is relevant
and applicable. We describe how the context relates to the key terms in the definition of
co-creativity and describe the creativity model adopted by Co-Build. We relate this project to
existing co-creative systems.
2.1. Architectural Massing Models
In the architectural discourse, physical/virtual models are exploratory design tools that allow
architects to create abstract spatial concepts. For students and practitioners, testing digital
findings with 3D prototypes can help assess if a complex solution is offering “spatial, aesthetic
and programmatic” solutions to a project. Therefore, each physical and digital phase of the
project can inform each other subsequently and iteratively [5].
Hence, one of the early stages in an architecture design process is making many iterations
of “massing models”. Massing in architecture refers to the basic three-dimensional shape
of the composition of the building. These models are quick first attempts to design how an
architectural intervention looks. They are used to study how the mass reacts to the site and context
around it. Alternatively, they are also used as an abstract architectural form-making exercise [6].
Since massing models are simple three-dimensional compositions, we can broadly divide the
models into two categories: subtractive models and additive models. Subtractive models are
stereotomic - they are carved out of a solid block. Additive models are aggregative - small pieces
or blocks are attached to form the massing. We use additive massing models for this research.
2.2. Architectural Design Games
Even at an early stage of design, the complexity and open-ended nature of the massing model
is a challenge to understand and replicate via an intelligent agent as a part of the architecture
machine. To reduce the complexity of the task, we use the concept of design games. As
Negroponte suggests, by utilizing games, a machine’s adroitness in design could evolve from
local strategies that would self-improve by the machine testing for local successes and failures [7].
Design games are about staging participation. There is rarely any competition over who wins
the game [8]. These games can be utilized to study design actions in a tractable environment
that gives rise to design situations resembling those in real life. In games, as in real life, players’
moves are limited by the existing rules, conventions, and principles [9].
Habraken and Gross developed nine concept design games as a tool for research in design
theory [9]. They suggest games provide an environment for a group of players, acting with
individual goals and a shared program, to make and transform complex configurations, free
of functional requirements [9]. Concept design games represent theories about (aspects of)
designing. By playing them, the theories are tested and most likely modified as a result. As
indicated by the name, each design game is based on a design concept. The concept that we are
interested in exploring in this research is design interaction.
Habraken and Gross proposed two games (out of nine) about design interaction: the Reference
Game and the Silent Game. The Reference Game has a “Talker” who instructs the “Doer” as
to what to do. The Talker may not move any pieces, and the Doer may not speak, message,
draw, or sketch, but only move pieces. The Talker gives messages to the Doer, who interprets
them in a configuration on the board. The Silent Game, in contrast, forbids any form of verbal
communication. The players are not allowed to talk. The first player lays out a pattern (made
out of predefined game pieces). The second player interprets the patterns and adds another pattern
to the board’s configuration for the first player to follow. An elaborate configuration emerges
on the board, representing a combination of patterns created by both the players. Players do not
explain, nor are any agreements formulated. They collaborate only through the configuration,
which is the only medium available for communication.
The silent game has two roles: pattern-maker and pattern-follower. For this research, one
player plays exclusively as a pattern-maker and another player as pattern-follower. Although
the Silent Game and the Reference Game represent very different modes of interaction, both
show the importance of shared mental models in designing. Together they illustrate the extent
to which interaction among designers is rooted in the convention of seeing rules and goals in
the deployment of pieces (in patterns) and in the convention of describing such deployments.
For this study, we will be employing two modified silent games: simultaneous silent game and
switch silent game.
In the study reported in this paper, the human collaborator will play as the pattern-maker.
The machine (the co-creative agent) will play as the pattern-follower. The game will be played
on a real-time collaborative 3D modeling web application: Co-Build. In the simultaneous silent
game, after the pattern maker’s first move, both the pattern-maker and the pattern-follower
will simultaneously add blocks to the 3D model. Hence, sharing/interaction, in this case, is
concurrent. In the switch silent game, the pattern-maker and pattern-follower will take turns
and add blocks to the 3D model one after the other. Hence sharing/interaction, in this case, is
turn-taking.
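To make the difference between the two variants concrete, the hypothetical sketch below (in JavaScript, matching the web platform described later; it is an illustration, not part of the games' original specification or Co-Build's code) validates moves under each rule: the pattern-maker always opens, the switch game then enforces strict alternation, and the simultaneous game accepts any move.

```javascript
// Hypothetical sketch of the two silent-game variants' move rules:
// "switch" enforces turn-taking after the pattern-maker's opening move;
// "simultaneous" allows concurrent placement once the game has started.
function createSilentGame(mode) {
  let lastPlayer = null; // role of the player who placed the previous block
  let moveCount = 0;

  function canPlace(player) {
    if (moveCount === 0) return player === 'pattern-maker'; // maker always opens
    if (mode === 'switch') return player !== lastPlayer;    // alternate turns
    return true;                                            // simultaneous: anyone
  }

  function place(player) {
    if (!canPlace(player)) return false;
    lastPlayer = player;
    moveCount += 1;
    return true; // the voxel itself would be added to the shared model here
  }

  return { canPlace, place };
}

// Usage: in the switch game the follower cannot move twice in a row.
const game = createSilentGame('switch');
console.log(game.place('pattern-maker'));    // true
console.log(game.place('pattern-follower')); // true
console.log(game.place('pattern-follower')); // false (must wait for the maker)
```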
To summarize, linking the context back to the definition of co-creativity: the artifact under
consideration is an additive 3D model resembling a simplified abstract massing model. The
collaboration is happening through a design game (silent game) on a web-based platform; the
type of collaboration for the machine is mimicking the human. There are two types of sharing
or interaction that are explored, namely simultaneous and turn-taking.
2.3. Enactive Model of Creativity
Creativity, according to Webster’s dictionary, is the ability to make new things or think of new
ideas. Through the years, human creativity has been studied through diverse perspectives such
as philosophical, neuroscientific, and psychological. In computational creativity literature, one
of the dominant views on creativity is creativity as search [10]. This approach assumes a
potential solution space, defining creativity as searching a state space to find a solution for any
given design problem. Although this approach is useful when applied to design optimization
problems, the formal notation is less useful at the conceptual stage of design as there are no
fixed parameters and there is no single solution.
A prominent way of design thinking in architecture is “thinking by doing”. It applies to a
wide range of activities like sketching, model making, engaging with materials, and so on. The
enactive model of creativity operationalizes the “thinking by doing” method of cognition. The
enaction theory describes creativity as a continual process. Here, intelligent agents adaptively
and experimentally interact with their environment through a continuous perception-action
feedback loop to produce structured and meaningful interactions in an emergent process of
sense-making (or participatory sense-making when multiple agents are collaborating). The
emergent sense-making process that results in creativity is fundamentally based on continuous
real-time interaction between an agent and its environment [3].
2.4. Co-Creative Systems
In this section, we have selected projects that include both software and fabrication architecture
machines. In the HCI literature, there are various frameworks for the classification of co-creative
systems, both from a human perspective and a computational agent perspective. For this survey
of related projects, we utilize the classification based on creative ideation described by Maher
[11]. Accordingly, computers can assume three roles, i.e., support, enhance, and generate.
Humans/designers, on the other hand, have two roles: to model and to generate. Most of the
fabrication co-creative projects described here do not include an intelligent agent. However, we
have still categorized them using this framework based on the seemingly intelligent mechanism
they showcase.
In the first category (support), the computer or machine is used just as a tool, and humans are
the sole creator or creative thinkers. Projects like ‘Interactive Fabrication’ [12] and ‘Interactive
Construction’ [13] fall under this category. Here, the human-machine collaboration for the
fabrication process is made easier with intuitive and embodied interaction with the fabrication
machine (3D printing or laser cutting). These projects demonstrated how personalized artifacts
could be created without losing the designers’ intention. However, the fabrication machines
follow the instructions and have no creative control or feedback to the designer. As a result,
these kinds of collaborative fabrication machines function as output devices and support the
designers. Other examples of projects in this category would be ‘Protopiper’ [14] and ‘D-Coil’
[15]. These projects allow users to extrude materials from a hand-held portable device to allow
for real-time 3D sketching on-the-go, sometimes to scale.
In the second category (enhance), the machine, with the help of a simple algorithm or AI,
acts as a creator. There have been various projects to show that computers with the help of
AI can produce novel outputs that can be considered creative [16]. Projects like ‘Being the
Machine’ [17] and ‘Crowdsourced Fabrication’ [18], explore these kinds of collaboration. In
‘Crowdsourced Fabrication,’ users receive instructions on their smartwatch. They follow the
instructions given by the machine to construct a pavilion module by module. Here, humans
have no input or control over the fabrication process. Whereas in ‘Being the Machine,’ users
receive step-by-step G-code instructions from a machine. They are free to deviate and use their
creative input while using a natural material to fabricate an object. In this project, the human
acts as a mechanically controlled tool, trading precision and control to realize surprising and
unexpected forms of the artifact. In comparison to the first category, these projects foster more
collaboration between humans and machines. Furthermore, the role of the human oscillates
between modeling and generating, and the machine has some creative input.
Co-creative fabrication projects that fall into the third category (generate) are projects similar
to 'FreeD' [19] and 'DeepWear' [20]. In 'FreeD,' the authors develop a milling tool that guides
the user (in this case, an artisan) to create 3D models by milling. As users are free to do as
they please and the computer program adapts and sometimes redirects, the project is emergent.
Similarly, in ‘DeepWear,’ designers and AI co-create new clothing by analyzing fashion trends
and productions of a single fashion brand. In the project ‘Negotiating the Creative Space in
Human-Robot Collaborative Design' [21], the authors collaborate with a robotic arm. The project
uses a constrained tangible user interface that both the robot and humans can manipulate to
create interesting spatial arrangements. In this project, humans have to negotiate both the
physical and creative space with a robot. Another project in the third category is 'TrussFab' [22].
Here the system allows designers to fabricate large scale structures that are sturdy enough to
bear human weight using plastic bottles and connectors as building modules. Here the computer
and designer co-create the artifact in the digital space. However, the assembly/construction is
carried out only by humans.
Figure 1: Co-Build System Design
Similar to the co-creative 3D projects, HCI research has explored co-creative drawing and
painting. Examples of such projects are 'Duet Draw' [23], 'Drawing Apprentice' [3] and 'Creative
Sketching Partner’ [24]. In these projects, researchers have explored and evaluated how AI-
based systems can collaborate with humans in sketching. In projects like these, typically the user
first draws a line or a curve on the screen. Then, based on the AI's interpretation of the sketch,
the computer extends or enhances the drawing. Similarly, in ‘Computing with Watercolor
Shapes’ [25], a custom drawing/painting apparatus is developed in which the computer acts as
a generative painting system and the designer traces and co-creates along with the computer.
In these projects, the roles of both the human and the computer are to generate, so they fall into the third
category.
3. System Design
In this section, we describe the technical components of the Co-Build system and explain the
user interaction with the system.
3.1. System Architecture Design
Co-Build is a web-based application: a co-creative software system that lets people collaborate
in real-time to build 3D models. The application uses Three.js
(a JavaScript library and API) to create and display interactive 3D computer graphics in the
web browser. Three.js uses WebGL to draw and render 3D objects. The application consists
of two parts: a Node.js web server and a Three.js web client. The system design is shown in
Fig 1. The web server and web client communicate via WebSocket protocol. Co-Build utilizes
and builds on top of the Three.js voxel painter example [26] and Lucas Majerowicz’s code on
building real-time applications [27].
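As a rough illustration of what this combination looks like in practice, the sketch below (an assumption modeled on the Three.js voxel painter example, not Co-Build's actual source) raycasts a mouse click, snaps the hit point to the voxel grid, adds a cube to the scene, and announces the move to the server over a WebSocket. The server URL and message shape are placeholders, and renderer setup is omitted.

```javascript
// Illustrative client-side sketch following the Three.js voxel-painter pattern.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 1, 10000);
camera.position.set(500, 800, 1300);
camera.lookAt(0, 0, 0);

// Invisible ground plane so the first click has something to hit.
const plane = new THREE.Mesh(
  new THREE.PlaneGeometry(1000, 1000),
  new THREE.MeshBasicMaterial({ visible: false })
);
plane.rotateX(-Math.PI / 2);
scene.add(plane);

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();
const cubeGeo = new THREE.BoxGeometry(50, 50, 50);
const cubeMat = new THREE.MeshLambertMaterial({ color: 0xfeb74c });

const socket = new WebSocket('wss://example-co-build-server'); // placeholder URL

function onClick(event) {
  pointer.set((event.clientX / innerWidth) * 2 - 1, -(event.clientY / innerHeight) * 2 + 1);
  raycaster.setFromCamera(pointer, camera);
  const hit = raycaster.intersectObjects(scene.children, false)[0];
  if (!hit) return;

  // Place the new voxel on the clicked face and snap it to the 50-unit grid.
  const voxel = new THREE.Mesh(cubeGeo, cubeMat);
  voxel.position.copy(hit.point).add(hit.face.normal);
  voxel.position.divideScalar(50).floor().multiplyScalar(50).addScalar(25);
  scene.add(voxel);

  // Tell the server so it can broadcast the move to the other collaborator(s).
  socket.send(JSON.stringify({ type: 'addVoxel', position: voxel.position.toArray() }));
}

window.addEventListener('pointerdown', onClick);
```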
The code for the web client is structured using the MVC (model view control) pattern. The
application logic is built into the web client. Hence, the computation and interaction happen
on the front end. Since the code is structured using MVC on the frontend, instead of linking
multiple JavaScript files in the HTML, the application uses the module bundler webpack. The
bundler internally builds a dependency graph that maps every module required by the project
and generates one compiled bundled JavaScript file.
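A minimal webpack configuration of the kind described here might look as follows (file names and paths are illustrative assumptions, not Co-Build's actual project layout): starting from one entry module, webpack walks the import graph and emits a single bundled file that the HTML links to.

```javascript
// webpack.config.js -- minimal illustrative configuration (placeholder paths).
const path = require('path');

module.exports = {
  entry: './src/index.js',            // root module that imports view, controller, etc.
  output: {
    filename: 'bundle.js',            // the one compiled file linked from the HTML
    path: path.resolve(__dirname, 'dist'),
  },
  mode: 'development',
};
```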
The frontend has two classes: voxel and the voxel grid. The voxel class has details relating
to a single voxel like its dimensions, color, and id. The voxel grid has information regarding a
collection of voxels along with the grid dimensions. The view component is responsible for
setting up the Three.js scene, user interface, and sending user actions/requests to the controller.
The controller is responsible for performing the user’s action on the voxels and the voxel grid
and sending the result to the view component. The WebSocket connection is handled by a separate
JavaScript file called the Remote Client. This file is responsible for maintaining the WebSocket
connection and sending and receiving messages to and from other clients through the web
server.
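A minimal sketch of the two model classes might look as follows (field and method names are our assumptions for illustration, not Co-Build's actual code):

```javascript
// Illustrative sketch of the voxel and voxel-grid model classes described above.
class Voxel {
  constructor(id, position, color, size = 50) {
    this.id = id;             // unique identifier so remote clients can reference it
    this.position = position; // { x, y, z } grid-snapped position
    this.color = color;
    this.size = size;         // edge length of the cube
  }
}

class VoxelGrid {
  constructor(width, depth, height) {
    this.dimensions = { width, depth, height };
    this.voxels = new Map(); // id -> Voxel, in insertion (move) order
  }

  add(voxel) { this.voxels.set(voxel.id, voxel); }
  remove(id) { this.voxels.delete(id); }
  // The last N moves, which the agent's local/regional perceptual logic could use.
  lastMoves(n) { return Array.from(this.voxels.values()).slice(-n); }
}

module.exports = { Voxel, VoxelGrid };
```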
The primary responsibility of the web server is to receive messages from each web client
and broadcast them to the other web clients. The Node.js server uses the ws library for creating a
WebSocket connection. When a new client joins, the server will send a list of all the previously
executed commands and ensure that the new client is in sync with all the other clients. The
entire web application is hosted on Heroku and can be accessed through the following URL:
http://Co-Build.herokuapp.com/
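The broadcast-and-replay behavior described above could be sketched with the ws library roughly as follows (a hedged sketch; the port handling and message format are assumptions): every incoming command is stored, relayed to the other connected clients, and replayed to newly joining clients so they start in sync.

```javascript
// server.js -- hypothetical sketch of the relay server described above.
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: process.env.PORT || 3000 });
const commandHistory = []; // every command executed so far, in order

wss.on('connection', (ws) => {
  // Bring the new client up to date with everything that has happened so far.
  commandHistory.forEach((cmd) => ws.send(cmd));

  ws.on('message', (message) => {
    const msg = message.toString();
    commandHistory.push(msg);
    // Broadcast the command to every other connected client.
    wss.clients.forEach((client) => {
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(msg);
      }
    });
  });
});
```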
3.2. User Interaction
Since the platform utilizes WebSocket, it can support as many collaborators as needed. How-
ever, for this study, the collaboration will always take place between two collaborators. The
application enables the user to add a voxel by clicking on an empty place on the grid or another
voxel’s face. The user can remove a voxel by holding the shift key and clicking on the voxel.
The user can remove voxels added by them or by another collaborator. Additionally, the user
can rotate the 3D scene as in any CAD software by right-clicking the mouse and moving in the
rotation direction. The user can also zoom in and out of the scene using the scroll wheel on the
mouse. Furthermore, the user can set the perceptual logic for the intelligent agent (architecture
machine). The perceptual logic dictates what the machine considers for producing its outcome.
For local logic, the machine considers the last two moves made by the human. In
regional logic, the machine considers the last ten moves made by the human and divides the
composition into regions. In global logic, the machine looks at the entire composition.
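As an illustration of how these three settings differ, the hypothetical sketch below selects which voxels the agent considers before producing its contribution. The function names and the region-grouping heuristic are our assumptions for illustration; in the study itself this behavior was enacted by the Wizard of Oz, not by code.

```javascript
// Hypothetical sketch of the three perceptual-logic settings described above.
function perceive(logic, humanMoves, allVoxels) {
  switch (logic) {
    case 'local':
      // Only the last two moves made by the human.
      return humanMoves.slice(-2);
    case 'regional': {
      // The last ten moves, grouped into spatial regions of the composition.
      const recent = humanMoves.slice(-10);
      return groupIntoRegions(recent);
    }
    case 'global':
    default:
      // The entire composition, regardless of who placed each voxel.
      return allVoxels;
  }
}

// Naive illustrative region grouping: bucket voxels into coarse grid cells.
function groupIntoRegions(voxels, cellSize = 200) {
  const regions = new Map();
  for (const v of voxels) {
    const key = `${Math.floor(v.position.x / cellSize)},${Math.floor(v.position.z / cellSize)}`;
    if (!regions.has(key)) regions.set(key, []);
    regions.get(key).push(v);
  }
  return Array.from(regions.values());
}
```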
4. Evaluation
In this section, we describe the framework we employ for evaluation and analysis. We briefly
describe the study design and the collected data. We then present the results of the analysis of
the data.
Evaluating co-creative systems is still an open research question. There is no standard metric
that can be used across specific systems. However, a critical component of co-creative systems
is the interaction between the machine and the human. While there are different frameworks on
evaluating a co-creative system, we utilize the framework proposed by Karimi et al. [28]. Table
1 shows the application of the framework for this research. Evaluation in many co-creative
systems is about the creativity of the collaborative agent. However, for this study, we do not
measure creativity. Instead, according to the research goals, human-human and human-machine
interaction are evaluated with simultaneous and turn-taking collaboration. The evaluation in
both conditions is summative. Borrowing from the evaluation methods used for the ‘Drawing
Apprentice,’ retrospective video walk-throughs and post-task structured interviews are used for
evaluating Co-Build.
Table 1
Evaluation Framework in relation to Co-Build
4.1. Study Design
The user study is designed to help understand the emergent participatory sense-making that
arises from human-human collaboration and to test if participatory sense-making arises in
human-machine collaboration.
The user study was conducted through a video call on Google Meet. The study consisted
of three tasks. The first task was to familiarize the participants with all the controls and
navigation of Co-Build. Following this, the participants were introduced to the silent game
rules. The second task was to play two variations of the silent game with a human collaborator.
Accordingly, the second task had two subtasks lasting for 3 minutes each with a break in
between. The first subtask was dedicated to the switch silent game. The second subtask was
dedicated to the simultaneous silent game. The third task was to play two variations of the
silent game with the machine (WoZ) collaborator. The participants were first introduced to
perceptual logic settings. Since this study utilizes the Wizard of Oz technique, the wizard copied
the last user move in the local logic. In regional logic, the wizard finished incomplete structures.
In global logic, the wizard mirrored a portion of the structure. Following this, the participants
played both variations of the game, lasting 3 minutes each, with a break in between.
During the second and third tasks, the screen was recorded.
After the design tasks, a retrospective video walkthrough was conducted. The participants
were asked to briefly explain their thought process during each collaboration while watching
the video of their interaction. Following this, the participants were interviewed. The post-task
structured interview had nine questions as shown in Table 2 designed to explore the research
goals, research questions, and evaluation metrics. On average, the user study lasted around 45 minutes.
Table 2
Interview Questions
4.2. Data Collected
The user study was conducted with 20 participants (8 female and 12 male) with an average
age of 25. Out of 20, 15 participants had a background in architecture and design, and 5 had a
non-design background. The participants were recruited through email after they had read and
agreed to the consent form.
The data generated from the study includes screen recordings of the design tasks, the audio and
transcribed data from the retrospective video walk-through (protocol data), and the transcribed
data of the post-task interview. A sample screenshot during both the collaboration conditions
is shown in Fig 2.
A simple comparative analysis was conducted on the interview data. To do this, all the
transcribed interview data was simplified and compiled in a table to make it easier to quantify
and compare. Inductive thematic analysis was conducted on protocol data for both Human-
Human and Human-Machine(WoZ) collaboration. Based on this analysis, three common
themes were identified for both collaboration conditions: participatory sense-making in the
collaboration, interaction dynamics, and emergent form-making.
Figure 2: Screenshot of the outcome of collaboration with a human (left) and the machine (right).
4.3. Analysis of Post-Task Interview Data
Seventeen participants reported that the collaboration was beneficial to their design process.
During the machine’s (WoZ Human) primary mimicking behavior, 17 participants reported
that the machine’s contribution made sense. Thirteen participants preferred simultaneous
interactions with the machine as they enjoyed it more or liked the machine’s real-time response.
Four reported they preferred turn-taking interactions because they had more control over
the machine, or they could closely monitor and analyze the machine's contribution. Eleven
participants reported they preferred global perceptual logic. Six of them expressed that they
designed by looking at the big picture, and the machine was doing the same in the global setting.
Also, 7 of the participants thought that since the machine took into account all the voxels, it had
more data to learn from. Five participants preferred regional perceptual logic because they
felt the machine was completing their structure following their design logic. Six participants
preferred local perceptual logic. They expressed that the machine was paying close attention to
them by following and mimicking what they were doing precisely. Two participants reported
that local logic could be used to automate monotonous and repetitive tasks. Five participants
reported that the machine was a tool because it was mimicking. Five participants reported that
the machine was more than a tool but less than a partner because it followed their design logic
and completed their structure. Nine participants reported that it was a partner because it either
gave them new ideas or they could not fully control the machine. One participant thought the
machine was like an opponent because it competed with them to place the blocks.
4.4. Analysis of Protocol Data for Human-Human Collaboration
While 18 participants reported that the collaboration was beneficial, nine preferred to collaborate
with the human. The dominant reasons for this were the human partner's diverse thinking,
inventiveness, similar spatial understanding, and trust.
Participatory Sense Making in the Collaboration- during the retrospective video walk-through,
19 participants expressed they got a new idea from the collaboration. For example, P14 expressed
that the final form resulted from two people working together and had no prior design that
they were trying to achieve. P14 stated this during the switch collaboration -
“Initially, I was just exploring the platform’s possibilities, I had no design in mind.
However, through collaboration, I started forming new ideas and adding blocks
in places that seemed interesting. I think overall, it was good exercise. It was
interesting to see how two minds worked with different perspectives to develop a
form together.”
It is interesting to note that even though P14 agreed that collaboration with the human was
interesting and gave them new ideas, P14 preferred to collaborate with the machine. The above case
showcased participatory sense-making when they had no design in mind. P3, on the other hand,
had a design in mind, but during the collaboration, the idea changed drastically to something
else. P3 stated this for the switch collaboration-
“Initially, I was trying to make some alphabets, then I changed to make them into a
3D shape. Based on the collaborator’s move, I changed my mind. Then I started
making two buildings beside each other and connected them.”
Interaction Dynamics- typically, during human collaboration, the participants either built
their design and joined it to the collaborator’s design or started making their design in response
to the collaborator’s design. Participants mostly preferred turn-taking/switch interaction with
the human collaborator. P16 sheds light on this as follows-
“I started with shape without thinking I just put things. After a couple of moves by
the collaborator, I started seeing the shape semantically and started to interpret
shapes, so from the plan view, it looked like a human belly, and hence I started
adding legs. And then the collaborator continued adding blocks to it. It was an
interesting process. The switching allowed me to interpret occasionally and change
what I want to do based on the actions.”
Further, P16 described simultaneous interaction as “hectic” and said they “had no time and just placed
blocks because many things were happening together.”
Emergent form making- this was a dominant feature during human collaboration. All partici-
pants expressed that the final form was not what they had initially thought of, or that they based
their decisions on the collaborator's moves. The emergent form-making is demonstrated the
best in P2’s comments-
“In this, I started by thinking of building vertical structures, but then I switched to
making arches. At the same time, my collaborator added boxes that looked like
supports to the arch, so even I added supports based on the collaborator’s move.
Later based on all the voxels on the screen, I thought it looked like a pyramid and
started building a flat pyramid in the vertical plane.”
4.5. Analysis of Protocol Data for Human-Machine Collaboration
Eleven participants preferred to collaborate with the machine. The dominant reason for this
was the control over the machine and, hence, over the 3D form. Other reasons were the mimicking
action of the machine, similarity of the output, and design alignment.
Participatory Sense Making in the Collaboration- during the retrospective walk-through, only 3
participants expressed that they got a new idea or a new design direction from the collaboration.
P2 expressed this during simultaneous interaction-
“I began with the local logic setting. The agent was just extending my moves
concurrently. So, I changed the logic to global. Though it was making symmetric
moves, it kind of surprised me when it started building something resembling
archways or Roman aqueducts. Then I continued with adding more to the aqueduct.”
All the participants were keen on understanding how the machine worked. As a result, the participants'
mindset changed from collaborating to “let us see how the machine reacts to this move.”
Interaction Dynamics- the participants went into a testing mode during the collaboration
with the machine. For example, P13 deliberately added random blocks and wanted to see if the
machine detected any pattern that P13 was using subconsciously-
“So, there is one thought process behind this, that is randomness, I was trying to rid
myself of using any logic. I wanted to see if the machine shows me the logic that I
was using when I was thinking that I was not using any logic. At the same time,
I was switching between different logic modes of the machine. I think it did an
excellent job. It seems like I had some subconscious logic while placing the blocks.
This was especially evident in the global setting when the machine used the entire
grid and kept bringing back my earlier chain of thought.”
Another dominant thought was the control and authority over the machine, as stated by P12 in
the simultaneous case:
“I had figured out how the machine was working, so switched between the logic,
like, when I wanted the machine to follow me, I selected the local. And when I
wanted a global perspective, I selected regional or global. As opposed to other
cases in this one, I tried to focus on one structure instead of spreading it out. The
machine behavior was predictable.”
Two participants expressed their frustration in the local logic mode as the machine was placing
blocks where they wanted to place them. Furthermore, one participant regarded the machine as a
puppy following them around in the local setting.
Emergent form making- even though the machine was mimicking, the emergent form-making
was a dominant feature during the simultaneous interaction. The collaboration with the machine
produced controlled but emergent and complex forms. This is highlighted in P8’s comments
regarding the final form in the simultaneous interaction-
“This was the most interesting of all because, after the initial switch in the logic
from local to regional to global, the scheme became quite cohesive.”
4.6. Evaluation Metrics
Eighteen participants reported that the collaboration was beneficial for their design process. This
was reported equally for both the human and the machine collaboration. Comparatively, engagement
was higher with a human collaborator. With the machine collaborator, engagement was higher in
the simultaneous interaction.
Ownership varied a lot between human collaboration and machine collaboration. All participants
attributed the outcome of the human collaboration to the work of two minds, whereas nineteen
participants claimed sole ownership of the form in the case of machine collaboration.
It was also interesting to note how the degree of ownership changed with different perceptual logic
settings: it was highest in local, followed by regional, and lowest in global.
5. Conclusion
This paper reports on a co-creative system that explores the enactive model of creativity in an
architecture machine that performs variations on the silent game. In this section, we describe
the conclusions we drew from the data analysis concerning the research goals and research
questions; observations from the overall development and study; and prospective avenues of
exploration and project development.
Participatory sense-making was higher in human-human collaboration as compared to human-
machine (WoZ) collaboration. Engagement was also much higher in human-human collaboration.
The participants reported fewer emergent form-making experiences with machine collaboration.
Participants repeatedly expressed that the machine should not just mimic the users but also
generate new ideas. The participants proposed a variety of methods. A few of the prominent ones
were: randomized voxel placement, mimicking with random mutation, machine initiating the
design, and working towards a common design goal. Participants noted that participatory sense-
making was higher in simultaneous interaction. Within simultaneous interaction, participatory
sense-making occurred more in regional and global perceptual logic.
Participants used pattern mimicking, pattern continuation, and similar pattern creation as
the metrics to decide if the contribution made sense. Also, symmetry, continuity, repetition, and
proximity were the dominant visual design principles employed by the participants to make
sense of the machine contribution. It is interesting to note the correlation between ownership
and the sense-making of the machine contribution. Higher ownership was more likely when
the participant declared the machine contribution to be sensible.
Only 5 participants regarded the machine as a tool: the rest claimed the machine was either
more than a tool or claimed that they saw the machine as a partner. The primary reason for
this was that the participants could not fully control the machine’s output. Furthermore, the
machine was also continuing and completing their designs.
The following three dominant design recommendations can be made from the user study and
participant observations. First, the machine can have a “personality” and “design belief system”
of its own. For example, in the human-human collaboration case, spatial understanding and
the collaborators’ personality played a big part in the collaboration. Second, the machine can be
provocative; that is, it should not always follow the human and can make changes to the structures
built by the human. Many participants cited the human collaborator's provocative nature as one reason
for getting new ideas and increased participatory sense-making. Third, as suggested by all the
participants, the machine should not just mimic but also generate and work towards a separate idea.
5.1. Discussion
It was interesting to note that the bar for an entity to be considered a design partner is very
low. As previously stated, the majority of the participants considered Co-Build in its current
state as a partner. Currently, the only difference between Co-Build and any CAD software is
the way the command is given. In CAD software, the user explicitly gives specific commands
to the machine, like extrude top surface of the box by a unit length. Whereas in Co-Build, the
machine detects the user's moves and acts on them. This subtle change seems to be the main
reason why the participants considered Co-Build a partner.
The participatory sense-making from the machine's point of view was restricted to being purely
geometric. For example, during the WoZ Human-Machine collaboration, in local logic, the
machine would replicate the user’s moves by adding voxels in the direction and location where
the participant last added a voxel. No syntactic or semantic analysis was carried out by the
machine. It would have been easy to simulate that the machine understood the semantic
meaning using the Wizard of Oz technique. However, the behavior was deliberately kept simple to keep it attainable.
Also, during this study, it was found that there are no data banks that can be used to train the
machine on building 3D objects collaboratively.
Using the enactive model of creativity not only facilitates emulating a designer's way of
thinking but also provides a method for data labeling that can be used for machine learning.
When the participant changes the perceptual logic settings between local, regional, and global,
the participant is also informing or labeling their moves. When sufficient data is available, this
can be utilized by the machine to switch between logic modes automatically.
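A minimal sketch of such labeling, under the assumption that the logic setting in effect at the time of a move acts as its label (all names are illustrative, not Co-Build's actual code):

```javascript
// Illustrative sketch: record each human move together with the perceptual-logic
// setting the participant had selected, producing a small labeled dataset that
// could later be used to learn when to switch between logic modes automatically.
const labeledMoves = [];
let currentLogic = 'local';

function onLogicChanged(newLogic) {
  currentLogic = newLogic; // 'local' | 'regional' | 'global'
}

function onHumanMove(voxel) {
  labeledMoves.push({
    position: voxel.position,
    timestamp: Date.now(),
    logic: currentLogic, // the participant's chosen setting acts as the label
  });
}
```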
5.2. Future Work
The current system allows interaction through the website. One possible direction of work is
to explore the same system with the integration of interactive and physical computing. This
integration would facilitate other modes of interaction with the system. Moreover, it would
significantly change the user experience with the system; for example, the mimicking action of
the machine may then seem very intelligent.
The second direction is investigating human-human-machine interaction, that is, two humans
collaborating with a mimicking machine. This would increase participatory sense-making and
provide the opportunity for the machine to choose which human to mimic.
For sense-making, it is currently not clear if the participants were evaluating the emergent
3D massing or evaluating the machine logic and the way the machine behaved. On analyzing
the transcripts, it cannot be said for sure what the participants were evaluating. This is an
exciting avenue for further exploration. Further, it will also be interesting to explore when the
machine contributions stop making sense in the same given setup.
Acknowledgments
We thank Dimitris Papanikolaou for his feedback and discussions during the project’s develop-
ment. We thank the anonymous reviewers for their constructive feedback.
References
[1] A. Menges, S. Ahlquist, Computational design thinking: computation design thinking, John Wiley & Sons, 2011.
[2] J. Kim, H. Takahashi, H. Miyashita, M. Annett, T. Yeh, Machines as Co-Designers: A Fiction on the Future of Human-Fabrication Machine Interaction, in: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA '17, ACM Press, Denver, Colorado, USA, 2017, pp. 790–805. URL: http://dl.acm.org/citation.cfm?doid=3027063.3052763. doi:10.1145/3027063.3052763.
[3] N. Davis, C.-P. Hsiao, K. Y. Singh, L. Li, S. Moningi, B. Magerko, Drawing Apprentice: An Enactive Co-Creative Agent for Artistic Collaboration, in: Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition - C&C '15, ACM Press, Glasgow, United Kingdom, 2015, pp. 185–186. URL: http://dl.acm.org/citation.cfm?doid=2757226.2764555. doi:10.1145/2757226.2764555.
[4] N. Davis, C.-P. Hsiao, K. Yashraj Singh, L. Li, B. Magerko, Empirically Studying Participatory Sense-Making in Abstract Drawing with a Co-Creative Cognitive Agent, in: Proceedings of the 21st International Conference on Intelligent User Interfaces - IUI '16, ACM Press, Sonoma, California, USA, 2016, pp. 196–207. URL: http://dl.acm.org/citation.cfm?doid=2856767.2856795. doi:10.1145/2856767.2856795.
[5] E. Gulay, A. Lucero, Integrated Workflows: Generating Feedback Between Digital and Physical Realms, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19, ACM, New York, NY, USA, 2019, pp. 60:1–60:15. URL: http://doi.acm.org/10.1145/3290605.3300290. doi:10.1145/3290605.3300290, event-place: Glasgow, Scotland, UK.
[6] F. Blanciak, Siteless: 1001 building forms, MIT Press, 2008.
[7] N. Negroponte, The architecture machine, Computer-Aided Design 7 (1975) 190–195. Publisher: Elsevier.
[8] K. Vaajakallio, T. Mattelmaki, Design games in codesign: as a tool, a mindset and a structure, CoDesign 10 (2014) 63–77. URL: http://www.tandfonline.com/doi/abs/10.1080/15710882.2014.881886. doi:10.1080/15710882.2014.881886.
[9] N. J. Habraken, M. D. Gross, Concept design games: a report submitted to the National Science Foundation Engineering Directorate, Design Methodology Program, Cambridge, Mass., 1987. URL: http://hdl.handle.net/2027/coo.31924056611514.
[10] G. A. Wiggins, Searching for computational creativity, New Generation Computing 24 (2006) 209–222. URL: http://link.springer.com/10.1007/BF03037332. doi:10.1007/BF03037332.
[11] M. L. Maher, Computational and collective creativity: Who's being creative?, in: ICCC, Citeseer, 2012, pp. 67–71.
[12] K. D. Willis, C. Xu, K.-J. Wu, G. Levin, M. D. Gross, Interactive fabrication: new interfaces for digital fabrication, in: Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction - TEI '11, ACM Press, Funchal, Portugal, 2011, p. 69. URL: http://portal.acm.org/citation.cfm?doid=1935701.1935716. doi:10.1145/1935701.1935716.
[13] S. Mueller, P. Lopes, P. Baudisch, Interactive construction: interactive fabrication of functional mechanical devices, in: Proceedings of the 25th annual ACM symposium on User interface software and technology - UIST '12, ACM Press, Cambridge, Massachusetts, USA, 2012, p. 599. URL: http://dl.acm.org/citation.cfm?doid=2380116.2380191. doi:10.1145/2380116.2380191.
[14] H. Agrawal, U. Umapathi, R. Kovacs, J. Frohnhofen, Protopiper: Physically Sketching Room-Sized Objects at Actual Scale, in: Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology - UIST '15, ACM Press, Daegu, Kyungpook, Republic of Korea, 2015, pp. 427–436. URL: http://dl.acm.org/citation.cfm?doid=2807442.2807505. doi:10.1145/2807442.2807505.
[15] H. Peng, A. Zoran, F. V. Guimbretière, D-Coil: A Hands-on Approach to Digital 3D Models Design, in: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, CHI '15, Association for Computing Machinery, Seoul, Republic of Korea, 2015, pp. 1807–1815. URL: https://doi.org/10.1145/2702123.2702381. doi:10.1145/2702123.2702381.
[16] A. Daniele, Y.-Z. Song, AI + Art = Human, in: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, ACM, New York, NY, USA, 2019, pp. 155–161. URL: http://doi.acm.org/10.1145/3306618.3314233. doi:10.1145/3306618.3314233, event-place: Honolulu, HI, USA.
[17] L. Devendorf, K. Ryokai, Being the Machine: Reconfiguring Agency and Control in Hybrid Fabrication, in: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI '15, ACM Press, Seoul, Republic of Korea, 2015, pp. 2477–2486. URL: http://dl.acm.org/citation.cfm?doid=2702123.2702547. doi:10.1145/2702123.2702547.
[18] B. Lafreniere, T. Grossman, F. Anderson, J. Matejka, Crowdsourced Fabrication, in: Proceedings of the 29th Annual Symposium on User Interface Software and Technology - UIST '16, ACM Press, Tokyo, Japan, 2016, pp. 15–28. URL: http://dl.acm.org/citation.cfm?doid=2984511.2984553. doi:10.1145/2984511.2984553.
[19] A. Zoran, R. Shilkrot, S. Nanyakkara, J. Paradiso, The Hybrid Artisans: A Case Study in Smart Tools, ACM Transactions on Computer-Human Interaction 21 (2014) 1–29. URL: http://dl.acm.org/citation.cfm?doid=2633906.2617570. doi:10.1145/2617570.
[20] N. Kato, H. Osone, D. Sato, N. Muramatsu, Y. Ochiai, DeepWear: a Case Study of Collaborative Design between Human and Artificial Intelligence, in: Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction - TEI '18, ACM Press, Stockholm, Sweden, 2018, pp. 529–536. URL: http://dl.acm.org/citation.cfm?doid=3173225.3173302. doi:10.1145/3173225.3173302.
[21] M. V. Law, J. Jeong, A. Kwatra, M. F. Jung, G. Hoffman, Negotiating the Creative Space in Human-Robot Collaborative Design, in: Proceedings of the 2019 on Designing Interactive Systems Conference - DIS '19, ACM Press, San Diego, CA, USA, 2019, pp. 645–657. URL: http://dl.acm.org/citation.cfm?doid=3322276.3322343. doi:10.1145/3322276.3322343.
[22] R. Kovacs, A. Seufert, L. Wall, H.-T. Chen, et al., TrussFab: Fabricating Sturdy Large-Scale Structures on Desktop 3D Printers, in: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17, Association for Computing Machinery, Denver, Colorado, USA, 2017, pp. 2606–2616. URL: https://doi.org/10.1145/3025453.3026016. doi:10.1145/3025453.3026016.
[23] C. Oh, J. Song, J. Choi, S. Kim, S. Lee, B. Suh, I Lead, You Help but Only with Enough Details: Understanding User Experience of Co-Creation with Artificial Intelligence, in: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18, ACM Press, Montreal QC, Canada, 2018, pp. 1–13. URL: http://dl.acm.org/citation.cfm?doid=3173574.3174223. doi:10.1145/3173574.3174223.
[24] P. Karimi, N. Davis, M. L. Maher, K. Grace, L. Lee, Relating Cognitive Models of Design Creativity to the Similarity of Sketches Generated by an AI Partner, in: Proceedings of the 2019 on Creativity and Cognition - C&C '19, ACM Press, San Diego, CA, USA, 2019, pp. 259–270. URL: http://dl.acm.org/citation.cfm?doid=3325480.3325488. doi:10.1145/3325480.3325488.
[25] O. Y. Gun, Computing with Watercolor Shapes, in: G. Cagdas, M. Ozkar, L. F. Gul, E. Gurer (Eds.), Computer-Aided Architectural Design. Future Trajectories, Communications in Computer and Information Science, Springer, Singapore, 2017, pp. 252–269. doi:10.1007/978-981-10-5197-5_14.
[26] R. Cabello, three.js examples, 2019. URL: https://threejs.org/examples/#webgl_interactive_voxelpainter.
[27] L. Majerowicz, lucasmajerowicz/threejs-real-time-example, 2020. URL: https://github.com/lucasmajerowicz/threejs-real-time-example, original-date: 2016-08-21T18:19:10Z.
[28] P. Karimi, K. Grace, M. L. Maher, N. Davis, Evaluating creativity in computational co-creative systems, arXiv preprint arXiv:1807.09886 (2018).