CEUR-WS Vol-2978, saerocon-paper8 (https://ceur-ws.org/Vol-2978/saerocon-paper8.pdf); dblp: https://dblp.org/rec/conf/ecsa/KoschkeS21
Modeling, Visualizing, and Checking Software
Architectures Collaboratively in Shared Virtual Worlds
Rainer Koschke¹, Marcel Steinbeck¹
¹ University of Bremen, Bibliothekstraße 1, 28359 Bremen, Germany


Abstract
Software visualization is useful to highlight certain aspects of software in a way that is easy for humans to grasp. In this paper, we present our software visualization platform SEE which, among other use cases related to software development, assists developers and architects in identifying inconsistencies between the architecture and the implementation of a software system, using the software reflexion model. SEE is based on the software-as-a-city metaphor and presents the generated software cities in virtual worlds that can be entered by multiple users from different locations (i.e., they do not have to be physically in the same place). Within these worlds, users can see each other as avatars and communicate via a built-in voice chat. A special feature of SEE is the ability for users to interact remotely with the cities in real-time, thus creating a basis for collaborative work that goes far beyond the classic means of distributed software development.

Keywords
reflexion analysis, software visualization, virtual and augmented reality, code cities, distributed development



ECSA2021 Companion Volume
koschke@uni-bremen.de (R. Koschke); marcel@informatik.uni-bremen.de (M. Steinbeck)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), http://ceur-ws.org, ISSN 1613-0073

1. Introduction

There has been a sustained trend towards distributed software development long before the current pandemic [1]. Distributed development is a consequence of budget and time limitations, lack of developers, need for specialized expertise, lack of space, and other factors. It takes place at large scale in terms of offshoring, but also at small scale within an organization whose developers of the same team are not all in the same room. If developers of distributed teams need to work together, spatial gaps need to be bridged. Remote joint development is a particular challenge in situations where tight collaboration requires a high degree of communication. This is, for instance, the case when a team needs to recover and validate a software architecture from an existing system. For large systems, there is rarely a single person who knows all the details. Hence, multiple developers need to work together to reconstruct an accurate architectural description and to decide which implementation dependencies violate the architectural rules and how to handle these violations.

There are a few collaborative UML modeling tools, where different users can work on the same diagrams to model an architecture [2, 3]. Likewise, there are collaborative integrated development environments (IDEs) such as IntelliJ IDEA with the feature Code With Me, which allow developers to edit and debug code collaboratively at the source-code level. Even though there are several tools to model and validate an architecture—even in the marketplace—we are not aware of any truly collaborative architecture modeling and checking tool that enables developers to model and validate an architecture together at the same time, yet at different locations. Currently, architects and developers need to use screen sharing and video-conferencing systems to use such tools remotely. These, however, are very generic tools with no relation to the actual task at hand and, hence, are cumbersome to use.

Contributions In this paper, we present our software visualization tool SEE (for Software Engineering Experience). SEE is a multi-purpose visualization platform based on the software-as-a-city metaphor that allows users (software architects, developers, etc.) at different locations (i.e., they do not have to be physically in the same place) to work collaboratively on software architecture in shared virtual worlds. Within these worlds, all users have a visual representation—an avatar—and can thus see each other. In addition, users can talk to each other via an integrated voice chat. The virtual worlds created by SEE are dynamic, so that users can highlight and change parts of the visualized software cities, which is visible to the other users in real-time. SEE can be used from different hardware devices: desktop computers, tablets, and virtual reality (VR) systems. One of the primary use cases of SEE is the support of the software reflexion model [4], that is, the automatic identification of inconsistencies between a specified software architecture and its implementation. This use case, and how it is implemented in SEE, is the central subject of this paper.

Outline The remainder of this paper is structured as follows. Section 2 presents related research. Section 3 describes SEE, and Section 4 how SEE can be used to support remote collaborative reflexion analysis. Section 5 concludes.
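To make the paper's central use case concrete: the software reflexion model [4] lifts implementation dependencies to the architecture level via a mapping and compares them with the specified architecture dependencies. The following is only an illustrative sketch of that set comparison (all function and entity names are hypothetical, not SEE's actual API):

```python
# Sketch of reflexion analysis in the style of the software reflexion
# model [4]: lift extracted implementation dependencies to the
# architecture level via the mapping and compare them with the
# specified architecture dependencies. Names are illustrative only.

def reflexion(arch_edges, impl_edges, mapping):
    """arch_edges: set of (component, component) specified dependencies.
    impl_edges: set of (entity, entity) extracted from the code.
    mapping: dict from implementation entity to architecture component."""
    # Lift implementation dependencies to the architecture level,
    # ignoring dependencies within one and the same component.
    lifted = {(mapping[src], mapping[dst])
              for src, dst in impl_edges
              if src in mapping and dst in mapping
              and mapping[src] != mapping[dst]}
    convergences = lifted & arch_edges   # specified and implemented
    divergences = lifted - arch_edges    # implemented but not specified
    absences = arch_edges - lifted       # specified but not implemented
    return convergences, divergences, absences

# Toy example: the UI may use the Logic, and the Logic may use the DB.
arch = {("UI", "Logic"), ("Logic", "DB")}
impl = {("Window", "Service"), ("Service", "Window")}
mapping = {"Window": "UI", "Service": "Logic"}
conv, div, absent = reflexion(arch, impl, mapping)
# conv = {("UI", "Logic")}; div = {("Logic", "UI")}; absent = {("Logic", "DB")}
```

In SEE, the analogous classification is computed automatically and visualized in the code cities; the sketch merely illustrates the underlying set comparison.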
Figure 1: Virtual room with code cities for different use cases



2. Related Research

Our research focuses on the visualization of software and software architecture using the software-as-a-city metaphor, with a special emphasis on collaborative visualizations where users can face each other in shared virtual worlds from different hardware devices. In this section, we present related research on software visualization with regard to the software-as-a-city metaphor, software visualization in virtual and augmented reality environments, and collaborative software visualization.

2.1. Software as a City

In addition to the quantitative properties of software (e.g., lines of code), the hierarchy (e.g., namespaces) is often another aspect that needs to be visualized. An early approach that covers both aspects at the same time is the so-called Tree-map [5] visualization. In Tree-maps, the hierarchy of a software system is depicted with recursively nested rectangles where the area of the innermost rectangles is proportional to a certain metric. Initially, Tree-maps were designed as a two-dimensional visualization. However, the idea quickly came up to map an additional metric to the height of the rectangles, leading to three-dimensional blocks. Such three-dimensional Tree-maps create the impression of a typical North American city with buildings arranged in a grid. Due to this pictorial representation, three-dimensional Tree-maps are also known as Code-Cities [6]. Code-Cities were quickly adopted by the research community and are still very popular today [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. Besides the original Tree-map/Code-City algorithm, there are also other layout methods grounded in the software-as-a-city metaphor, e.g., the EvoStreets visualization [27]. Using additional visual attributes, such as colors and textures mapped onto the surface of the three-dimensional blocks, further metrics can be expressed in Code-Cities [22]. Relations between entities (e.g., include dependencies, function calls, and so on) can be visualized with edges [28]. Hierarchical edge bundling, as proposed by Holten [29, 30], is a tried and tested means of diminishing the visual clutter that occurs when drawing many (overlapping) edges.

Code-Cities have long since arrived in practice and are now used in commercial products to visualize the change history and code quality of software. Examples include the software packages from Hello2morrow and Serene [31]. There is also a plug-in, SoftVis3D, for the widespread software analysis platform SonarQube, which is based on Tree-maps and EvoStreets [32].

2.2. Software Visualization in AR/VR

As early as 2000, there were considerations to bring Code-Cities into virtual reality (VR) environments [7, 8]. Then as today, it was hoped that the advantages of VR observed outside of computer science [33, 34, 35, 36] also apply to software visualization. Since then, Code-Cities have been used in pseudo-3D (desktop computer monitors) and VR environments to visualize static [37, 38, 10, 39, 16, 40, 41, 42, 23] as well as dynamic [43, 15, 44, 45, 21] aspects of software. Studies have shown that head-mounted displays (HMDs) may have a positive effect on the orientation [33] and navigation speed [46] of users. That said, there are also studies where a positive effect could not be found [47] or where using HMDs had only little effect [48]. How these different results are to be assessed is a subject of further research.

2.3. Collaborative Software Visualization

Co-located collaborative software visualization, where people interact with each other physically in the same place, was studied by Isenberg et al. [49, 50] and Anslow et al. [51, 52]. The authors developed a multi-touch display mounted onto a table that can be used by several people (primarily pairs of users) at the same time. One of
the key findings of these studies was that working face-to-face around the table (i.e., sharing individual findings and communicating throughout) was a successful way for the pairs to solve complex problems, and that collaborative software visualization should support multiple users. An early attempt at implementing a distributed collaborative visualization (i.e., users do not have to be physically in the same place) in the context of software comprehension can be found in the work of Kot et al. [53]. Based on the Quake 3 game engine, the authors developed a shared three-dimensional world where users can view, move, and arrange source-code files interactively. Further works in the area of collaborative software visualization are [38, 54, 55, 56, 3]—in a broader sense also [57]. Collaborative software visualization with a focus on Code-Cities in shared virtual worlds was recently studied by Zirkelbach et al. [58] and Jung et al. [25]. In both studies, users were represented as abstract avatars in the virtual world, allowing them to perceive each other and to support their verbal communication with a simple form of gesture. This aspect was received particularly positively in the study by Zirkelbach et al. [58]; yet, the participants would have preferred a more realistic, human representation.

None of those studies aimed at architecture modeling and validation, which is the focus of our paper.

3. SEE Software Visualization

SEE (Software Engineering Experience) is a multi-purpose, multi-user software visualization platform built upon the Unity game engine. The underlying data model visualized by SEE is a hierarchical graph with attributed nodes and edges—hierarchical means that nodes can be nested. Generally, SEE does not make any assumptions about what nodes, edges, and attributes represent and thus could be used to visualize anything that can be encoded as a hierarchical graph. However, the main purpose of SEE lies in the visualization of aspects related to software (e.g., source-code files, packages, components, and so on) and software development (e.g., change history, quality metrics, and the like). Graphs can be imported from GXL files, a standard file format for exchanging arbitrary graphs that is used both in academia and industry [59]. The nodes and edges of an imported graph are visualized as three-dimensional blocks (leaf nodes), two-dimensional surfaces enclosing blocks (inner nodes), and splines connecting blocks or surfaces (edges) that can be hierarchically bundled. The visual components of the blocks (width, height, depth, and color), surfaces (shape and color), and splines (thickness, color gradient, and bundling strategy) can be freely configured based on the attributes attached to the corresponding nodes and edges. Using one of the built-in layout engines, blocks (and thus implicitly also surfaces and splines) can be arranged automatically—at the moment of writing, SEE supports Circular Balloon, Circle Packing, Rectangle Packing, Tree-map, and EvoStreets layouts. That said, it is also possible to change the position of blocks and splines, and to insert and delete blocks and splines at any time (we will elaborate on that in Section 4.3).

The entirety of all visualized nodes and edges of a graph and their visual mappings forms a Code-City. That is, there is a one-to-one relation between an imported graph and a Code-City. Code-Cities can be placed arbitrarily in the virtual world of SEE. For example, we created a virtual world—a Unity scene—that reminds one of a small library. Within this library are different tables on which a city can be placed as desired. In fact, SEE allows one to visualize multiple cities at once. That is, with respect to our example, it is possible to import different graphs (or even the same graph multiple times), configure their visual mappings independently, and place each of the generated cities on its own table (cf. Figure 1).

4. Telecollaborative Reflexion

If one has an architecture modeling tool, one can share it via a screen sharing tool. Generally, those generic screen sharing tools provide very limited modes of interaction. Often only a single person has control over the screen, and everyone is forced to view its content from the same perspective; that is, the shared content is identical for everyone. Users cannot take their own perspective, see different details, or otherwise interact with the shown content independently from others. Screen sharing embedded in video-conference systems allows users to also view the other present members of the team, but just at the edges of the shared content. They may be able to point to particular areas of interest via their mouse cursor, but this pointing gesture is disconnected from the small video showing the participant triggering this gesture. There are first attempts to provide truly collaborative UML modeling tools, where different users can work on the same diagrams [2]. Likewise, there are collaborative integrated development environments (IDEs) such as IntelliJ IDEA with the feature Code With Me. Yet, we are not aware of any collaborative architecture modeling and checking tool. Moreover, those collaborative coding or modeling tools do not allow participants to actually see the other team members. A lot of information between humans is exchanged by non-verbal communication, however. For instance, delayed movements or frowning may express hesitation or uncertainty.

Our goal is to enable developers in distributed teams to model and validate software architectures even when they are not at the same place. To do that, we integrate different technologies ranging from ordinary desktop
computers with 2D display, keyboard, mouse, and camera over tablet computers with touchscreens to modern hardware for augmented (AR) and virtual reality (VR). The decisions for our kind of visualization and the provided interactions are founded on concepts of cognitive psychology including, but not limited to, the laws of Gestalt, cognitive schemas, and mirror neurons. In the following, we will delve into those details.

4.1. Cognitive Foundations

Visualization and interaction should be as intuitive as possible. Intuition provides insights without inferences of the conscious mind by drawing on processes that are entrenched in human cognition [60]. These cognitive mechanisms should be taken into account in the design of efficient and effective visualizations and interactions.

One such cognitive mechanism is the cognitive schema, an associative structure in the human brain by which knowledge and experience are organized. For instance, if an object is observed, typical perceptions of how one can interact with this object are activated in memory. These associations are not limited to simple relations; they can also include more complex behavior. All cognitive schemas have in common that they are triggered by a particular perception, for instance, an object, a situation, or a sensation. Dominic et al., for instance, have investigated how the activation of such cognitive schemas affected the behavior of participants in a VR scene [61]. They found that emotional reactions could be triggered based on a suitable stimulus. Kot et al. observed in a visualization in VR how the participants grabbed objects—here representing files—and took those objects to other present members to show them [53]. The observed behavior had not been foreseen, let alone purposefully implemented by the authors. The behavior just arose from a cognitive schema for small physical objects that can be grabbed. That means for our context that we should present objects in a visualization in a way that triggers the wanted behavior. For instance, the mapping of implementation elements onto architectural components can be expressed by simply stacking one on the other. Metaphorically, the architecture is a city map and an architectural component a district within this city map. Implementation elements are physical objects, for instance, blocks that can be grabbed and put on the map. If an entity is to be re-mapped, it is just moved to the other place. The possible interaction is naturally suggested by a human's experience with physical objects.

Another relevant cognitive mechanism is the set of laws of Gestalt, which hold independently of cultural and even interpersonal differences [62]. The laws of Gestalt are a set of principles of human perception of viewed scenes. For instance, the law of similarity predicts that physically similar items will be perceived as the same kind of object. That is, in our context, the same kind of implementation entities, for instance, all methods, should be depicted alike. The inverse implication for us is that it would be rather misleading if architectural and implementation components looked alike. Their form, color, and other visual attributes should be different to make clear that they are different concepts. The law of proximity states that objects close to each other appear to form a group. Automatic layout algorithms are generally agnostic to this law. They place their objects according to other criteria. For instance, tree-maps just try to save space, and the objects put in close neighborhood generally have no semantic relation. In our visualization, humans will model the architecture and in doing so obey this law naturally. Moreover, because we have multiple humans modeling the same architecture together, a process to reach a consensus is enforced because an object can be at only one place. Implementation entities are generally not modeled by humans; they are typically extracted from the source code through static or dynamic analysis. They have no initial physical location in 2D or 3D. The only partial ordering criteria we can extract from the code are hierarchical nesting (e.g., syntactic nesting, physical containment in the file system, or type hierarchies), linear order of declaration within a file, and dependencies among declarations, e.g., call relations. There are many layout algorithms that consider hierarchies. For instance, our current implementation offers tree-maps, EvoStreets, circle packing, rectangle packing, and balloon layouting. Moreover, force-directed layouts will group together elements according to their dependencies. While these algorithms may provide a good first placement of the elements extracted from the code, they have no deeper knowledge of the semantics beyond direct dependencies and hierarchies. For this reason, humans are free to move the objects in our visualization arbitrarily once they have been laid out automatically. In particular, when it comes to the mapping of implementation entities onto architectural components, they may stack those at arbitrary places within the boundaries of the visual element representing the architectural component they are mapped to. This way, the law of proximity will again hold.

Mirror neurons describe a property of certain neurons that were first detected in the brains of monkeys but were later shown to exist in human brains, too. Certain neurons, for instance, those responsible for controlling a particular movement of the human body, are not only activated when this behavior is to be executed but also when a human merely observes this behavior in another person [63]. Recent research has shown that this property is not limited to motor neurons specialized in muscle control. Even neurons deriving bodily reactions from certain sensations were shown to have this property [64]. The implication in the context of collaborative visualization may be that it could be advantageous to show other
present participants in a visualization as avatars such that their movements can be observed by the beholders. Triggering their respective mirror neurons could not only help in learning interactions by example but also provide non-verbal clues about the sentiment of the acting person, e.g., hesitant movements indicating uncertainty or forceful movements indicating definiteness or even anger. Our conjecture of the relevance of showing the behavior of collaborating partners triggering mirror neurons is indicated by several studies on collaborative visualization [58, 49, 51, 52, 25]. In particular, the participants of the study by Zirkelbach et al. have explicitly stated that they appreciated the presence of other members [58]. That study is interesting due to the way the presence of the other participants was visualized: The participants were drawn only as a virtual head-mounted display and the two hand-held controllers in VR, not as human-like avatars. This way, others could observe where someone was looking or where someone was pointing with the hand, which was appraised as useful by the participants of the study. However, the participants stated that they would have preferred a more human-like representation. This outcome is consistent with studies in robotics which found that more human-like robots are generally more accepted [65] (up to the point where these machines become too similar to real humans [66] and start to frighten humans). To further explore the advantages of avatars and also to overcome the said disadvantages of current video-conference and screen-sharing systems discussed above, we show the participants present by way of human avatars.

4.2. Visualization and Interaction

After having introduced some of the foundations of human cognition influencing our design decisions for the visualization and interaction, we will describe the latter two in greater detail.

Our early idea with Code-Cities was to present them true to scale, that is, the proportions of the human body and the buildings representing software entities were as in the real world [24, 67, 68, 69]. This causes problems of orientation due to occlusion and the limitations of human short-term memory. While some of that might be mitigated by mini-maps showing the current position of a person as in the real world, it is still difficult to see other participants if they are in other parts of the city. Of course, one could blend them into the visible area of the beholder, either at its edges as in video-conference systems or as part of a hand-held device analogously to video calls with a smart phone. Yet, that basically means virtualizing video-conference systems in a virtual world, and we wanted to overcome the problems of those. Mini-maps and embedded video-conference calls are just technical crutches to remedy bad design decisions. Moreover, cognitive schemas will not be triggered by this design. Humans would not be tempted to move around buildings that are magnitudes larger than themselves.

Our new design is more similar to approaches to software visualization in co-located environments. In previous studies on collaborative visualization, researchers have experimented with large multi-touch displays integrated into tables for the joint interaction with software visualizations at the same physical location [49, 50, 51, 52]. The human beholders group around the physical table (display) and can see both the visualization and the other participants at the same time. They can communicate with each other both verbally and non-verbally, which has been observed as a great advantage in these studies. Our approach can be viewed as a virtualization of this setting.

We provide a virtual room with several tables, each showing one particular software architecture and its implementation. Large organizations may have multiple applications, and all of them could be made available in the same virtual room. This way, participants could walk from table to table and work on different programs or just compare them. Each program is represented by one Code-City, and generally there is one Code-City on each table. Participants are, however, able to take a Code-City to another table if they want to make comparisons between different programs. The Code-Cities are shown in miniature. They can be scaled and zoomed, however. Unlike the physical monitors for co-located environments, there are no limits enforced for scaling; participants will stop scaling by themselves at the point where they think it becomes useless. And also unlike the physical displays in co-located environments, our visualization has three dimensions. In particular, for implementation components, height may have an important meaning, for instance, the size of a class. If they are embedded in architectural components, it becomes immediately visible which architectural components tend to have god classes.

Another advantage over physical multi-touch tables is that we can individualize what can be seen by each beholder. We take great care that the virtual room is consistent among all participants; for instance, if one participant grabs an object, this object must move in all representations of the virtual room for all participants. Thus, our visualization is essentially a distributed real-time application. We even have a global undo/redo history identifying and prohibiting conflicting actions, e.g., one participant renames an element and the other one removes the same element. Nonetheless, there are aspects for which it makes sense to draw them specifically for one beholder. For instance, on physical multi-touch tables, labels necessarily have exactly one orientation. A person on the opposite side will have difficulties reading them. Our virtual labels always face the beholder. Moreover, participants can query additional details on demand,
for instance, the source code of an implementation entity presented in a code viewer or additional metrics shown in a scatter plot. These additional views can be shared or not, depending upon whether the beholder has only a personal interest or whether she or he wants to talk about it. If they were always shown, as would be the case on physical multi-touch tables, they could distract others.

The participants are visible as human avatars and can communicate with each other via voice. The avatar's lips are synchronized with the spoken word so that it can be seen who is talking. To synchronize the movement of a human in the real world with his or her avatar, we leverage the position data of the head-mounted display and hand-held controllers in VR. In the case of an ordinary desktop environment, we can derive the viewing angle of the avatar's head from the viewpoint (in game-engine lingo, the camera angle) of its human counterpart. We currently do not have sensors for the human's hands in desktop environments. We plan to derive this information from 3D depth cameras or even ordinary cameras with suitable image-recognition software, or from physical trackers such as HTC's Vive trackers. Neither do we have a way to capture, transfer, and present facial expressions yet. Again, we will attempt to capture these data through cameras and ideally present them as real-time video on the avatar's face. There are already commercial applications1 for animating avatars according to the facial expressions of a human, showing that this is doable.

1 https://facewaretech.com

4.3. Virtual Reflexion Analysis

We are using the reflexion analysis [4, 70, 71] to reconstruct and validate the software architecture. This section describes in more detail how each step is implemented in terms of the design of the visualization and interaction.

Architecture modeling The first step of the reflexion analysis is to create a model of the architecture. Conceptually, the model forms a graph where nodes represent architectural components and directed edges specify expected dependencies between the connected components. Nodes can be nested in other nodes, expressing hierarchical systems [70]. Our users can create such models on various devices, namely, ordinary desktop computers with mouse interactions, tablets with a pen recognizing shapes and edge-drawing actions (a user can virtually draw an architecture with the pen), and physical actions with the handheld controllers in VR environments. We plan to support Microsoft's HoloLens for AR, too. For desktops, we also experimented with a hand-tracking device named Leap Motion enabling a user to create nodes and edges with hand gestures. The problem with this approach is that the device has a limited range of visibility, forcing a user's hands to be held out uncomfortably; gesture detection can be difficult if fingers occlude each other; and the precision was not sufficient for fine interactions through direct manipulation of selected objects.

The architecture can be modeled on the plane of a table around which all participants group. The creation of objects is instantaneous on all connected computers so that all participants always have the same view. If hand-tracking data is available, all participants can observe who is creating the new node or edge by visually following the hand. New and deleted nodes and edges are animated so that changes are highlighted to everyone present. It is important that everyone can see who initiated a change even before the change is actually finalized, so that they can possibly intervene and so that all recent changes are obvious. To make sure that the architecture is syntactically consistent (e.g., there are no dangling edges), conflicting changes are detected and refused.

Users want to name the nodes they created, which can be a challenge in VR. We offer a virtual keyboard, but the haptic feedback is of course missing. For this reason, we allow a user to dictate a name for a new node through voice recognition, which works surprisingly well when the name is not cryptic—and maybe it is better to avoid cryptic names anyhow. At least, the recognized name may be a good starting point that can then be corrected by way of the virtual keyboard, keeping the necessary virtual keystrokes to a minimum. Similarly to Seipel et al. [72], we also offer a conversational interface to initiate other actions (e.g., for showing the code of a component) to free the user from the need to use a keyboard.

Mapping the implementation onto the architecture Once the architecture model is created to the point that one can move on to relating the implementation components to the architecture components they implement, users can drag and drop implementation components onto architecture components. This kind of mapping is expressed through nesting, that is, the objects representing an implementation entity are stacked on the area of an object representing an architectural component. This interaction leverages conceptual schemas (small objects can be grabbed and moved) and the laws of Gestalt (an implementation entity is enclosed by an architecture component). It also leverages the mirror neurons, as the physical action of the movement can be observed by the other participants.

Initially, the implementation is drawn as a Code-City next to the space where the architecture model is created by the user. Because the Code-City's elements are extracted by a static analysis, an automated layout will first decide how to place them. The user has the choice among various layouts we offer. The implementation nodes can afterwards be moved around within the limits of their
containing node. The user can select the code metrics determining the width, height, depth, and color range of the nodes. All nodes of the same type (e.g., classes) have the same shape, which can be selected by the user, too. The type of the edges is depicted by color. Their direction is shown as a color gradient of the chosen color. Many other applications use arrowheads instead, but these may overlap for nodes with many connecting edges. Edges are laid out through hierarchical bundling [29], which helps to reduce visual clutter when there are many dependencies. Incoming and outgoing direct and transitive edges can be hidden or highlighted on demand. The source code leading to a particular node or edge can be opened in an in-game window with syntax highlighting.

To map an implementation entity onto the architecture, the user just grabs the object and drags it to the architecture component. This constitutes an explicit mapping. All descendants of the moved object in the node hierarchy are moved along with it—unless they have been mapped before. Those are implicitly mapped. If a user wants to map an implicitly mapped entity somewhere else, he or she just moves its node in the architecture to another target. Again, syntactic checks are in place to make sure that an implementation node cannot be moved to another implementation node (unless that one is its original parent), because that would create a node hierarchy that is inconsistent with the code.

Nodes that were mapped explicitly or implicitly are marked visually as such. The mapping creates a logical copy of a node when it is moved into an architecture node. Its original representation in the separate Code-City for the implementation becomes transparent to make clear that it has already been mapped. If either of the two nodes is selected, its counterpart is selected, too, to make their connection clear. Preserving the original node in the implementation representation helps to study its relation to other nodes in the context of the original implementation, which may be useful information for the decision where to map its neighbors. It also helps to assess the progress of the mapping process.

Reflexion analysis As soon as the two ends of an implementation dependency (the source and target nodes of the corresponding edge) are mapped (implicitly or explicitly), the automated reflexion analysis can determine whether the implementation dependency is allowed (i.e., covered by a corresponding architecture dependency) or represents a divergence (i.e., there is no such corresponding architecture dependency allowing it). Likewise, initially, when nothing has been mapped, all architecture dependencies are so-called absences, that is, there is no implementation dependency confirming them. Whenever nodes are mapped, these could turn into so-called convergences, that is, there is actually an implementation dependency confirming them. We are using our incremental reflexion analysis [71] to compute the effect of each mapping decision, thereby keeping the effort of the recalculation to a minimum. This on-the-fly computation also supports what-if scenarios, where a user can drag an implementation node over different architecture nodes to see the possible effect of a mapping immediately. All edges affected by a new mapping are animated so that the mapping effects can be observed among the many other edges present in the scene.

Edges in both the implementation and architecture are typed. Typed dependencies in an architecture make sense, for instance, to allow calls between components but not accesses to attributes. As mentioned above, colors are used for the type of edges. As a consequence, edge color cannot be used to distinguish between implementation and architecture dependencies. To show this distinction, architecture dependencies are drawn noticeably thicker than implementation edges. For realistic systems, there are many more implementation than architecture edges, and the focus of this visualization is the architecture; hence, it makes sense to show the architecture dependencies more prominently.

The remaining question is now: how to show whether an edge is allowed, divergent, absent, or convergent when color is no option because it is already used for edge types? Animation is already used for changed edges, and animation in general should be kept to a minimum; otherwise it may become annoying. We see no need to highlight allowed implementation edges and convergent architecture edges, because everything is in order with these. A user wants to see primarily where implementation and architecture differ, that is, divergent and absent edges should be easy to spot. Some tools decorate edges with a symbol to mark them as divergent or absent, but we find such decorations difficult to relate to a particular edge when there are many edges, in particular if edge bundling is applied. For this reason, we are using a radiance effect and dashed lines for absences and divergences.

Added value of architecture Architecture conformance checking is an important measure to ensure architecture and implementation are in sync, but there is potential for more added value. Architecture is a suitable abstraction to discuss other aspects of the implementation within distributed teams. Because the implementation is visually embedded in the architecture in our visualization, it is easy to relate details of the implementation to the architecture. For instance, we support dynamic analysis where a user can trace the control flow by way of animated edges for dynamic calls, which raises the level of abstraction in the context of debugging. This way, the static architecture also gets a dynamic view. Similarly, we visualize performance data by way of spheres above methods whose radius represents the CPU time spent
within those and the number of calls by way of a color gradient, which makes it possible to relate performance bottlenecks to the architecture. Test coverage metrics can be visualized through coloring such that the untested parts of the architecture can be spotted easily. Also, we show the change history along with the trends in code erosion as a kind of movie that shows how the system evolved both in terms of changes and quality. The implementation embedded in the architecture drawn as a Code-City is a uniform representation in all these development practices and may provide insights that can be discussed in a distributed team.

5. Conclusions

In this paper, we have described our software visualization platform SEE and how it can be used to support the reflexion analysis collaboratively for distributed teams. We explained the most important design decisions for the visualization and interaction based on current knowledge of cognitive psychology. It is still work in progress. As a next step, we plan to evaluate our design decisions empirically.

References

[1] C. Ebert, M. Kuhrmann, R. Prikladnicki, Global software engineering: evolution and trends, in: International Conference on Global Software Engineering, 2016, pp. 144–153.
[2] M. Magin, S. Kopf, A Collaborative Multi-Touch UML Design Tool, Technical Report TR-2013-001, University of Mannheim, Germany, 2013.
[3] M. Ferenc, I. Polasek, J. Vincur, Collaborative modeling and visualization of software systems using multidimensional UML, in: IEEE Working Conference on Software Visualization, 2017, pp. 99–103.
[4] G. C. Murphy, D. Notkin, K. Sullivan, Software reflexion models: Bridging the gap between source and high-level models, in: ACM SIGSOFT Symposium on the Foundations of Software Engineering, ACM Press, 1995, pp. 18–28.
[5] B. Johnson, B. Shneiderman, Tree-maps: A space-filling approach to the visualization of hierarchical information structures, in: Proceedings of the Conference on Visualization, IEEE Computer Society Press, 1991, pp. 284–291.
[6] K. Andrews, J. Wolte, M. Pichler, Information pyramids: A new approach to visualising large hierarchies, in: IEEE Conference on Visualization, 1997, pp. 49–52.
[7] C. Knight, M. Munro, Virtual but visible software, in: International Conference on Information Visualization, IEEE, 2000, pp. 198–205.
[8] S. M. Charters, C. Knight, N. Thomas, M. Munro, Visualisation for informed decision making; from code to components, in: International Conference on Software Engineering and Knowledge Engineering, 2002, pp. 765–772.
[9] M. Balzer, A. Noack, O. Deussen, C. Lewerentz, Software landscapes: Visualizing the structure of large software systems, in: IEEE TCVG Symposium on Visualization, 2004, pp. 261–266.
[10] T. Panas, R. Berrigan, J. Grundy, A 3D metaphor for software production visualization, in: International Conference on Information Visualization, IEEE, 2003, pp. 314–319.
[11] A. Marcus, L. Feng, J. I. Maletic, 3D representations for software visualization, in: ACM International Symposium on Software Visualization, 2003, pp. 27–36.
[12] R. Wettel, M. Lanza, Visualizing software systems as cities, in: IEEE International Workshop on Visualizing Software for Understanding and Analysis, 2007, pp. 92–99.
[13] R. Wettel, M. Lanza, CodeCity: 3D visualization of large-scale software, in: Companion of the 30th International Conference on Software Engineering, ACM, 2008, pp. 921–922.
[14] R. Wettel, M. Lanza, Visual exploration of large-scale system evolution, in: IEEE Working Conference on Reverse Engineering, 2008, pp. 219–228.
[15] F. Fittkau, S. Roth, W. Hasselbring, ExplorViz: visual runtime behavior analysis of enterprise application landscapes, in: European Conference on Information Systems, 2015, pp. 1–13.
[16] F. Fittkau, A. Krause, W. Hasselbring, Exploring software cities in virtual reality, in: IEEE Working Conference on Software Visualization, 2015, pp. 130–134.
[17] G. Balogh, A. Szabolics, A. Beszédes, CodeMetropolis: Eclipse over the city of source code, in: IEEE International Working Conference on Source Code Analysis and Manipulation, 2015, pp. 271–276.
[18] L. Merino, M. Ghafari, C. Anslow, O. Nierstrasz, CityVR: Gameful software visualization, in: IEEE International Conference on Software Maintenance and Evolution (TD Track), 2017, pp. 633–637.
[19] L. Merino, A. Bergel, O. Nierstrasz, Overcoming issues of 3D software visualization through immersive augmented reality, in: IEEE Working Conference on Software Visualization, 2018, pp. 54–64.
[20] W. Scheibel, C. Weyand, J. Döllner, Evocells - A treemap layout algorithm for evolving tree data, in: International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2018, pp. 273–280.
[21] L. Merino, M. Hess, A. Bergel, O. Nierstrasz, D. Weiskopf, PerfVis: Pervasive visualization in immersive augmented reality for performance awareness, in: ACM/SPEC International Conference on Performance Engineering, 2019, pp. 13–16.
[22] D. Limberger, W. Scheibel, J. Döllner, M. Trapp, Advanced visual metaphors and techniques for software maps, in: International Symposium on Visual Information Communication and Interaction, 2019, pp. 1–8.
[23] A. Schreiber, L. Nafeie, A. Baranowski, P. Seipel, M. Misiak, Visualization of software architectures in virtual reality and augmented reality, IEEE Aerospace Conference (2019) 1–12.
[24] M. Steinbeck, R. Koschke, M.-O. Rüdel, How EvoStreets are observed in three-dimensional and virtual reality environments, in: IEEE International Conference on Software Analysis, Evolution and Reengineering, 2020, pp. 332–343.
[25] F. Jung, V. Dashuber, M. Philippsen, Towards collaborative and dynamic software visualization in VR, in: Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: IVAPP, INSTICC, SciTePress, 2020, pp. 149–156.
[26] V. Dashuber, M. Philippsen, J. Weigend, A layered software city for dependency visualization, in: International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, volume 3, SciTePress, 2021, pp. 15–26.
[27] F. Steinbrückner, C. Lewerentz, Representing development history in software cities, in: ACM International Symposium on Software Visualization, ACM, 2010, pp. 193–202.
[28] R. Koschke, Software visualization in software maintenance, reverse engineering, and reengineering: A research survey, Journal on Software Maintenance and Evolution 15 (2003) 87–109.
[29] D. H. R. Holten, Hierarchical edge bundles: Visualization of adjacency relations in hierarchical data, IEEE Transactions on Visualization and Computer Graphics 12 (2006) 741–748.
[30] D. H. R. Holten, Visualization of graphs and trees for software analysis, Ph.D. thesis, Technical University of Delft, 2009.
[31] J. Bohnet, Visualization of Execution Traces and its Application to Software Maintenance, Dissertation, Hasso-Plattner-Institut, Universität Potsdam, 2010.
[32] SoftVis3D, SoftVis3D website, https://softvis3d.com, 2021. Online; accessed 30-June-2021.
[33] S. S. Chance, F. Gaunet, A. C. Beall, J. M. Loomis, Locomotion mode affects the updating of objects encountered during travel: The contribution of vestibular and proprioceptive inputs to path integration, Presence: Teleoper. Virtual Environ. 7 (1998) 168–178.
[34] D. A. Bowman, E. T. Davis, L. F. Hodges, A. N. Badre, Maintaining spatial orientation during travel in an immersive virtual environment, Presence: Teleoper. Virtual Environ. 8 (1999) 618–631.
[35] B. E. Riecke, D. W. Cunningham, H. H. Bülthoff, Spatial updating in virtual reality: the sufficiency of visual information, Psychological Research 71 (2007) 298–313.
[36] J. W. Regian, W. L. Shebilske, J. M. Monk, Virtual reality: An instructional medium for visual-spatial tasks, Journal of Communication 42 (1992) 136–149.
[37] N. Capece, U. Erra, S. Romano, G. Scanniello, Visualising a software system as a city through virtual reality, in: L. T. De Paolis, P. Bourdot, A. Mongelli (Eds.), Augmented Reality, Virtual Reality, and Computer Graphics, Springer International Publishing, Cham, 2017, pp. 319–327.
[38] J. I. Maletic, J. Leigh, A. Marcus, G. Dunlap, Visualizing object-oriented software in virtual reality, in: International Workshop on Program Comprehension, 2001, pp. 26–35.
[39] T. Panas, T. Epperly, D. Quinlan, A. Saebjornsen, R. Vuduc, Communicating software architecture using a unified single-view visualization, in: IEEE International Conference on Engineering Complex Computer Systems, IEEE, 2007, pp. 217–228.
[40] P. Khaloo, M. Maghoumi, E. Taranta, D. Bettner, J. Laviola, Code Park: A new 3D code visualization tool, in: IEEE Working Conference on Software Visualization, IEEE, 2017, pp. 43–53.
[41] L. Merino, J. Fuchs, M. Blumenschein, C. Anslow, M. Ghafari, O. Nierstrasz, M. Behrisch, D. A. Keim, On the impact of the medium in the effectiveness of 3D software visualizations, in: IEEE Working Conference on Software Visualization, IEEE, 2017, pp. 11–21.
[42] A. Schreiber, M. Brüggemann, Interactive visualization of software components with virtual reality headsets, in: IEEE Working Conference on Software Visualization, IEEE, 2017, pp. 119–123.
[43] J. Waller, C. Wulf, F. Fittkau, P. Döhring, W. Hasselbring, SynchroVis: 3D visualization of monitoring traces in the city metaphor for analyzing concurrency, in: IEEE Working Conference on Software Visualization, 2013, pp. 1–4.
[44] K. Ogami, R. G. Kula, H. Hata, T. Ishio, K. Matsumoto, Using high-rising cities to visualize performance in real-time, in: IEEE Working Conference on Software Visualization, IEEE, 2017, pp. 33–42.
[45] F. Fernandes, C. S. Rodrigues, C. Werner, Dynamic analysis of software systems through virtual reality, in: Symposium on Virtual and Augmented Reality, 2017, pp. 331–340. In Spanish.
[46] R. A. Ruddle, S. J. Payne, D. M. Jones, Navigating large-scale virtual environments: what differences occur between helmet-mounted and desk-top displays?, Presence: Teleoperators & Virtual Environments 8 (1999) 157–168.
[47] B. Sousa Santos, P. Dias, A. Pimentel, J.-W. Baggerman, C. Ferreira, S. Silva, J. Madeira, Head-mounted display versus desktop for 3D navigation in virtual reality: A user study, Multimedia Tools and Applications 41 (2009) 161–181.
[48] R. A. Ruddle, P. Péruch, Effects of proprioceptive feedback and environmental characteristics on spatial learning in virtual environments, International Journal of Human-Computer Studies 60 (2004) 299–326.
[49] P. Isenberg, D. Fisher, M. R. Morris, K. Inkpen Quinn, M. Czerwinski, An exploratory study of co-located collaborative visual analytics around a tabletop display, IEEE Symposium on Visual Analytics Science and Technology (2010) 179–186.
[50] P. Isenberg, D. Fisher, S. A. Paul, M. R. Morris, K. Inkpen, M. Czerwinski, Co-located collaborative visual analytics around a tabletop display, IEEE Transactions on Visualization and Computer Graphics 18 (2012) 689–702.
[51] C. Anslow, S. Marshall, J. Noble, R. Biddle, SourceVis: Collaborative software visualization for co-located environments, in: IEEE Working Conference on Software Visualization, 2013, pp. 1–10.
[52] C. Anslow, Reflections on collaborative software visualization in co-located environments, in: IEEE International Conference on Software Maintenance and Evolution, 2014, pp. 645–650.
[53] B. Kot, B. Wuensche, J. Grundy, J. Hosking, Information visualisation utilising 3D computer game engines case study: A source code comprehension tool, in: ACM SIGCHI New Zealand Chapter's International Conference on Computer-Human Interaction: Making CHI Natural, 2005, pp. 53–60.
[54] M. D'Ambros, M. Lanza, A flexible framework to support collaborative software evolution analysis, in: European Conference on Software Maintenance and Reengineering, 2008, pp. 3–12.
[55] M. D'Ambros, M. Lanza, Distributed and collaborative software evolution analysis with Churrasco, Science of Computer Programming 75 (2010) 276–287.
[56] T. Panas, T. Epperly, D. Quinlan, A. Saebjornsen, R. Vuduc, Communicating software architecture using a unified single-view visualization, in: IEEE International Conference on Engineering Complex Computer Systems, 2007, pp. 217–228.
[57] E. Stroulia, I. Matichuk, F. Rocha, K. Bauer, Interactive exploration of collaborative software-development data, in: IEEE International Conference on Software Maintenance, 2013, pp. 504–507.
[58] C. Zirkelbach, A. Krause, W. Hasselbring, Hands-On: Experiencing Software Architecture in Virtual Reality, Research Report 1809, Christian-Albrechts-Universität zu Kiel, 2019.
[59] R. Holt, A. Winter, A. Schürr, GXL: toward a standard exchange format, in: IEEE Working Conference on Reverse Engineering, 2000, pp. 162–171.
[60] C. G. Jung, Gesammelte Werke, Band 6: Psychologische Typen, Walter Verlag, 1995, p. 474 f.
[61] J. Dominic, B. Tubre, J. Houser, C. Ritter, D. Kunkel, P. Rodeghero, Program comprehension in virtual reality, in: Proceedings of the 28th International Conference on Program Comprehension, 2020, pp. 391–395.
[62] W. MacNamara, Evaluating the effectiveness of the gestalt principles of perceptual observation for virtual reality user interface design (2017).
[63] M. Fabbri-Destro, G. Rizzolatti, Mirror neurons and mirror systems in monkeys and humans, Physiology 23 (2008) 171–179.
[64] F. De Vignemont, T. Singer, The empathic brain: how, when and why?, Trends in Cognitive Sciences 10 (2006) 435–441.
[65] A. Prakash, W. A. Rogers, Why some humanoid faces are perceived more positively than others: Effects of human-likeness and task, International Journal of Social Robotics 7 (2015) 309–331.
[66] M. Mori, K. F. MacDorman, N. Kageki, The uncanny valley [from the field], IEEE Robotics & Automation Magazine 19 (2012) 98–100.
[67] R. Koschke, M. Steinbeck, Clustering paths with dynamic time warping, in: IEEE Working Conference on Software Visualization, 2020, pp. 89–99.
[68] M. Steinbeck, R. Koschke, M.-O. Rüdel, Comparing the EvoStreet visualization technique in two- and three-dimensional environments—a controlled experiment, in: International Conference on Program Comprehension, 2019, pp. 231–242.
[69] M. Rüdel, J. Ganser, R. Koschke, A controlled experiment on spatial orientation in VR-based software cities, in: IEEE Working Conference on Software Visualization, 2018, pp. 21–31.
[70] R. Koschke, D. Simon, Hierarchical reflexion models, in: IEEE Working Conference on Reverse Engineering, 2003, pp. 36–45.
[71] R. Koschke, Incremental reflexion analysis, Journal on Software Maintenance and Evolution 25 (2013) 601–637.
[72] P. Seipel, A. Stock, S. Santhanam, A. Baranowski, N. Hochgeschwender, A. Schreiber, Adopting conversational interfaces for exploring OSGi-based software architectures in augmented reality, IEEE/ACM 1st International Workshop on Bots in Software Engineering (BotSE) (2019) 20–21.