<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IS-EUD</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Environment for Virtual Merchandizing in XR</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alessandro Menale</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jacopo Mereu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carlo Nuvole</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luigi Pannuti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emanuele Mario Spano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lucio Davide Spano</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Techedge S.p.A.</institution>
          ,
          <addr-line>Via Caldera 21, Building B2, 20153, Milano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Cagliari, Dept. of Mathematics and Computer Science</institution>
          ,
          <addr-line>Via Ospedale 72, 09124, Cagliari</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>9</volume>
      <fpage>6</fpage>
      <lpage>8</lpage>
      <kwd-group>
        <kwd>eXtended Reality</kwd>
        <kwd>End-User Development</kwd>
        <kwd>Configuration</kwd>
        <kwd>Natural Language</kwd>
        <kwd>Rules</kwd>
        <kwd>Event-Condition-Actions</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Abstract</title>
      <p>
This paper presents the current development state of VMXR, a Proof of Concept (PoC) environment
allowing people without programming experience to create and configure product showcases in a
Virtual and eXtended reality setting. The aim of the PoC is to identify proper metaphors and workflows
for supporting showcase designers in creating interactions with the virtual product representation or
enhancing the physical environment through additional information and media.</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        The availability in the past few years of consumer-targeted hardware for immersive Virtual
and eXtended Reality experiences (VR and XR) has attracted the attention of different media and
software companies, which started investing effort in producing content and experiences for
these modalities. However, while the amount of content created by teams including professional developers
and 3D artists is steadily growing and is expected to keep growing in the coming years [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], it is still challenging to
open the creation of XR content to non-tech professionals. Current commercial tools contain
scene builders or inspectors, allowing non-experts to position static contents and activate simple
trigger-based interactions (e.g., showing or hiding information overlays) [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ]. Supporting
end users in defining more complex interactions in virtual environments is still an open research
question. Recent advances in this field have been limited to environments based on 360° videos [
        <xref ref-type="bibr" rid="ref5 ref6">5,
6</xref>
        ], or they mainly target developers, aiming at reducing the burden of the build-test-fix
cycle [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. This paper summarises the advances in developing VMXR (Virtual Merchandising in
eXtended Reality), a Proof of Concept (PoC) environment for supporting end users in creating
XR experiences dedicated to product showcases. The PoC applies the method described in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]
to a relevant field for the Italian industry, establishing a workflow among developers and
content designers. The tool allows them to create and update the experiences by inserting and
configuring objects that define domain-relevant actions. They define the interactive behaviour
of these objects through Event-Condition-Action (ECA) rules.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2. Related Work</title>
      <p>End users can create VR environments with the help of various commercial and research
tools. Most of these programs can only produce static scenes or overlay multimedia elements.
For instance, Spoke [9] supports the composition of 3D models through user-friendly tools.
However, these tools only allow static scene configuration, while proper XR experiences require
interactive features beyond exploration and animations.</p>
      <p>
        Tools specifically designed for end users exist. XOOM [10] allows creating web-based
immersive VR applications, but it is specifically focused on the cultural heritage domain. More general tools
like VR GREP [11] have a broader application scope, but the available interactions are limited
to navigation and reactions to button activations. More recently, tools like FlowMatic [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
provide immersive authoring, allowing programmers to specify reactions to discrete
events (e.g., user actions and system timers). VREUD [12] defines a simplified authoring
environment for novice VR developers, while X-Reality exploits a dataflow graphical environment for
creating applications combining virtual contents and physical devices. These approaches
focus on rapid prototyping or simplified definitions of common interactions. ECARules4All [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ],
the work that inspired our tool, provides a more general approach, which leverages meta-design
for reducing the development complexity. End users exploit configurable templates for creating
XR experiences, instead of developing whole environments from scratch. Professional
developers create a simplified programming interface based on natural language rules to support
such configuration and the definition of complex interactive behaviours.
      </p>
    </sec>
    <sec id="sec-4">
      <title>3. VMXR Overview and Workflow</title>
      <p>The proposed solution is similar to Content Management Systems for the web: users without
technical knowledge start from a template XR environment containing dummy content, which
they adapt by adding meaningful content and configuring its behaviour. Different users can use
the same template, but they will end up defining different experiences. In our case, end users
start from predefined environments representing virtual shops. These predefined environments
are the counterpart of web templates in our solution. Experienced developers and/or 3D
modellers create them and provide means for their configuration. In VMXR, this process consists
of adding 3D objects from a predefined list of possible categories and defining the interactions
they support. More in detail, configuring a template means:
• Changing settings related to objects or features already present in the scene (e.g. material
and position of walls);
• Inserting objects not present in the template, from a predefined object list;
• Inserting multimedia material (images, text, video);
• Defining interactions with objects (e.g. what happens when a user grabs a product).</p>
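      <p>As a purely illustrative sketch (the paper does not specify how VMXR stores a configuration; all keys and values below are hypothetical), the outcome of the four operations above can be pictured as structured data:</p>
      <preformat>
```python
# Hypothetical record of a configured template, mirroring the four
# operations listed above; names and values are illustrative only.
configuration = {
    "settings": {"walls": {"material": "plaster", "height_m": 3.0}},
    "inserted_objects": [{"category": "screen", "id": "screen-1"}],
    "media": [{"type": "video", "file": "promo.mp4", "target": "screen-1"}],
    "rules": ["When Player grabs product-1 then screen-1 plays promo.mp4"],
}
print(len(configuration["rules"]))
```
      </preformat>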
      <p>
        While we can find established solutions supporting the first three points in the literature (see
Section 2), for the fourth point we applied the general solution proposed in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which provides
the end users with an effective but understandable mechanism for defining the environment
behaviour. We use Event-Condition-Action rules expressed in natural language, following this
pattern:
When &lt;Event&gt; (if &lt;Condition&gt;)? then (&lt;Action&gt;)*.
      </p>
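      <p>As an illustration of this pattern only (not VMXR's actual code, which is Unity-based and not shown in the paper; all class and field names below are hypothetical), a rule following this grammar can be modelled as a small data structure:</p>
      <preformat>
```python
# Illustrative model of the rule pattern "When EVENT (if CONDITION)? then ACTION*".
# This is NOT VMXR's implementation; all names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Step:
    subject: str               # e.g. "Player" or "Screen"
    verb: str                  # e.g. "is close to", "plays"
    obj: Optional[str] = None  # optional object of the verb

@dataclass
class Rule:
    event: Step                      # the mandatory triggering event
    condition: Optional[str] = None  # the optional condition part
    actions: list = field(default_factory=list)  # zero or more actions

    def __str__(self):
        cond = f" if {self.condition}" if self.condition else ""
        acts = "; ".join(f"{a.subject} {a.verb} {a.obj}" for a in self.actions)
        return f"When {self.event.subject} {self.event.verb} {self.event.obj}{cond} then {acts}."

rule = Rule(event=Step("Player", "is close to", "TV-trigger"),
            actions=[Step("Screen", "plays", "promo-video")])
print(rule)  # When Player is close to TV-trigger then Screen plays promo-video.
```
      </preformat>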
      <p>In this schema, an event is a user-input notification or the successful execution of an action.
User inputs include pointing, selecting, grabbing and releasing objects in the environment. In
addition, it is also possible to specify proximity interactions according to the user’s position
in the environment (e.g., when the user is close to a table). An action is a high-level feature
supported by a given category of objects. For instance, lights support actions such as turn-on
or turn-off, and screens can display images and play or stop video playback. There are also general
actions associated with all types of objects, such as moving them to an absolute position or
relative to other objects (e.g., above, below, left, right). Conditions allow checking the state
of an object before executing actions. They may be simple (containing only one check) or
composite (combining more than one simple condition with and/or). Usually, rules do not have
conditions, since end users seldom use them in their definitions [13, 14], but we included them
in the language to increase its expressiveness.</p>
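      <p>A minimal sketch of the simple/composite condition distinction described above, assuming object state is a flat dictionary (an assumption of this illustration; the paper does not describe VMXR's state model):</p>
      <preformat>
```python
# Hypothetical sketch of simple vs. composite conditions, assuming object
# state is a flat dict; the paper does not describe VMXR's state model.
def check_simple(state, obj, prop, expected):
    """A simple condition: a single check on one object's property."""
    return state.get(obj, {}).get(prop) == expected

def check_composite(state, checks, mode="and"):
    """A composite condition: several simple checks combined in and/or."""
    results = [check_simple(state, *c) for c in checks]
    return all(results) if mode == "and" else any(results)

state = {"lamp": {"power": "on"}, "screen": {"playing": False}}
# A rule's actions would run only when its condition holds:
print(check_composite(state, [("lamp", "power", "on"),
                              ("screen", "playing", True)], mode="or"))  # True
```
      </preformat>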
      <p>VMXR distinguishes four roles. The developer (Dev) defines the set of objects that a given
environment template includes and implements the high-level actions they support. The
environment configurator (EC) instead creates the floor plan of the virtual shop and inserts
fixed elements of furniture. This supports the requirement of different producers to keep
a consistent design in all their shops (either real or virtual) for brand identity. Instead, an
experience configurator (XC) creates his/her own version of a given shop for inserting elements
related to a product campaign. This means inserting the target products and the dedicated
furniture elements for showcasing them, together with media elements providing additional
information or enhancing the experience. In addition, an XC defines the interactions in the
environment through the ECA rules. Finally, the user is the final consumer of the experience.</p>
      <p>Such a role organization corresponds to the meta-design approach defined in [ 15, 16, 17].
Devs are at the meta-design level, including professional developers using Turing-Machine
equivalent languages for defining the core aspects of the system. EC and XC are at the design
level: they are domain experts who participate in the software design using languages with less
expressive power. Finally, users are at the use level, since they experience the interaction and
are basically not aware of the shared process between developers and domain experts leading
to the creation of the experience.</p>
      <p>The workflow supported by VMXR for creating XR showcases consists of four steps:
1. High-level object implementation, including the identification and implementation
of the objects required by EC and XC.
2. Configuration of the environment. The EC defines the room floor plan through a
web interface and inserts fixed furnishing elements. VMXR allows inserting only objects
defined by Devs at Step 1, which are, in general, useful for creating more than one virtual
shop. The result of this step is a reusable template.
3. Experience configuration. The XC selects a template from those available after Step 2
and inserts additional objects from those defined in Step 1. Moreover, s/he creates the
rules to specify the interaction. The result of this step is an executable interactive XR
environment.
4. Interaction with the environment. The final user (e.g. a customer) puts on the visor
and visits the configured environment by interacting with the virtual objects present.</p>
    </sec>
    <sec id="sec-5">
      <title>4. VMXR Walkthrough</title>
      <p>This section shows the current state of the PoC development. We exploit web-based technologies
for supporting the authoring, while the final experience is a native build of a Unity3D-based
application targeting different XR platforms.</p>
      <p>The workflow starts with a Dev uploading the 3D models of the assets that may be used
for creating the XR experiences, including pieces of furniture, screens, counter displays etc.,
together with the virtual models of the products. Besides the assets for managing the shop
appearance, the Dev implements the code (i.e., Unity scripts) required for executing the
high-level actions available to the XC. Figure 1 shows the interface Devs use for
uploading 3D models into the set of available objects. Through that interface, they can set the
properties and assign the required components (Unity scripts) they developed for implementing
the high-level actions.</p>
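      <p>The actual Unity scripts are not listed in the paper; the following hypothetical Python sketch only illustrates the idea of pairing object categories with the high-level actions implemented for them, plus the general actions shared by all categories (all names are illustrative assumptions):</p>
      <preformat>
```python
# Hypothetical registry pairing object categories with the high-level
# actions developers implement for them; general actions (such as the
# relative-move actions mentioned in Section 3) apply to every category.
GENERAL_ACTIONS = {"move-to", "move-above", "move-below", "move-left", "move-right"}

CATEGORY_ACTIONS = {
    "light": {"turn-on", "turn-off"},
    "screen": {"show-image", "play-video", "stop-video"},
}

def available_actions(category):
    """Actions an experience configurator can attach to a given category."""
    return GENERAL_ACTIONS | CATEGORY_ACTIONS.get(category, set())

print(sorted(available_actions("light")))
```
      </preformat>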
      <p>After creating the required assets, the EC furnishes the different shop categories the brand
manages. S/he starts by defining the relevant floor plans, using the interface in Figure 2, which
allows the creation of walls, doors and windows. After that, s/he can position the different fixed
furniture pieces, i.e. those that do not change across different product campaigns. The interface
for this task is similar to the one used by an XC for positioning products or removable furniture
elements (see Figure 3, top part). For both users, the set of available objects appears in the
main menu of the authoring environment, grouped into the categories defined by Devs.</p>
      <p>Figure 3 shows the interface XCs use for creating the interaction rules. It consists of a
constrained editor, allowing the selection of the different parts that constitute an action or an
event in the rule language, namely the subject, the verb and an optional object. The tool
suggests the available options for each part to avoid syntax errors. In addition, it keeps the rule
consistent by resetting all the fields that depend on another one when the latter changes.
For instance, in the event triggering the rule depicted in Figure 3 (the when part), the Player is the
subject. The fields containing the verb (is close to) and the object (Object TV-trigger) depend on
the subject selection. So, if the XC changes the subject, the interface resets the verb and the
object.</p>
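      <p>The reset behaviour can be sketched as follows (a hypothetical illustration only; VMXR's web editor is not shown in the paper, and the field names are assumptions):</p>
      <preformat>
```python
# Hypothetical sketch of the editor's reset behaviour: when a field changes,
# every field depending on it is cleared, transitively.
DEPENDS_ON = {"verb": "subject", "object": "verb"}  # child field -> parent field

def set_field(fields, name, value):
    fields[name] = value
    for child, parent in DEPENDS_ON.items():
        if parent == name:              # clear dependants of the changed field
            set_field(fields, child, None)
    return fields

event = {"subject": "Player", "verb": "is close to", "object": "TV-trigger"}
print(set_field(event, "subject", "Lamp"))
# {'subject': 'Lamp', 'verb': None, 'object': None}
```
      </preformat>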
      <p>When the experience design is finished, the final user can interact with the immersive
environment, experiencing the effects of the ECA rules.</p>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusion and Future Work</title>
      <p>In this paper, we presented the current state of VMXR, a tool allowing end users to create XR
showcase experiences. The tool applies meta-design to provide end users with abstractions they
can manipulate, involving professional developers in defining the available virtual object types
and the associated high-level actions. Rules expressed in constrained natural language support
the definition of the interactive behaviour of the environment. In future work, besides completing the
technical implementation of the environment, we aim to evaluate and improve it by deploying
the solution in a real-world scenario. Further research directions include enhancing the natural
language processing techniques used in the tool, by integrating the support provided by the
latest large generative models.</p>
    </sec>
    <sec id="sec-7">
      <title>References [9]–[17]</title>
      <p>[9] Mozilla, Mozilla Spoke, 2022. URL: https://hubs.mozilla.com/spoke. [Online; accessed 17-February-2022].</p>
      <p>[10] F. Garzotto, M. Gelsomini, V. Matarazzo, N. Messina, D. Occhiuto, XOOM: An end-user development tool for web-based wearable immersive virtual tours, in: J. Cabot, R. De Virgilio, R. Torlone (Eds.), Web Engineering, Springer International Publishing, Cham, 2017, pp. 507–519.</p>
      <p>[11] T. Zarraonandia, P. Díaz, I. Aedo, A. Montero, Immersive end user development for virtual reality, in: Proceedings of the International Working Conference on Advanced Visual Interfaces, AVI '16, Association for Computing Machinery, New York, NY, USA, 2016, pp. 346–347. URL: https://doi.org/10.1145/2909132.2926067. doi:10.1145/2909132.2926067.</p>
      <p>[12] E. Yigitbas, J. Klauke, S. Gottschalk, G. Engels, VREUD - an end-user development tool to simplify the creation of interactive VR scenes, in: 2021 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), 2021, pp. 1–10. doi:10.1109/VL/HCC51201.2021.9576372.</p>
      <p>[13] B. Ur, M. Pak Yong Ho, S. Brawner, J. Lee, S. Mennicken, N. Picard, D. Schulze, M. L. Littman, Trigger-action programming in the wild: An analysis of 200,000 IFTTT recipes, in: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16, Association for Computing Machinery, New York, NY, USA, 2016, pp. 3227–3231. URL: https://doi.org/10.1145/2858036.2858556. doi:10.1145/2858036.2858556.</p>
      <p>[14] B. Ur, E. McManus, M. Pak Yong Ho, M. L. Littman, Practical trigger-action programming in the smart home, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '14, Association for Computing Machinery, New York, NY, USA, 2014, pp. 803–812. URL: https://doi.org/10.1145/2556288.2557420. doi:10.1145/2556288.2557420.</p>
      <p>[15] C. Ardito, P. Buono, M. F. Costabile, R. Lanzilotti, A. Piccinno, L. Zhu, On the transferability of a meta-design model supporting end-user development, Universal Access in the Information Society 14 (2015) 169–186. URL: https://doi.org/10.1007/s10209-013-0339-7. doi:10.1007/s10209-013-0339-7.</p>
      <p>[16] G. Desolda, C. Ardito, M. Matera, Empowering end users to customize their smart environments: Model, composition paradigms, and domain-specific tools, ACM Trans. Comput.-Hum. Interact. 24 (2017). URL: https://doi.org/10.1145/3057859. doi:10.1145/3057859.</p>
      <p>[17] G. Fischer, E. Giaccardi, Y. Ye, A. G. Sutcliffe, N. Mehandjiev, Meta-design: A manifesto for end-user development, Commun. ACM 47 (2004) 33–37. URL: https://doi.org/10.1145/1015864.1015884. doi:10.1145/1015864.1015884.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Vv.Aa.</surname>
          </string-name>
          ,
          <source>Virtual Reality Market Research Report, Technical Report, Fortune Business Insights</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Google</surname>
          </string-name>
          , Google poly,
          <year>2017</year>
          . URL: https://poly.google.com/.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F.</given-names>
            <surname>Games</surname>
          </string-name>
          , Fungus,
          <year>2020</year>
          . URL: https://fungusgames.com.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Ottifox</surname>
          </string-name>
          , Ottifox,
          <year>2018</year>
          . URL: https://ottifox.com/index.html.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>I.</given-names>
            <surname>Blečić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Cuccu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Fanni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Frau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Macis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Saiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Senis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Spano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tola</surname>
          </string-name>
          ,
          <article-title>First-person cinematographic videogames: Game model, authoring environment, and potential for creating affection for places</article-title>
          ,
          <source>J. Comput. Cult. Herit</source>
          .
          <volume>14</volume>
          (
          <year>2021</year>
          ). URL: https://doi.org/10.1145/3446977. doi:10.1145/3446977.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Fanni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Senis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Murru</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Romoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Spano</surname>
          </string-name>
          , I. Blečić,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Trunfio</surname>
          </string-name>
          ,
          <article-title>Pacpac: End user development of immersive point and click games</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Malizia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Valtolina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mørch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Serrano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Stratton</surname>
          </string-name>
          (Eds.),
          <source>End-User Development</source>
          , Springer International Publishing, Cham,
          <year>2019</year>
          , pp.
          <fpage>225</fpage>
          -
          <lpage>229</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Oney</surname>
          </string-name>
          ,
          <article-title>FlowMatic: An Immersive Authoring Tool for Creating Interactive Scenes in Virtual Reality</article-title>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2020</year>
          , p.
          <fpage>342</fpage>
          -
          <lpage>353</lpage>
          . URL: https://doi.org/10.1145/3379337.3415824.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V.</given-names>
            <surname>Artizzu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cherchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Frau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Macis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pitzalis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Blečić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Spano</surname>
          </string-name>
          ,
          <article-title>Defining configurable virtual reality templates for end users</article-title>
          ,
          <source>Proc. ACM Hum.-Comput. Interact.</source>
          <volume>6</volume>
          (
          <year>2022</year>
          ). URL: https://doi.org/10.1145/3534517. doi:10.1145/3534517.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>