<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Claudio Vairo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Callieri</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fabio Carrara</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paolo Cignoni</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Di Benedetto</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claudio Gennaro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniela Giorgi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianpaolo Palma</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lucia Vadicamo</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ISTI-CNR</institution>
          ,
          <addr-line>via G. Moruzzi, 1, Pisa, 56100</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>The Social and hUman ceNtered XR (SUN) project is focused on developing eXtended Reality (XR) solutions that integrate the physical and virtual world in a way that is convincing from a human and social perspective. In this paper, we outline the limitations that the SUN project aims to overcome, including the lack of scalable and cost-effective solutions for developing XR applications, limited solutions for mixing the virtual and physical environment, and barriers related to resource limitations of end-user devices. We also propose solutions to these limitations, including using artificial intelligence, computer vision, and sensor analysis to incrementally learn the visual and physical properties of real objects and generate convincing digital twins in the virtual environment. Additionally, the SUN project aims to provide wearable sensors and haptic interfaces to enhance natural interaction with the virtual environment and advanced solutions for user interaction. Finally, we describe three real-life scenarios in which we aim to demonstrate the proposed solutions.</p>
      </abstract>
      <kwd-group>
        <kwd>extended reality</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>digital twins</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>1 and</p>
      <p>Giuseppe Amato1</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <sec id="sec-2-1">
        <title>The Social and hUman ceNtered XR (SUN) project aims at</title>
        <p>investigating and developing extended reality (XR)
solutions that integrate the physical and the virtual world in
a convincing way, from a human and social perspective.</p>
      </sec>
      <sec id="sec-2-2">
        <title>The virtual world will be a means to augment the physical world with new opportunities for social and human interaction. Figure 1 summarizes the current limitations and SUN’s solutions to achieve this.</title>
        <p>More in detail, the relevant limitations that we will
address are:
• Lack of solutions to develop scalable and
costefective new XR applications : building an XR
application for a new physical environment
requires creating from scratch its accurate digital
twin, with both visual and physical/functional
properties, with significant eforts and high costs.
• Lack of convincing solutions for mixing the virtual
and physical environment : generally, augmented
reality is limited to the visual alignment of
physical and virtual components. However, a
convincing XR requires physical and virtual elements
nized by CINI, May 29–31, 2023, Pisa, Italy
∗Corresponding author.
vironments
– The system will learn from the physical
world, during its use. Like a baby, who
learns how to recognize and use objects
with the experience, SUN will use artificial
intelligence, computer vision, and sensor
analysis to incrementally learn, during its
usage, the visual and physical properties
of real objects, and to generate, recognize,
and use digital twins in the virtual
environment.
– Learned objects and environments will be
incrementally added to the SUN Digital</p>
      </sec>
      <sec id="sec-2-3">
        <title>Twins Library of available items and reused in various XR applications.</title>
        <p>• Seamless and convincing interaction between the and thermal cues under fingertips) for XR
physical and the virtual world scenarios, such as interaction with 3D
vir– Objects and environments of the physical tual objects.</p>
        <p>world have a digital twin in the virtual – We will provide advanced solutions for
world, with the same physical and visual user interaction, including gaze-based and
properties. gesture-based interaction.
– Manipulating an object in the physical • Artificial intelligence-based solutions to address
world will have the same plausible efect in current computing, memory, and network
limitathe virtual world. For instance, a physical tions of wearable devices
wrench could be used to unscrew a virtual
bolt. Complementarily, a virtual wrench – AI will be used to generate plausible,
highmanipulated in the virtual world, will pro- quality renderings also for coarse-grained,
vide the user with a feeling consistent with low-resolution, or incomplete 3D models,
a physical wrench. leveraging on solutions similar to those
used for deep-fake generation.
• Wearable sensors and haptic interfaces for
convincing and natural interaction with the virtual The solutions will be demonstrated in three real-life
environment scenarios. The rest of the paper is organized as follows:
– Wearable haptics will enhance physical in- Section 2 describes the scenarios. Section 3 presents the
teraction with virtual menus and contex- technical parts of the project. Finally, Section 4 concludes
tual information displayed to the user. In the paper.
addition, solutions for body contextual
information (such as skin stretch-vibration
on body parts) will lightly guide the user in 2. Scenarios
tasks such as remote training, home
physical exercises, and interaction with other The project will assess the developed technologies in
persons. these three real-life scenarios:
– Novel wearable haptic interfaces will al- 1. Extended reality for rehabilitation.</p>
        <p>low multisensory feedback (vibrotactile
2. Extended reality for safety and social interaction improvement, enhancing accuracy as well as portability
at work. in the home environment. Starting from a visual
rep3. Extended reality for people with serious mobility resentation of an exercise, delivered through an avatar
and verbal communication diseases. of the therapist performing it in the correct way, the
patient will be called to repeat it, or in sync with the</p>
      <p>XR experiments are going to be good examples to help better understand the social impact of those technologies, even considering that COVID-19 boosted a positive attitude toward them [<xref ref-type="bibr" rid="ref1">1</xref>]. The SUN applications for safety at work, for fragile people in rehabilitation, and for social cues in communication can act as the driving forces behind developing the empathic stimuli and facilitating XR empathic experiences [<xref ref-type="bibr" rid="ref2">2</xref>].</p>
      <sec id="sec-2-4-1">
        <title>2.1. Extended reality for rehabilitation</title>
        <p>The aim of the proposed rehabilitation scenario is to motivate the patient to exercise efficiently by providing feedback in relation to performance, while the physiotherapy exercises are performed in any setting, e.g. clinical, at home, indoors, outdoors, or even in public areas. This scenario is based on the use of a digital tool employing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) to assist and monitor individual motor learning in the context of a supervised personalized remote exercise rehabilitation program for the management of injuries/pathologies. The digital tool will also enable supervised personal training. Important success factors for home-based programs are to include patients with a favorable prognosis and to increase adherence to the program. The main goal of this scenario is to improve compliance to the physiotherapy protocol, increase patient engagement, monitor physiological conditions, and provide immediate feedback to the patient by classifying an exercise in real-time as correctly or incorrectly executed, according to physiotherapists’ set criteria. It is already known that visuo-physical interaction results in better task performance than visual interaction alone in tracking performance. Wearable haptics (such as EMG) can be used to enhance physical interaction by monitoring body contextual information. Visual AI algorithms can enhance the understanding of the alignment of the body, capturing the outline and giving real-time personalized feedback to improve the quality of the movement.</p>
        <p>Rehabilitation adherence and fidelity are especially challenging, alongside motor learning with personalized feedback. Physiotherapy after orthopedic surgery and other kinds of upper limb motor impairment, such as “frozen shoulder” or severe hand arthritis, is crucial for complete rehabilitation, but is often repetitive, tedious, and time-consuming. Actually, in order to achieve motor recovery, a very long physiotherapy treatment, sometimes more than 50 sessions, is needed. Therefore, there is a necessity to suitably address the evidence-practice gap and translate digital innovations into practice while enabling their improvement, enhancing accuracy as well as portability in the home environment.</p>
        <p>Starting from a visual representation of an exercise, delivered through an avatar of the therapist performing it in the correct way, the patient will be called to repeat it, or to perform it in sync with the avatar. Wearables and wireless sensors will be used to measure and deliver contextual information, including multiple inertial measurement units (dynamic motion, body and limb orientation kinetics &amp; kinematics, maximum respective joint angle achieved), surface electromyography (neuromuscular potentials such as activation and fatigue), and a smartwatch (heart rate, oxygen saturation, arterial blood pressure). Besides an accelerometer, a flex sensor could be employed instead of a gyroscope for specific angle measurement during limb bending, and a force-sensitive sensor to measure limb pressure, with the potential of classifying a variety of rehabilitation exercises over a sensor network. Data from a camera and the sensors will be collected in real-time and sent to a GPU server (either cloud-based, exploiting the potential of cloud-based AI infrastructure, or edge-based, exploiting edge AI) [<xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>]. An AI algorithm will process the sensory input, responding to the level of muscle activation by reducing or enhancing the exercise difficulty-intensity, and to the movement quality by providing personalized feedback for movement correctness. Clinicians’ feedback can also be added at any point during the automated feedback to enhance accuracy, secure safety, and further improve the algorithm. Usability testing is required prior to the application of such complex innovative digital tools. The goal of this scenario is also to increase user engagement for diverse populations. This can be achieved through personalization for diverse populations and rehabilitation needs and abilities. An iterative-convergent-mixed-methods design will be used to assess and mitigate all serious usability issues and to optimize user experience and adoption. This design will provide transparency and guidance for the development of the tool and its implementation into clinical pathways. The methodological framework is defined by the ISO 9241-210-2019 (Human-centred design for interactive systems) constructs: effectiveness, efficiency, satisfaction, and accessibility.</p>
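        <p>As a purely illustrative sketch of the kind of processing such an AI algorithm could perform (windowing synchronized IMU and EMG streams, extracting simple per-window features, and classifying each repetition as correctly or incorrectly executed), consider the following Python fragment. The sampling rate, feature set, channel counts, and classifier are assumptions for illustration, not the SUN design.</p>
        <preformat>
# Hypothetical sketch: window wearable-sensor streams and classify each
# exercise repetition as correctly/incorrectly executed. Library calls are
# standard NumPy/scikit-learn; the sensor layout and features are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 100            # assumed sampling rate (Hz) for IMU and EMG channels
WINDOW = 2 * FS     # 2-second analysis window per repetition segment

def window_features(imu, emg):
    """imu: (WINDOW, 6) accel+gyro samples; emg: (WINDOW, 4) EMG envelopes."""
    feats = [
        imu.mean(axis=0), imu.std(axis=0),          # posture and motion spread
        np.abs(np.diff(imu, axis=0)).mean(axis=0),  # jerkiness proxy
        emg.mean(axis=0), emg.max(axis=0),          # activation level and peaks
    ]
    return np.concatenate(feats)

# Train on therapist-labelled repetitions (placeholder data for the sketch).
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.normal(size=(WINDOW, 6)),
                              rng.random(size=(WINDOW, 4))) for _ in range(200)])
y = rng.integers(0, 2, size=200)                    # 1 = correctly executed
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At run time, each new window yields an immediate verdict plus a confidence
# that could drive the difficulty-adaptation logic described above.
new_rep = window_features(rng.normal(size=(WINDOW, 6)), rng.random(size=(WINDOW, 4)))
print(clf.predict([new_rep]), clf.predict_proba([new_rep]))
        </preformat>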
      </sec>
      <sec id="sec-2-4-2">
        <title>2.2. Extended reality for safety and social interaction at work</title>
        <p>AR and VR can create more immersive experiences for people at work in order to make their job safer, by providing new ways to be aware of possible hazards and to receive more effective, engaging, and entertaining training on safety procedures. The aim is to alert workers and prevent serious accidents provoked by the co-occurrence of different causes, which can be avoided by conscious collaboration.</p>
        <p>With VR/AR headsets, workers will be able, for example, to better understand difficult-to-grasp concepts or topics such as protocols and procedures for safety and security. This offers a great opportunity for the industry to make use of a variety of different options relying on extended reality experiences, to provide each worker with immersive access to relevant information, and to encourage the adoption of safer behaviours. Extended reality, encompassing virtual, augmented, and mixed reality, brings immersive experiences to workers no matter where they are. An XR experience can “come alive” when a worker puts on a VR headset and walks, for instance, on a shop floor. Workers can experience hard-to-conceptualise current-day topics through extended reality, such as moving around in potentially dangerous environments. XR technology will improve effectiveness and user engagement. Through an adequate design of XR contents, the immersive experience seamlessly integrates several principles not present when using non-immersive contents, such as better contextualization and real-time decision-making feedback, in a safe yet realistic environment for practice. Holo-Light’s Engineering Space AR 3S software, which allows workers to visualise and work with design files, including CAD, in an XR environment, will be optimised and upgraded to encourage social interaction among workers and to provide them with alerts and indications about the possibility of incidents. Correct and incorrect behaviour will be simulated. Overall, it will enable designing, prototyping, quality assurance, and overall technical industrial training and education in emerging manufacturing technology scenarios, with the opportunity not just to increase the speed of acquiring new tasks or to improve industrial processes, but also to increase the safety of workers dealing with complex machines. Features that will be exploited in the scenario include:</p>
        <p>• High Quality 3D Content in XR: securely import and view all design files for AM in XR; visualize and manipulate even data-intensive 3D content.</p>
        <p>• Merge the Real and Virtual: place CAD/engineering designs in the real world; manipulate projected assemblies directly at their envisaged destination.</p>
        <p>• Work on Complex Holograms: fully manipulate designs and assemblies; place, rotate, adjust, resize, slice and dice; save the work.</p>
        <p>• Share the Experience: collaborate with colleagues, partners, or customers in AR; set up local and global meetings in an XR environment.</p>
      </sec>
      <sec id="sec-2-4-3">
        <title>2.3. Extended reality for people with serious mobility and verbal communication diseases</title>
        <p>Some people with various motor disabilities, or after strokes, have huge difficulties in communicating with each other and even in addressing their vital needs. The project will take on the challenge of finding a dedicated communication pathway for those people, introducing the possibility to interact with some specific social cues and to transform them into clear communication or actions. The proposal is to count on residual abilities, giving them a meaning in terms of communication, supported when needed by avatars in a virtual environment. The objective is to realise low-cost, non-invasive tools based, for instance, on existing biofeedback, face expression, and other inputs. The challenge will be to create a solution allowing the design of person-specific extended reality interaction, even at home, in the workplace, or at school, and to develop novel multi-user virtual communication and collaboration solutions that provide coherent multisensory experiences and optimally convey relevant social cues. Successful implementation of the pilot for this scenario will allow people with communication and motor disabilities to interact with friends and relatives, meeting realistically in an extended (physical + virtual) environment. The person with communication and motor disabilities will be represented by an avatar which will interact with other people staying in a physical augmented environment. Simultaneously, the person with communication and motor disabilities will have the illusion of also being in the same physical environment with the other people, and will be able to interact with the new interfaces offered by SUN.</p>
        <p>SUN, in fact, will develop a new generation of non-invasive bidirectional body-machine interfaces (BBMIs), which will allow a smooth and very effective interaction of people with different types of sensory-motor disabilities with virtual reality and avatars. The BBMI will be able to decode motor information by using arrays of printed electrodes to record muscular activities, together with inertial sensors. The position of the sensors will be customised according to the specific motor abilities of the subjects. For example, it could be possible to use shoulder and elbow movements, and it could also be possible to record muscular activities from auricular muscles. All these sensors will be used to record information during different upper limb and hand function movements. We will use a procedure to identify the signals most useful for the different tasks and implement a dedicated decoding algorithm. This approach has already allowed the development of a simplified and yet very effective way to control flying drones, and will probably provide very interesting results in this case as well. A machine learning approach will be implemented for the decoding of the different tasks, relying on human-machine interfaces and, in general, on decoding information from electrophysiological and biomechanical signals. SUN will also allow giving sensory feedback to the user about the movement and the interaction with the avatar in the virtual environment, using transcutaneous electrical stimulation or small actuators for vibration.</p>
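        <p>A minimal sketch of this decoding step, under strong simplifying assumptions (per-trial RMS amplitudes as features, four hypothetical commands, and a linear classifier), could look as follows; the channel-selection stage mirrors the procedure for identifying the signals most useful per subject.</p>
        <preformat>
# Hypothetical BBMI-style decoder sketch: select the most informative
# EMG/inertial channels, then classify the intended command. Channel counts,
# features, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_channels = 300, 16                 # printed-electrode array + IMUs
X = rng.normal(size=(n_trials, n_channels))    # per-trial RMS amplitudes (placeholder)
y = rng.integers(0, 4, size=n_trials)          # four hypothetical commands

# Step 1: keep the channels carrying the most task information, mirroring the
# per-subject sensor-selection procedure described above.
# Step 2: train a lightweight decoder suitable for real-time use.
decoder = make_pipeline(
    SelectKBest(mutual_info_classif, k=6),
    LinearDiscriminantAnalysis(),
).fit(X, y)

print(decoder.predict(X[:1]))                  # decoded command for a new trial
        </preformat>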
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Technical Challenges</title>
      <sec id="sec-3-1">
        <title>In this section, we will describe in more detail the technical parts of the projects and the challenges that our institute of the CNR (ISTI) wants to address.</title>
        <sec id="sec-3-1-1">
          <title>3.1. AI to learn objects for the virtual world</title>
          <p>
            Various techniques exist that allow recognizing
physical objects and environments and linking/registering
them with the corresponding virtual ones to enable
taskspecific interactions between users and augmented
environments. Existing techniques are based on image
classification, object detection, and semantic
segmentation via deep neural networks [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ]. However, identifying,
mapping, linking, and registering every single (instance
of) physical object in the virtual context is often not a
completely automatic process and requires manual effort.
          </p>
          <p>In the context of the project, we will advance the
current state of the art by developing AI-based solutions
that allow the incremental and automatic discovery of
environments, objects, and object usage patterns, while
the system is being used. This will permit the
incremental creation of libraries of reusable visually recognizable
virtual objects and environments, the cost reduction in
developing extended reality applications, and the
scalability of extended reality to large scenarios.</p>
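          <p>The following toy sketch illustrates the incremental-library logic (match a detected object’s embedding against known twins, otherwise register a new entry). The embedding is a random placeholder standing in for a real detector/encoder, and the similarity threshold is an arbitrary assumption.</p>
          <preformat>
# Toy sketch of an incremental digital-twin library: each detected object is
# embedded and matched against known twins; unmatched objects are added as
# new entries. Names and the threshold are illustrative assumptions.
import numpy as np

class TwinLibrary:
    def __init__(self, sim_threshold=0.85):
        self.names = []
        self.embeddings = []
        self.sim_threshold = sim_threshold

    def match_or_add(self, emb, label_hint):
        """Return the id of the recognized object, registering it if new."""
        emb = emb / np.linalg.norm(emb)
        if self.embeddings:
            sims = np.stack(self.embeddings) @ emb   # cosine similarities
            best = int(np.argmax(sims))
            if sims[best] >= self.sim_threshold:
                return self.names[best]              # known twin: reuse it
        new_id = f"{label_hint}_{len(self.names)}"   # unknown: grow the library
        self.names.append(new_id)
        self.embeddings.append(emb)
        return new_id

# Usage with a placeholder encoder standing in for detection + embedding.
rng = np.random.default_rng(0)
lib = TwinLibrary()
for _ in range(5):
    print(lib.match_or_add(rng.normal(size=128), "chair"))
          </preformat>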
        </sec>
        <sec id="sec-3-1-2">
          <title>3.2. Acquisition of physical properties for</title>
        </sec>
        <sec id="sec-3-1-3">
          <title>3D objects</title>
          <p>
            Currently, most of the efforts to create digital twin representations focus on the recreation of the shape and appearance of an existing object. However, the mechanical characteristics of a digital representation drive its actual physical behavior, and they are necessary for interactive environments to maintain the perceived plausibility of the representation. Obtaining such information practically and intuitively is a technical challenge [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]. Current solutions require laboratory settings or cumbersome devices [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ]. We aim to face this challenge to create digital replicas of objects that support truly realistic interaction in XR environments, recreating convincing experiences.
          </p>
          <p>The main objective is to provide innovative techniques to acquire and estimate the mechanical properties (like mass distribution or stiffness/softness) of 3D objects. This will be done by integrating wearable fingertip-sensing devices with marker-based video tracking during controlled object manipulation. AI data interpretation, followed by multimodal data fusion, will allow learning the inertia tensor and other physical properties that best match the observed sensor readings. An important aspect is the design and 3D printing of objects with controlled mechanical properties, for the acquisition of datasets with ground-truth data to use during the learning phase.</p>
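          <p>To make the estimation idea concrete, the toy sketch below fits two scalar parameters (mass and stiffness) of a deliberately simple contact model so that its predictions match noisy “recorded” readings. The forward model and the data are stand-ins for the fingertip-sensing and tracking streams, not the project’s actual pipeline.</p>
          <preformat>
# Toy property-estimation sketch: fit physical parameters so a forward model
# of the manipulation reproduces the recorded sensor readings.
import numpy as np
from scipy.optimize import minimize

def forward_model(params, poke_depths):
    """Predicted fingertip force for given poke depths (linear spring model)."""
    mass, stiffness = params
    return stiffness * poke_depths + mass * 9.81   # contact force + static load

# "Measured" readings generated from hidden ground-truth parameters.
rng = np.random.default_rng(0)
depths = np.linspace(0.0, 0.02, 50)                # metres
measured = forward_model((0.3, 450.0), depths) + rng.normal(0, 0.05, 50)

# Minimize the discrepancy between prediction and measurement.
loss = lambda p: np.mean((forward_model(p, depths) - measured) ** 2)
fit = minimize(loss, x0=np.array([1.0, 100.0]), method="Nelder-Mead")
print("estimated mass, stiffness:", fit.x)
          </preformat>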
        </sec>
        <sec id="sec-3-1-4">
          <title>3.3. AI in support of 3D completion and convincing XR presentation</title>
          <p>XR environments are populated by 3D models representing real-world objects (assets). Assets coming from stock sources (commercial or open repositories) are often used to cope with the general situation, but for specific scenarios it is often necessary to produce new 3D models on the fly. This task will address this phase of asset creation.</p>
          <p>
            A first direction to pursue is to help in producing more complete and convincing 3D models by adding AI processing to the different steps of the acquisition pipeline. To this aim, photogrammetric reconstruction seems the most promising one: AI processing can be effectively used to remove from photos those problems that reduce the accuracy of alignment and 3D data generation, as well as unwanted areas [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ]. Furthermore, AI can be used to fill gaps in the geometry and texture, as in [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ], possibly having as data source the input photographic campaign, to ensure a more coherent filling.
          </p>
          <p>On a different level, for specific classes of objects, we can exploit the a priori knowledge of the domain to pre-train networks on examples. This will ensure better support in the 3D model generation and a more coherent and realistic completion of the models’ geometries and textures.</p>
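          <p>As a small illustration of the photo clean-up step mentioned above, the sketch below blanks out regions flagged by a semantic-segmentation mask before the images are fed to photogrammetric alignment. The mask here is synthetic; in practice it would come from a segmentation network, and how masked pixels are excluded depends on the reconstruction tool.</p>
          <preformat>
# Toy sketch of pre-alignment photo clean-up: zero out pixels that an AI
# segmentation flagged as unwanted (sky, people, moving objects), so the
# photogrammetric pipeline can ignore them.
import numpy as np

def mask_unwanted(image, unwanted):
    """Blank pixels flagged as unwanted (bool mask, True = remove)."""
    cleaned = image.copy()
    cleaned[unwanted] = 0        # masked areas can be skipped downstream
    return cleaned

rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[:100, :] = True             # e.g. a sky band flagged by the network
print(mask_unwanted(photo, mask).shape)
          </preformat>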
        </sec>
        <sec id="sec-3-1-5">
          <title>3.4. AI-assisted 3D acquisition of unknown environments with semantic priors</title>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>Automatic environment discovery is a challenging task</title>
        <p>
          that has been revamped in the deep learning era, and
automatic model instantiation has similarly received
academic and industrial attention [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. A unified solution
for parallel exploration and semantic-driven user
presentation is not yet available.
        </p>
          <p>We will use reinforcement learning for automatic exploration (e.g., by unmanned vehicles) of unknown and possibly large objects and environments, while minimizing the scenario discovery time. To keep acquisition feasible in a time-constrained context and to allow an immersive experience from the very start of the scenario exploration, our solution will also deploy pre-built 3D model instances for objects whose complete reconstruction would need information acquired from different points of view, or explore novel methods (e.g., conditional denoising diffusion probabilistic models) to auto-generate the underlying representation.</p>
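          <p>The sketch below conveys the exploration-as-reinforcement-learning idea on a toy grid world: the agent earns a reward only for cells it has not visited yet, so the learned policy implicitly minimizes discovery time. Tabular Q-learning and all hyperparameters here are illustrative stand-ins for whatever method the project will actually adopt.</p>
          <preformat>
# Toy exploration-as-RL sketch: reward novelty so the policy learns to cover
# the environment quickly. Tabular Q-learning on a small grid.
import numpy as np

SIZE, EPISODES = 6, 300
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]        # up/down/left/right
rng = np.random.default_rng(0)
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))

for _ in range(EPISODES):
    pos, seen = (0, 0), {(0, 0)}
    for _ in range(80):                             # step budget per episode
        # epsilon-greedy action choice (explore with probability 0.2)
        a = int(np.argmax(Q[pos])) if rng.random() > 0.2 else int(rng.integers(4))
        nxt = (min(max(pos[0] + ACTIONS[a][0], 0), SIZE - 1),
               min(max(pos[1] + ACTIONS[a][1], 0), SIZE - 1))
        reward = 1.0 if nxt not in seen else -0.05  # novelty bonus vs. time cost
        seen.add(nxt)
        Q[pos][a] += 0.1 * (reward + 0.95 * Q[nxt].max() - Q[pos][a])
        pos = nxt

print("greedy value at the start cell:", Q[0, 0].max())
          </preformat>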
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions and Future Work</title>
      <p>The Social and Human-centered XR project aims to
overcome the current limitations in developing XR solutions
that integrate the physical and virtual world from a
human and social perspective. The limitations addressed
in the SUN project include lack of solutions to develop
scalable and cost-effective new XR applications, lack of
convincing solutions for mixing the virtual and
physical environment, lack of plausible and convincing
human interaction interfaces in XR, and barriers due to
resource limitations of end-user devices. The SUN project
proposes scalable solutions to obtain plausible and
convincing virtual copies of physical objects and
environments, seamless and convincing interaction between the
physical and virtual world, wearable sensors and haptic
interfaces for convincing and natural interaction with
the virtual environment, and artificial intelligence-based
solutions to address current computing, memory, and
network limitations of wearable devices.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <sec id="sec-5-1">
        <title>This work has received financial support from the Hori</title>
        <p>zon Europe Research &amp; Innovation Programme under
Grant agreement N. 101092612 (Social and hUman
ceNtered XR - SUN project) and by PNRR - M4C2 -
Investimento 1.3, Partenariato Esteso PE00000013 - ”FAIR -
Future Artificial Intelligence Research” - Spoke 1
”Humancentered AI”, funded by the European Union under the
NextGeneration EU programme.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Kerdvibulvech</surname>
          </string-name>
          ,
          <string-name>
            <surname>L. Chen,</surname>
          </string-name>
          <article-title>The power of augmented reality and artificial intelligence during the covid-19 outbreak</article-title>
          , in: HCI
          <source>International 2020- Late Breaking Papers: Multimodality and Intelligence: 22nd HCI International Conference, HCII 2020</source>
          , Copenhagen, Denmark,
          <source>July 19-24</source>
          ,
          <year>2020</year>
          , Proceedings 22, Springer,
          <year>2020</year>
          , pp.
          <fpage>467</fpage>
          -
          <lpage>476</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>V.</given-names>
            <surname>Paananen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Kiarostami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.-H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Braud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hosio</surname>
          </string-name>
          ,
          <article-title>From digital media to empathic reality: A systematic review of empathy research in extended reality environments</article-title>
          ,
          <source>arXiv preprint arXiv:2203.01375</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ciampi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gennaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Carrara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Falchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Vairo</surname>
          </string-name>
          ,
          <string-name>
            <surname>G.</surname>
          </string-name>
          <article-title>Amato, Multi-camera vehicle counting using edge-ai</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>207</volume>
          (
          <year>2022</year>
          )
          <fpage>117929</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Amato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Carrara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Falchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gennaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Vairo</surname>
          </string-name>
          ,
          <article-title>Facial-based intrusion detection system with deep learning in embedded devices</article-title>
          ,
          <source>in: Proceedings of the 2018 International Conference on Sensors, Signal and Image Processing</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>64</fpage>
          -
          <lpage>68</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Nicholson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Milford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sünderhauf</surname>
          </string-name>
          , Quadricslam:
          <article-title>Dual quadrics from object detections as landmarks in object-oriented slam</article-title>
          ,
          <source>IEEE Robotics and Automation Letters</source>
          <volume>4</volume>
          (
          <year>2018</year>
          )
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>N. F.</given-names>
            <surname>Duarte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chatzilygeroudis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Santos-Victor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Billard</surname>
          </string-name>
          ,
          <article-title>From human action understanding to robot action execution: how the physical properties of handled objects modulate non-verbal cues</article-title>
          ,
          <source>in: 2020 Joint IEEE 10th International Conference on Development and Learning</source>
          and
          <string-name>
            <surname>Epigenetic Robotics (ICDL-EpiRob</surname>
            <given-names>)</given-names>
          </string-name>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Marechal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Balland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lindenroth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Petrou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kontovounisios</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Bello</surname>
          </string-name>
          ,
          <article-title>Toward a common framework and database of materials for soft robotics</article-title>
          ,
          <source>Soft robotics 8</source>
          (
          <year>2021</year>
          )
          <fpage>284</fpage>
          -
          <lpage>297</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Murtiyoso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Grussenmeyer</surname>
          </string-name>
          ,
          <article-title>Automatic point cloud noise masking in close range photogrammetry for buildings using ai-based semantic labelling</article-title>
          ,
          <source>The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-2/W1-2022</source>
          (
          <year>2022</year>
          )
          <fpage>389</fpage>
          -
          <lpage>393</lpage>
          . doi:
          <volume>10</volume>
          .5194/ isprs- archives
          <string-name>
            <surname>-</surname>
          </string-name>
          XLVI- 2
          <string-name>
            <surname>- W1-</surname>
          </string-name>
          2022- 389-
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Maggiordomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cignoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tarini</surname>
          </string-name>
          ,
          <article-title>Texture inpainting for photogrammetric models</article-title>
          , Computer Graphics Forum in press (
          <year>2023</year>
          ). URL: https:// onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14735. doi:
          <volume>10</volume>
          .1111/cgf.14735.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Avetisyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Khanova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Choy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dash</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nießner</surname>
          </string-name>
          , Scenecad:
          <article-title>Predicting object alignments and layouts in rgb-d scans</article-title>
          ,
          <source>in: Computer VisionECCV</source>
          <year>2020</year>
          : 16th European Conference, Glasgow, UK,
          <year>August</year>
          23-
          <issue>28</issue>
          ,
          <year>2020</year>
          , Proceedings,
          <source>Part XXII 16</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>596</fpage>
          -
          <lpage>612</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>