<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.1016/j.jmsy.2022.06.015</article-id>
      <title-group>
        <article-title>Towards Spatial Conceptual Modeling for Robotic Digital Twins Based on URDF</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniel Borcard</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hans-Georg Fill</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Fribourg</institution>
          ,
          <addr-line>Boulevard de Pérolles 90, 1700 Fribourg</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>14320</volume>
      <fpage>372</fpage>
      <lpage>389</lpage>
      <abstract>
        <p>In this paper we report on the design and implementation of a metamodel for the Unified Robot Description Format (URDF) on the MM-AR metamodeling platform. The goal of this approach is to provide the foundation for the integration of digital twins of robots as specified in URDF with conceptual enterprise models. In particular, the MM-AR platform and its underlying meta2 model permit the inherent integration of principles from spatial computing for enabling the representation and manipulation of conceptual models in spatial environments. Ultimately, this can be used to link behavioral specifications of robots on various abstraction levels with the structure and physical properties of robots as given by URDF.</p>
      </abstract>
      <kwd-group>
        <kwd>Robotics</kwd>
        <kwd>Spatial Computing</kwd>
        <kwd>Conceptual Modeling</kwd>
        <kwd>Metamodeling</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Robotics is a complex technical domain combining multiple disciplines, such as mechanical engineering,
computer science and specific application domains. On a general level, we can distinguish in robotics
between the structure of robots and the behavior that they exhibit. The structure of robots can today be
well described thanks to the Unified Robot Description Format (URDF), which is supported by various
design tools such as SolidWorks [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], MATLAB [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] or Fusion 360 [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. URDF is an XML-based format that
specifies how a robot is designed in terms of physical geometry and movements. It is mainly used as a
basis for simulation purposes [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ].
      </p>
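      <p>To make the format concrete, the following sketch builds a hypothetical minimal URDF description (two links connected by one revolute joint; all names and values are invented for illustration) and inspects it with Python's standard xml.etree parser:</p>

```python
import xml.etree.ElementTree as ET

# A hypothetical minimal URDF document: two rigid links connected by one
# revolute joint. Element and attribute names follow the URDF XML format.
MINIMAL_URDF = """
<robot name="two_link_arm">
  <link name="base_link"/>
  <link name="upper_arm"/>
  <joint name="shoulder" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10.0" velocity="1.0"/>
  </joint>
</robot>
"""

def summarize(urdf_xml: str) -> dict:
    """Return the link names and joint descriptions found in a URDF string."""
    robot = ET.fromstring(urdf_xml)
    links = [l.get("name") for l in robot.findall("link")]
    joints = {
        j.get("name"): {
            "type": j.get("type"),
            "parent": j.find("parent").get("link"),
            "child": j.find("child").get("link"),
        }
        for j in robot.findall("joint")
    }
    return {"name": robot.get("name"), "links": links, "joints": joints}
```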
      <p>
        As explored in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], the behavior of robots has been described through models using languages such
as BPMN [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ], Petri nets [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] or behavior trees [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Because robots operate in spatial environments,
representing and editing their behavior poses some challenges. In particular, modeling languages that
have been previously used for this purpose lack inherent concepts for three-dimensional space. With the
proposal of spatial conceptual modeling, it has been demonstrated how principles from spatial computing
can be integrated on the meta level of modeling languages [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This makes it possible to consider
spatial concepts inherently in any modeling language, including the corresponding tool support.
      </p>
      <p>Therefore, we propose a combination of URDF with spatial conceptual modeling. The goal is to
enable easier model-based design of robotic behaviors, while at the same time allowing users to view
or edit the robot state using URDF concepts. To achieve this, we first need to represent URDF using
spatial conceptual modeling concepts. For this purpose we rely on a meta2 model for spatial conceptual
modeling and a corresponding software implementation in the form of the MM-AR metamodeling
platform. This allows us to provide a first demonstration of the interactions between behavioral models
and 3D robotic architectures in a pick-and-place scenario.</p>
      <p>With this article, we investigate the following research questions (RQs):
• RQ1: What added value does a URDF metamodel provide over using URDF files directly?
• RQ2: Which URDF concepts and relationships are necessary and sufficient for the URDF
metamodel to faithfully represent fixed industrial robots and support digital twins?
• RQ3: How can spatial conceptual modeling be embedded in the URDF metamodel to align robot
structure with process models?</p>
      <p>RQ1 evaluates the benefits and trade-offs of a metamodel compared to a static, file-based approach.
RQ2 delineates the core elements and associations the metamodel must capture. RQ3 focuses on
integrating spatial conceptual modeling so that structural views are correctly represented in the 3D
space.</p>
      <p>The remainder of the paper is structured as follows: Section 2 introduces foundations required for
describing the approach and related work. Section 3 presents how the URDF metamodel was derived
and how it has been implemented in the MM-AR web tool. Section 4 then explores how the approach
was applied to the pick-and-place scenario. Finally, in Section 5, we discuss the approach and how it
relates to the field of Digital Twins. In Section 6 we conclude the findings and propose further research
directions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Foundations &amp; Related Work</title>
      <p>In this section we will present the necessary foundations and related work in robotics and URDF,
knowledge-based digital twins, as well as spatial conceptual modeling.</p>
      <sec id="sec-2-1">
        <title>2.1. Robotics &amp; URDF</title>
        <p>
          Robots come in various shapes and sizes, ranging from small mobile units such as robot vacuum cleaners [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] to large
6-Degree-of-Freedom (6-DoF) robotic arms in automotive construction [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Stationary robots, as opposed to mobile robots, are autonomous systems that are fixed to a
specific point in space and cannot move by themselves [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. This includes, for example, industrial 6-DoF arms (illustrated in Figure 5), delta
robots, or SCARA robots. A stationary robot is composed of a set of interconnected links and joints that
support the movement of an end effector through space [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. A link is a rigid body connected to one
or more other links by joints. A joint is a mechanical part that connects two rigid bodies [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Joints
can be of different types depending on the desired motion and constraints, such as prismatic, revolute,
or planar [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. The end effector is a tool that allows the robot to perform its tasks. End effectors can
be, for example, a gripper (e.g., pneumatic gripper), a processing tool (e.g., cutting tool), or a sensor
(e.g., proximity sensor). By combining links, joints and end effectors, one can create various robot types
with different capabilities and reach. This variation in shape, size and capabilities can lead to difficulties
when it comes to the digital representation and manipulation of the robot.
        </p>
        <p>
          For this purpose, the Unified Robot Description Format (URDF) was developed to standardize the
representation of robots and their properties [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. URDF is an XML-based format used in various robotic
technologies such as the Robot Operating System (ROS) or robotic 3D modeling tools (e.g., Fusion 360).
URDF allows the modeler to represent a robot composed of rigid links and joints. URDF links have
attributes such as inertia, visual and collision that can be used for simulation purposes. Similarly, URDF
joints are defined by their axis (i.e., rotation, translation or planar depending on the desired type of
joint), limit and dynamics. Combining these aspects, URDF can be used for kinematics calculation,
simulation and prediction in specific tools such as Gazebo [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] or RViz [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ]. Gazebo is a popular open
source robotic simulation tool developed by Nathan Koenig and Andrew Howard, and now maintained
by Open Robotics and the broader community. Similarly, RViz is a 3D visualization tool for ROS
that allows the visualization of sensor data, robot models, and coordinate frames in real time. It was
originally developed at the Willow Garage robotic lab and is now maintained by the ROS and Open
Robotics community. URDF, however, lacks semantic information about the robot. The Semantic Robot
Description Format (SRDF) aims to address this by adding new concepts to URDF such as groups of
links and joints and end efectors. Xacro is an XML macro language used to construct and generate
large URDF files by using for example macros, properties, math expressions and conditional blocks [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ].
        </p>
        <p>URDF itself also covers mobile platforms (e.g., bases with planar/floating joints and wheel assemblies).
A complete mobile-robot digital twin additionally needs environment/localization frames (map/odom)
and motion constraints, which are, for now, outside of the scope of this work.</p>
        <p>URDF further allows representation of the physical aspects of a robot, but not its behavior. There is a
gap between low-level definitions of robots and high-level tasks. Robot programming itself is a complex
undertaking that requires deep knowledge of the field. The use of conceptual modeling in this field
aims to abstract from the technical aspects of a specific domain to make it more accessible for domain
specialists to define robotic behavior.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Knowledge Based Digital Twin in Robotics</title>
        <p>
          According to Singh et al. [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] a digital twin is a dynamic and self-evolving digital model that aims to
represent a physical system. These digital models may pertain to four categories [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]:
1. Geometric models capture shape, size, internal structure, spatial frames, and poses. In robotics,
this includes kinematic chains, link/joint topology, and collision/visual meshes.
2. Physical models represent material and dynamic properties (e.g., mass and inertia,
stiffness/compliance, thermal or electrical parameters). These enable virtual commissioning and quality
control by allowing the twin to predict working envelopes, tolerance propagation, and energy
consumption.
3. Behavioral models encode operational logic (i.e., sequential, concurrent, periodic, or
event-driven), including control policies, task plans, and exception handling. They support monitoring,
anomaly detection, and online parameter adaptation so that the twin’s control/motion traces
match observations from the shop floor.
4. Rule models formalize domain knowledge and life-cycle constraints (e.g., safety zones,
capability and payload limits, maintenance policies). By codifying expert experience, they expose
evolutionary patterns and enable intelligent decisions such as reconfiguration, scheduling, and
compliance checks.
        </p>
        <p>The digital twin concept is widely used in robotics [19]. For the scope of this paper, we introduce
the notion of knowledge-based digital twins, in which the models of a digital twin are grounded in a
common meta2 model and can therefore be integrated with other conceptual and enterprise models.
This integration enables the re-use of knowledge from other model types in digital-twin scenarios. For
example, a user may define a pick-and-place task in some enterprise modeling scenarios for a 6-DoF
robotic arm. Then, the knowledge-based digital twin can exploit properties of the geometric model to
verify that the specified poses are physically reachable; the physical model can assess whether joint
efforts remain within predefined constraints; the behavioral model can analyze the path required to
achieve the goal; and, finally, the rule model can check that the pick-and-place action does not violate
system rules, such as entering a safety area. We therefore leverage the capabilities of URDF to analyze
and simulate robotic tasks. By representing URDF models using spatial conceptual modeling, we can
reference and re-use their concepts and visualizations in other conceptual models such as business
process models or other types of enterprise models.</p>
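        <p>The kinds of checks described above can be sketched as follows. The reach radius and safety zone below are invented placeholder values, not properties of any particular robot, and the function names are ours:</p>

```python
import math

# Illustrative-only numbers: an assumed reach radius and keep-out zone.
REACH_M = 0.44                                # assumed maximum reach, meters
SAFETY_ZONE = ((0.30, 0.40), (-0.10, 0.10))   # (x_min, x_max), (y_min, y_max)

def pose_reachable(x: float, y: float, z: float) -> bool:
    """Geometric-model check: is the target inside the reach sphere?"""
    return math.sqrt(x * x + y * y + z * z) <= REACH_M

def violates_safety_zone(x: float, y: float) -> bool:
    """Rule-model check: does the target fall inside the keep-out zone?"""
    (x0, x1), (y0, y1) = SAFETY_ZONE
    return x0 <= x <= x1 and y0 <= y <= y1

def validate_pick_pose(x: float, y: float, z: float) -> list[str]:
    """Collect the violations the twin would report for a pick target."""
    issues = []
    if not pose_reachable(x, y, z):
        issues.append("unreachable")
    if violates_safety_zone(x, y):
        issues.append("safety-zone")
    return issues
```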
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Spatial Conceptual Modeling</title>
        <p>
          Spatial conceptual modeling integrates principles from spatial computing in conceptual modeling [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
The goal is to anchor knowledge of conceptual models in the physical world. This may then be used
for augmented reality (AR) or virtual reality (VR) applications, as well as applications that contain
references to spatial entities in general, e.g., in architecture, cultural heritage, or digital twins. Such an
integration enhances the interaction with models that refer to the physical space by inherently providing
concepts for three-dimensional representations, such as location and transform properties. In addition,
spatial conceptual modeling can be used to represent further data requirements in spatial computing
such as the representation of fields, objects, networks, and events. A first implementation for proving
the feasibility of spatial conceptual modeling is provided in the form of the MM-AR metamodeling
platform1. It has been used for implementing and evaluating a tool for the Augmented Reality Workflow
Modeling Language (ARWFML), which makes it possible to create AR applications in a no-code fashion
and integrate them with other types of modeling languages [20, 21, 22].
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Related Work</title>
        <p>Generating URDF files has been extensively studied, with most contributions focusing on the
transformation pipeline between modeling tools and URDF artefacts [23, 24]. Schneider et al. present a
DSL-based development process for specifying grasping problems, reusing established standards, such
as URDF, to define robotic structure and dynamics [25]. While this strategy facilitates the integration of
emerging standards, it lacks a structured modeling approach to systematically exploit URDF concepts.</p>
        <p>Ringe et al. propose a metamodeling approach, MetaMorph, to classify robot morphologies by
introducing a dedicated metamodeling language [26]. Thereby, they can represent a broad spectrum of robot
morphologies, i.e., the physical structure, shape, and configuration of robots. However, despite its breadth,
this approach does not benefit from the extensive ecosystem of tools that support URDF. MetaMorph
therefore requires an explicit mapping back to the URDF format and does not solve the challenge of
URDF generation.</p>
        <p>With the approach we propose in the following we therefore aim to combine the strengths of the
URDF ecosystem with the flexibility of conceptual enterprise modeling. Thus, we want to bridge the gap
between robotic morphology definition and conceptual modeling, e.g., for defining robotic behavior.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Metamodel for URDF</title>
      <p>For integrating URDF with the approach of spatial conceptual modeling we use the meta2 model of
the MM-AR approach and the corresponding metamodeling platform. In this way, we will show how
URDF can be realized as a meta model using these foundations. We will first briefly present the MM-AR
meta2 model and then advance to the derivation of the URDF meta model including the expected user
interface.</p>
      <sec id="sec-3-1">
        <title>3.1. MM-AR Meta2 Model and Platform</title>
        <p>The MM-AR meta2 model and the according implementation allow a flexible realization of modeling
languages based on the approach of spatial conceptual modeling. The main concepts of the meta2 model
of MM-AR that will be required for describing the URDF meta model are as follows:
• SceneType denotes a template for a container of Classes with some common purpose, similar to
diagram types in 2D modeling environments.
• Class denotes a template for objects that have a common structure in the form of Attributes.
• Attribute denotes a template for a property, which can be attached to SceneTypes or Classes and
whose values are constrained through an AttributeTable or an AttributeType.
• AttributeTable denotes a template for a tabular structure of properties in the form of two or more</p>
        <p>Attributes.
• AttributeType denotes a template for constraining the value of an Attribute, e.g., a number, a mass,
or a reference to another object. For values other than references to objects, regular expressions
are used to constrain the range of values.</p>
        <p>For integrating fundamental properties of spatial conceptual modeling, all Classes have pre-defined
attributes for locations in different coordinate systems, transform attributes, as well as an attribute for
the three-dimensional graphical representation. As a consequence, every object that is instantiated
based on these concepts can be inherently represented in a 3D modeling environment. However, as
will be shown in the next section for robotics, further attributes for spatial information may be needed
depending on the application domain.</p>
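        <p>As a rough illustration of these concepts, the following Python dataclasses sketch the meta2 structure. The MM-AR platform's internal API is not published in this paper, so all class and field names here are assumptions, including the pre-defined spatial attributes:</p>

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the MM-AR meta2 concepts named in the text;
# names and fields are assumptions, not the platform's real API.

@dataclass
class Attribute:
    name: str
    attribute_type: str   # e.g. "number", "mass", "reference"
    pattern: str = ""     # regex constraining non-reference values

@dataclass
class AttributeTable:
    name: str
    columns: list[Attribute]  # two or more Attributes, per the text

@dataclass
class Class_:
    """Template for objects; spatial attributes are pre-defined."""
    name: str
    attributes: list[Attribute] = field(default_factory=list)
    tables: list[AttributeTable] = field(default_factory=list)

    def __post_init__(self):
        # Every Class inherently carries location, transform and a 3D shape,
        # so each instantiated object can live in a 3D modeling environment.
        self.attributes = [
            Attribute("location", "vector3"),
            Attribute("transform", "matrix4"),
            Attribute("representation3d", "reference"),
        ] + self.attributes

@dataclass
class SceneType:
    """Container template for Classes, akin to a 2D diagram type."""
    name: str
    classes: list[Class_] = field(default_factory=list)
```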
      </sec>
      <sec id="sec-3-2">
        <title>3.2. URDF Metamodel using MM-AR</title>
        <p>URDF describes a robot as a set of links connected by joints. These links and joints have associated
properties. In our URDF metamodel, links and joints are modeled as Classes. Both have attributes that
capture the corresponding URDF properties.</p>
        <p>Every concept in URDF has its counterpart in the URDF metamodel, which builds on the MM-AR
meta2 model. For example, Origin is an AttributeTable with the Attributes x, y, z, roll, pitch, and yaw.
Using a table serves a dual purpose: it aggregates recurrent concepts and enables the inclusion of a
temporal perspective in the model, with rows representing different states over time.</p>
        <p>Figure 1 depicts the URDF metamodel of the link concept. The Link Class has three optional
AttributeTables: Inertial, Collision, and Visual. The URDF name property is mapped to the default Name
Attribute available for every object in the MM-AR meta2 model; the same mapping applies to joint
names.</p>
        <p>The Inertial AttributeTable comprises two further AttributeTables: Origin and Inertia and one scalar
attribute, Mass. Inertia represents the link’s inertia tensor, Origin specifies the pose of the inertia frame,
and Mass records the link mass.</p>
        <p>The Visual AttributeTable comprises three AttributeTables: Origin, Geometry, and Material. Origin
is as defined above. Geometry defines the shape of the visual object and can be a sphere, cylinder, or
box, or a reference to an external 3D mesh (e.g., STL or glTF). Material may be a color defined by an
RGBA value or a reference to an external specification (e.g., a .material file). Some 3D mesh files
include embedded materials and textures; in such cases, a separate material attribute is unnecessary.
The Collision AttributeTable contains the Geometry and Origin tables.</p>
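        <p>A minimal sketch of how a URDF link element could be mapped to the Link Class and its AttributeTables, using Python's standard XML parser; the helper names and the sample link are ours, not part of MM-AR:</p>

```python
import xml.etree.ElementTree as ET

def parse_origin(el):
    """Map a URDF origin element to the Origin AttributeTable (x, y, z, roll, pitch, yaw)."""
    if el is None:  # URDF defaults to an identity pose when origin is absent
        return {"x": 0.0, "y": 0.0, "z": 0.0, "roll": 0.0, "pitch": 0.0, "yaw": 0.0}
    x, y, z = (float(v) for v in el.get("xyz", "0 0 0").split())
    roll, pitch, yaw = (float(v) for v in el.get("rpy", "0 0 0").split())
    return {"x": x, "y": y, "z": z, "roll": roll, "pitch": pitch, "yaw": yaw}

def parse_link(link_el):
    """Map a URDF link element to the Link Class with its optional AttributeTables."""
    result = {"name": link_el.get("name")}
    inertial = link_el.find("inertial")
    if inertial is not None:
        result["inertial"] = {
            "origin": parse_origin(inertial.find("origin")),
            "mass": float(inertial.find("mass").get("value")),
            "inertia": dict(inertial.find("inertia").attrib),
        }
    visual = link_el.find("visual")
    if visual is not None:
        geom = visual.find("geometry")
        result["visual"] = {
            "origin": parse_origin(visual.find("origin")),
            "geometry": geom[0].tag,  # sphere, cylinder, box or mesh
        }
    return result

# An invented sample link for illustration.
LINK_XML = """
<link name="upper_arm">
  <inertial>
    <origin xyz="0 0 0.1" rpy="0 0 0"/>
    <mass value="1.5"/>
    <inertia ixx="0.01" ixy="0" ixz="0" iyy="0.01" iyz="0" izz="0.01"/>
  </inertial>
  <visual>
    <geometry><cylinder radius="0.02" length="0.2"/></geometry>
  </visual>
</link>
"""
```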
        <p>Figure 2 represents the URDF metamodel of the joint concept. A joint is a Class, meaning that
it is a visual instance. It has seven AttributeTables: Origin, Axis, Calibration, Dynamics, Limit, Mimic
and Safety Controller. Parent and Child are references to links. The joint type attribute defines the type
of joint that links two parts. This type can be one of the following: revolute, continuous, prismatic, fixed,
floating, planar. Joint units are derived from the type of the selected joint outside of the metamodel. For
example, revolute joints use radians whereas prismatic joints use meters. The Axis AttributeTable
reuses the x, y, z concept of the Origin AttributeTable. The axis defines the joint axis in the joint
frame. The x, y, z components represent a vector and must be normalized. The Dynamics AttributeTable
defines the Damping and Friction attributes. They define the physical properties of the joint that can
be leveraged for simulation. The units are derived from the joint type: for example, damping is given in
newton-seconds per meter for prismatic joints and in newton-meter-seconds per radian for revolute
joints, and friction in newtons for prismatic joints and in newton-meters for revolute joints. The Limit
AttributeTable defines the Lower, Upper, Effort and Velocity Attributes of a joint; it
captures the limits of the joint. The Calibration AttributeTable defines the Rising and Falling attributes.
They specify the joint position at which a calibration sensor or encoder index transitions high (“rising”)
or low (“falling”) during homing. The Safety Controller defines soft lower limit, soft upper limit, k position
and k velocity. The soft limits are not physical limits but limits that can be defined by the user. K
position scales the maximum permitted speed based on how close the joint is to the soft limits, while K
velocity sets an overall speed ceiling, ensuring smooth deceleration near limits and preventing excessive
velocities throughout the motion range. The Mimic AttributeTable contains the reference to a joint to
mimic, the multiplier and the offset.
</p>
        <p>Figures 1 and 2 (metamodel diagrams): a SceneType contains Link : Class [1..n] and
Joint : RelationClass [0..n]. Link has Name : String [1..1] and the optional AttributeTables
Inertial, Collision, and Visual [0..1]. Joint has Name : String [1..1], Type, Parent and
Child [1..1], Origin [0..1], and the optional AttributeTables Axis, Calibration, Dynamics,
Limit, Mimic, and Safety Controller [0..1].</p>
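        <p>Two of the derivations described above, axis normalization and type-dependent units, can be sketched as follows. The unit mapping is our reading of the URDF conventions stated in the text, not part of the metamodel itself:</p>

```python
import math

# Assumed mapping: URDF leaves units implicit, derived from the joint type.
UNITS_BY_TYPE = {
    "revolute":   {"position": "rad", "damping": "N*m*s/rad", "friction": "N*m"},
    "continuous": {"position": "rad", "damping": "N*m*s/rad", "friction": "N*m"},
    "prismatic":  {"position": "m",   "damping": "N*s/m",     "friction": "N"},
}

def normalize_axis(x, y, z):
    """URDF requires the joint axis vector to be normalized."""
    n = math.sqrt(x * x + y * y + z * z)
    if n == 0:
        raise ValueError("joint axis must be non-zero")
    return (x / n, y / n, z / n)
```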
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Implementation of the URDF Metamodel on the MM-AR Platform</title>
        <p>The URDF metamodel has been implemented on the MM-AR platform. Although all data structures
including the access via a REST-API could be realized, the front-end of the platform still lacks some UI
components required for editing instances of the URDF metamodel. Figure 3 therefore shows a mock-up
of the Visual attribute of a link, as the currently used user interface does not yet support the concept of
a table within a table, as detailed in Section 3.2. The Origin can have multiple rows representing multiple
states (i.e., positions) of the visual model.</p>
        <p>When loading URDF files into the MM-AR platform, the process as depicted in Figure 4 is initiated.
The URDF file is first imported into the MM-AR web client tool and converted to an instance of a
SceneType with the URDF metamodel. In this way, the concepts of the URDF file can be directly accessed
from other scene instances on the platform. The 3D view can be an AR scene or a VR environment.
Users then interact with the controller models that allow or restrict the interaction with the 3D model.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Example: Fixed-Robot Pick-and-Place</title>
      <p>In this section, we illustrate the concepts introduced above through a pick-and-place scenario.
Pick-and-place is among the most common tasks in robotics [27] and can be generalized across a variety of
contexts. In our example, a box is picked from an initial location and placed, after a prescribed rotation,
at a target location.</p>
      <p>Figure 6 shows the URDF model obtained by converting the original URDF file so that it matches
the URDF metamodel defined in the previous sections. The model represents the 6-DoF robotic arm
Dobot Magician E6. It consists of a base and six links connected by six joints. The BPMN model in
Figure 7 captures the pick-and-place process and is linked to the URDF model in Figure 6. The selected
activity can be visualized in augmented reality (AR), as illustrated in Figure 5. The 3D view can be
aligned spatially to the robot’s physical position, yielding a ghost-like digital overlay of the real robot.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>The URDF schema could be successfully implemented on the MM-AR metamodeling platform. Although
the current user interface implementation of the MM-AR web client does not yet fully support all
necessary UI components - e.g., the above-mentioned tables-in-tables representations - we can assess
that the MM-AR meta2 model provides all concepts required for the implementation of URDF. Further,
although MM-AR natively supports the three-dimensional representation of objects - currently in glTF
format only - and URDF is format-agnostic with regard to 3D formats, the sample robot files used so far
were only available in STL format. Therefore, the next step will be to either conduct conversions into
glTF format if possible, or to extend the implementation of MM-AR to also support the STL format.</p>
      <p>Figure 7: A BPMN implementation of the pick-and-place task using the URDF model in the
MM-AR platform. (Footnote 2: https://www.dobot-robots.com/products/education/magician-e6.html)</p>
      <sec id="sec-5-1">
        <p>Robotic tasks inherently unfold in 3D environments, which are often difficult to capture faithfully
with traditional conceptual modeling notations.</p>
        <p>In our approach, the 3D environment serves as a first-class representation of the digital twin’s current
state. Conventional conceptual modeling approaches typically constrain digital twin representations
to two-dimensional diagrams or isolated 3D visualizations with limited semantics. By integrating a
URDF model, we enhance the ability to represent digital twins in 3D while enriching them with explicit
conceptual semantics, e.g. for specifying behavior in relation to the URDF components.</p>
        <p>URDF benefits from a mature ecosystem of algorithms and simulation tools. Our metamodel
implementation enables these capabilities to be integrated into the MM-AR platform, thereby supporting
simulation and analysis scenarios. For example, inverse kinematics can be applied to assess reachability
between two poses, and collision checking can verify that no interferences occur for a given task
model. These extensions benefit both communities: robotics, which contributes established analyses
and simulations, and conceptual enterprise modeling, which provides rigorous domain-specific
languages and tooling. In doing so, the approach helps bridge the gap between conceptual modeling and
knowledge-based digital twins.</p>
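        <p>For intuition, reachability between two poses can be approximated in the planar two-link case, where a point is reachable exactly when its distance from the base lies between |l1 - l2| and l1 + l2. The link lengths below are assumed values for illustration, not a full inverse-kinematics solver:</p>

```python
import math

# Assumed link lengths in meters for a planar two-link arm.
L1, L2 = 0.15, 0.15

def reachable(x: float, y: float) -> bool:
    """A point is reachable iff it lies in the annulus [|L1-L2|, L1+L2]."""
    d = math.hypot(x, y)
    return abs(L1 - L2) <= d <= L1 + L2
```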
        <p>The current metamodel does not yet include proposed URDF extensions such as sensors or end
efectors. Nevertheless, its structure permits the addition of non-robotic artifacts and attributes without
compromising interoperability with other models or the integration mechanisms.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion &amp; Outlook</title>
      <p>In this paper, we proposed a URDF metamodel that integrates with other modeling languages based
on the MM-AR meta2 model. The metamodel is directly mapped from the URDF XML specification,
enabling a faithful representation of robotic architectures composed of links and joints. Each element
can be associated with properties such as visual and geometry descriptions, mass and inertia, and joint
limits. Mapping URDF concepts to a metamodel allows the instantiation of models representing diverse
robotic systems; these instances can then be referenced by other (meta)models to support broader
modeling and analysis scenarios. By adopting the URDF metamodel, we reduce the effort required to
represent and edit 3D robotic tasks and leverage the existing ecosystem of URDF-based algorithms and
analyses. Designing and analyzing robotic tasks digitally could not only accelerate prototyping and
development but may also reduce costs compared to testing on physical machinery. We can answer the
research questions as follows. (RQ1) The metamodel provides first-class integration with enterprise and
process models via typed cross-model references. The inherent spatial anchoring (locations, transforms,
and 3D representations) can be consumed in AR/VR. The validation and analysis hooks (e.g., unit
constraints, joint-limit consistency, pose reachability) are defined at the modeling level, as well as the
queryability and versioning of structure and state beyond file-level XML. And crucially, it maintains
interoperability with the URDF ecosystem, while enabling conceptual links to behavior models that
raw URDF files do not support. (RQ2) For fixed industrial robots and digital-twin support, the minimal
set realized in our metamodel is:
• Link with Inertial (Origin, Inertia, Mass), Visual (Origin, Geometry, Material), and Collision (Origin,
Geometry).
• Joint (a relation between Links) with type (revolute, continuous, prismatic, fixed, floating, planar),
Origin, Axis, Limit, Dynamics, Calibration, Safety Controller, and Mimic, plus a scene-level root
link and coordinate frames.</p>
      <p>These cover kinematic topology, core dynamics, geometry/meshes, and operational bounds, which we
found sufficient to import an industrial 6-DoF arm, render it in 3D, attach time-varying poses, and run
typical analyses (e.g., reachability/collision) used in digital twins. This implementation aligns with the
elements defined by the URDF XML specification. Elements like SRDF groupings or sensors are useful
extensions but not required for faithful fixed-robot representation.</p>
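      <p>A minimal validity check over this element set could look as follows; this is a sketch, and the function name and message texts are ours:</p>

```python
import xml.etree.ElementTree as ET

VALID_JOINT_TYPES = {"revolute", "continuous", "prismatic",
                     "fixed", "floating", "planar"}

def check_minimal_urdf(urdf_xml: str) -> list[str]:
    """Flag violations of the minimal element set discussed above."""
    robot = ET.fromstring(urdf_xml)
    problems = []
    links = {l.get("name") for l in robot.findall("link")}
    if not links:
        problems.append("no links")
    for j in robot.findall("joint"):
        name = j.get("name")
        if j.get("type") not in VALID_JOINT_TYPES:
            problems.append(f"{name}: invalid joint type")
        # Every joint must reference existing parent and child links.
        for tag in ("parent", "child"):
            el = j.find(tag)
            if el is None or el.get("link") not in links:
                problems.append(f"{name}: missing or dangling {tag}")
    return problems
```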
      <p>(RQ3) We embed spatial conceptual modeling by leveraging MM-AR’s native spatial attributes so
that every URDF element has a pose and 3D representation. We store pose rows in AttributeTables (e.g.,
Origins) so the same robot structure can express different runtime states over time along a process. We
created typed relations between process activities (e.g., BPMN tasks) and URDF elements (links, joints,
target poses). This allows process models to constrain or derive spatial states (paths, keep-out zones,
approach poses) and to drive AR overlays, thereby aligning behavioral steps with the robot’s structural
and physical semantics.</p>
      <p>As next steps, we will implement a working prototype that combines the URDF (meta)model with a
process model to specify and execute robotic tasks in a 3D environment. We further plan to incorporate
concepts from SRDF and Xacro to obtain an even more complete robotic modeling metamodel.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>Financial support by the Smart Living Lab (https://www.smartlivinglab.ch/en/), funded by the
University of Fribourg, EPFL, and HEIA-FR, is gratefully acknowledged.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
  </back>
</article>