<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Posters and Demos, October</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Comparison of Dependencies for Human-Robot Interaction Types</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jeshwitha Jesus Raja</string-name>
          <email>jeshwitha.jesusraja@study.thws.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Philipp Kranz</string-name>
          <email>philipp.kranz@thws.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marian Daun</string-name>
          <email>marian.daun@thws.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Robotics, Technical University of Applied Sciences Würzburg-Schweinfurt</institution>
          ,
          <addr-line>Schweinfurt</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>2</volume>
      <fpage>8</fpage>
      <lpage>31</lpage>
      <abstract>
        <p>Typically, four types of human-robot interaction are distinguished: Coexistence, Synchronization, Cooperation, and Collaboration. They differ in the degree of interaction between human and robot and therefore have a high impact on a system's safety requirements. As a result, human-robot cooperation and collaboration systems are rarely introduced in industrial practice due to the high risks associated with them. An underlying problem is the lack of understanding of the differences and necessities of these interaction types on a conceptual level. In this paper, we investigate the differences between the four human-robot interaction types with respect to the dependencies between human and robot. Although humans and robots depend on each other in all interaction types, we show that these dependencies are very different in nature. For future use cases, this analysis helps in binding safety risks to concrete properties of the human-robot dependencies.</p>
      </abstract>
      <kwd-group>
        <kwd>Human-Robot Interaction</kwd>
        <kwd>Goal Modeling</kwd>
        <kwd>Dependencies</kwd>
        <kwd>Interaction Types</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        To enhance performance, both industry and academia have been continuously endeavoring to
develop and refine self-assessment models capable of evaluating organizations’ readiness for
Industry 4.0 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The integration of intelligent robotic systems is seen as a pivotal factor to
achieve this goal, since it enhances the efficiency, flexibility, and overall productivity of modern
manufacturing landscapes [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. For all of these reasons, in recent years, research efforts have
been directed towards modular robotics with the goal of improving Industry 4.0 readiness [3].
      </p>
      <p>The integration of intelligent robotic systems in Industry 4.0, along with the support of
human-robot interaction (HRI), is seen as a key driver to reduce production costs while simultaneously
improving product quality, particularly for producing small batch sizes or highly individualized
products [4].</p>
      <p>HRI is the field of designing, understanding, and evaluating robotic systems that involve
humans and robots interacting through communication [5]. Safety during HRI is vital, especially
the safety of the human, namely, the safe interaction between operators and collaborative robots
[6]. In current practice, the introduction of collaborative systems is often hindered by insufficient
understanding of the safety implications resulting from the interactions between human and
robot in direct collaborative scenarios [7]. Therefore, we see a rising introduction of
human-robot interaction that focuses on a clear separation between the working spaces of human and
robot. At the same time, the benefits of collaborative environments are seen as a key to further
advancing flexible manufacturing.</p>
      <p>In this paper, we contribute an investigation of the dependencies between human and robot.
To do so, we use goal models to highlight the differences in dependencies between human and
robot for different interaction types. We conclude by proposing a framework of human-robot
dependencies that relate to the different HRI types. This allows gaining a deeper understanding of
the nature of human-robot interaction and its different manifestations (i.e., the different HRI
types). Furthermore, it allows the development of systematic approaches to specify HRI and to
assess the risks attached to the different dependencies.</p>
      <p>The paper is structured as follows: Section 2 gives an overview of the related work. Section 3
investigates the dependencies that exist in different types of HRI. Section 4 then discusses and
evaluates the differences in the previously investigated dependencies. Section 5 concludes
the paper along with possible future work.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Foundations and Related Work</title>
      <sec id="sec-3-1">
        <title>2.1. Types of Human-Robot Interaction</title>
        <p>The field of HRI lacks consensus on the terminology used to classify the different types of
interactions between humans and robots [8]. We refer to Bauer et al.’s [9] categorization of
interaction types, as it covers numerous different modalities and is widely used in the field.</p>
        <p>The various interaction types are illustrated in Figure 1 and are described briefly below:
• Human-robot Coexistence is a type of interaction where human and robot work
in separated workspaces without any protective fences between them [10]. There exists no
direct contact between the human and the robot, as both work on separate components
in their respective workspaces.
• Human-robot Synchronization is the type of interaction where both human and robot
work on the same component in the same workspace, but not at the same time. They
have access to the workspace and the component one after the other. Contact between
them is unnecessary and should be avoided.
• Human-robot Cooperation is the type of interaction where both human and robot
work on different components in the same workspace and at the same time. Contact
between human and robot might occur due to working in the same workspace, but it is
unnecessary.
• Human-robot Collaboration is the type of interaction where both human and robot
work on the same component in the same workspace and at the same time. Contact
between human and robot is necessary.</p>
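        <p>Read as a decision procedure, the four definitions above turn on three attributes: shared workspace, same component, and same time. The following is a minimal sketch of that logic; the function and its encoding are ours for illustration, not part of Bauer et al.’s [9] categorization.</p>
        <preformat>
```python
def classify_interaction(same_workspace, same_component, same_time):
    """Map workspace/component/time overlap to an HRI type (illustrative)."""
    if not same_workspace:
        # Separate workspaces, no direct contact: coexistence.
        return "coexistence"
    if not same_time:
        # Same workspace and component, but alternating access.
        return "synchronization"
    if not same_component:
        # Same workspace and time, different components; contact possible
        # but unnecessary.
        return "cooperation"
    # Same workspace, same component, same time; contact is necessary.
    return "collaboration"
```
        </preformat>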
      </sec>
      <sec id="sec-3-2">
        <title>2.2. Goal Modeling</title>
        <p>Goal modeling is an established approach in requirements engineering [11] that addresses
goals from the very early stages of development [12], already allowing reasoning about concepts
and different solution alternatives [13]. Common goal modeling approaches are iStar [14, 15, 16],
GRL [17] (a lightweight standardized version of iStar), and KAOS [18, 19].</p>
        <p>Goal models provide a systematic approach to displaying stakeholder or system goals in
their context. The goals capture, at different levels of abstraction, the various objectives
the system under consideration should achieve [20]. The iStar framework is, among others,
beneficial in capturing and analyzing properties of complex systems in terms of actors, their
intentions, and their relationships [21]. Goal models emphasize the relations between the various
elements, which allows one to specify the hierarchical decomposition of goals, contributions,
and dependencies. GRL and iStar have already been shown to be useful for modeling robotic
systems in early phases (e.g., [22]). For instance, Morales et al. [23] use iStar to successfully
specify teleo-reactive robots. Their extension is later on systematically integrated into the work
by Gonçalves et al. [24].</p>
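        <p>For illustration, an iStar dependency links a depender, a dependum, and a dependee. A minimal, hypothetical encoding of this triple follows; the names are ours and this is not the API of any iStar tool.</p>
        <preformat>
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dependency:
    depender: str  # actor that depends on another actor
    dependum: str  # the goal, task, or resource the dependency concerns
    dependee: str  # actor that is depended upon

# Illustrative instance, loosely following the transport-robot setting of [25];
# the element names are invented for this sketch.
d = Dependency(depender="Transport Robot",
               dependum="Route Assignment",
               dependee="Fleet Management System")
```
        </preformat>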
        <p>In previous work, we proposed the use of a lightweight GRL-compliant iStar extension to
model collaborative cyber-physical systems [25]. Among the chosen system types to which
the extension was applied, a fleet of collaborative transport robots used in a modern factory
was chosen for evaluation. In more recent work, we extended this goal modeling extension to support
safety analyses of human-robot collaborations [26, 22] and the specification of digital twins for
human-robot collaboration systems [27].</p>
      </sec>
      <sec id="sec-3b">
        <title>3. Investigating Dependencies in Human-Robot Interaction</title>
        <p>In this paper, we investigate what dependencies exist in different HRI scenarios. We, therefore,
compare how dependencies differ for the four types of HRI: coexistence, synchronization,
cooperation, and collaboration. To do so, we introduce and then analyze a case example from
the industry domain that includes all four types in one assembly process.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.1. Case Example</title>
        <p>To visualize the differences and maintain relation to the real world, we chose a small-scale
industry case example from the manufacturing domain: the assembly of a toy truck. The
idea is that the truck is assembled in a collaborative workspace, where a human and a cobot
work together in close proximity. A projector is used to display instructions for the human
to follow, and it also displays the boundaries of the workspaces for the human, the cobot, and the
area shared by both. During the assembly of the toy truck, different human-robot
interaction scenarios are applied, which correspond to the four types of human-robot interaction:
coexistence, synchronization, cooperation, and collaboration, as shown in Figure 1. A detailed
explanation of the tasks involved in the assembly process for each interaction type is provided
in Subsections 3.2, 3.3, 3.4, and 3.5.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.2. Human-Robot Coexistence</title>
        <p>While the human prepares four axle holders by inserting two screws in each holder, the robot
assembles the base of the truck by inserting the cabin, the load carrier, and the chassis into the
mounting bracket. Figure 2 shows the goal model for this interaction type. There exist no
dependencies between the actors during the execution of tasks beyond the normal ones, as there is
no communication between them.</p>
      </sec>
      <sec id="sec-3-5">
        <title>3.3. Human-Robot Cooperation</title>
        <p>[Goal model figure for human-robot cooperation, showing tasks such as ‘Pick and place 8 screws’, ‘Pick and place 4 axle holders’, ‘Prepare axle holder’, and ‘Assemble 4 axle holders’, connected by AND-refinements.]</p>
        <p>Certain tasks of the cobot are dependent on tasks of the human and vice versa. These
dependencies are not necessarily bidirectional. The dependencies here illustrate how one actor must wait for
the other actor to complete a task on a certain component. If different components are
involved, there exists no dependency between the actors. For example, the task of the actor
human ‘Place one axle holder on the left of rear axle’ is dependent on the task of the actor cobot
‘Place rear axle in the back of the chassis’, meaning the cobot has to finish working on the part
‘rear axle’ before the human can start working on it.</p>
      </sec>
      <sec id="sec-3-6">
        <title>3.4. Human-robot Synchronization</title>
        <p>The assembly process for human-robot synchronization is similar to that of human-robot
cooperation. The difference is that, unlike cooperation, synchronization requires maintaining
spatial restrictions: the human is not allowed to enter the shared workspace while the robot is
working in it, and vice versa.</p>
        <p>After the cobot picks and places the front axle, it moves out of the shared workspace. The
human then enters the workspace and screws two axle holders to the left and right of the front
axle. The human again leaves the area so that the cobot can place the rear axle, which is then
mounted in the same way.</p>
        <p>In Figure 4, the dependencies shown in blue are the ones representing human-robot
synchronization. The dependencies follow from the fact that when the human is
working in the shared area, the cobot is not allowed to enter it, and when the cobot is
working, the human is not allowed to enter it. For example, the task ‘Place one axle
holder on the right of front axle’ is dependent on ‘Place front axle in the front of the chassis’.
Another example is that the task of the actor cobot ‘Place rear axle in the back of the chassis’
is dependent on the task of the human ‘Screw 2 screws to the right front axle holder’. The
first kind of dependency states that one task has to be completed for the second task to begin.
The other kind states that the task must be completed, and the human must exit the
shared workspace, for the robot to enter and perform its task, and vice versa. Synchronization
is more restricted than cooperation due to these dependencies.</p>
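        <p>The process sequence dependencies described in this section act as precedence constraints, so a valid execution order can be computed with a simple scheduling sketch. The task names below are taken from this section; the scheduler and the exact dependency sets are our illustration, not part of the paper’s method.</p>
        <preformat>
```python
def schedule(tasks, deps):
    """Return an execution order in which every task runs only after all
    tasks it depends on (process sequence dependencies) are done."""
    done, order = set(), []
    while len(order) != len(tasks):
        ready = [t for t in tasks
                 if t not in done and deps.get(t, set()).issubset(done)]
        if not ready:
            raise ValueError("cyclic dependencies")
        order.append(ready[0])
        done.add(ready[0])
    return order

tasks = [
    "Place front axle in the front of the chassis",      # cobot
    "Place one axle holder on the right of front axle",  # human
    "Screw 2 screws to the right front axle holder",     # human
    "Place rear axle in the back of the chassis",        # cobot
]
# Dependency sets assumed from the assembly description above.
deps = {
    "Place one axle holder on the right of front axle":
        {"Place front axle in the front of the chassis"},
    "Screw 2 screws to the right front axle holder":
        {"Place one axle holder on the right of front axle"},
    "Place rear axle in the back of the chassis":
        {"Screw 2 screws to the right front axle holder"},
}
order = schedule(tasks, deps)
```
        </preformat>
        <p>With these inputs, the scheduler reproduces the alternating human/cobot sequence described above.</p>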
      </sec>
      <sec id="sec-3-7">
        <title>3.5. Human-robot Collaboration</title>
        <p>The assembly process for human-robot collaboration differs from those for human-robot
synchronization and cooperation, as an additional task of holding the axles is performed by the robot
while the human is screwing the axle holders. After the cobot picks and places the front axle, it
holds the axle in place for the human to screw the axle holders. Once both axle holders are
fixed, the cobot moves on to the next task. The rear axle is then assembled the same way. Thus, for
collaboration, human and cobot work in the shared area, on the same component, and at
the same time.</p>
        <p>In Figure 4, the dependencies for human-robot collaboration are shown in orange.
When the robot is holding and the human is screwing, both are working
together, which means they are working on the axles of the truck at the same time in the shared
workspace. This is specified in the goal model with a bidirectional dependency between tasks
like ‘Screw 2 screws to the right rear axle holder’ of the human and ‘Hold rear axle’ of the robot.
Another example of the bidirectional dependency can be seen between the tasks ‘Screw 2 screws
to the left front axle holder’ and ‘Hold front axle’. The collaboration also involves an active
interaction where the task of the robot ‘Pick rear axle’ is dependent on the task of the human ‘Screw
2 screws to the left front axle holder’, which means the robot cannot pick the rear axle until the
human is done screwing the axle holder to the left of the front axle.</p>
      </sec>
      <sec id="sec-4b">
        <title>4. Dependencies in Human-Robot Interaction</title>
      </sec>
      <sec id="sec-3-8">
        <title>4.1. Identified Dependency Types in HRI</title>
        <p>In human-robot coexistence, there exists no dependency between the actors human and robot. For
the other types, the dependencies depend on the sequence of execution of tasks and the restrictions
of the workspace. In human-robot synchronization, tasks are performed one after the other by
both actors, meaning one actor has to wait for the other actor to finish a task. An actor also has
to wait for the other actor to leave the part and the shared workspace before performing their
own tasks.</p>
        <p>In summary, for human-robot synchronization, two different dependency types need to be
distinguished: process sequence dependencies and spatial dependencies. Process sequence
dependencies refer to dependencies resulting from the assembly process (i.e., one task has to be
finished before another task can be performed). Spatial dependencies result from the sharing of
the workspace (e.g., the human must have left the workspace before the cobot is allowed to
enter).</p>
        <p>For human-robot cooperation, process sequence dependencies arise again as seen for
human-robot synchronization, as the tasks of both actors depend on other tasks having been executed by
that time. As human and cobot do not work on the same component at the same time, it is not
necessary to leave the shared workspace. However, it is necessary to observe spatial time
dependencies, which differ from those used in human-robot synchronization. The actors are
not dependent on a cleared space; rather, they must adapt their behavior when the other actor is
present. For example, a robot must work more slowly if the human is present in a certain area.</p>
        <p>In human-robot collaboration, both actors are dependent on each other when performing a
task on a part in the shared area. Thus, we see bidirectional synchronization dependencies,
where both actors need to synchronize to work on a task at the same time: the work of
the cobot depends on the human and vice versa. In addition, we again have process sequence
dependencies and spatial time dependencies for certain tasks.</p>
        <p>In summary, five types of dependencies need to be differentiated to properly describe and
distinguish HRI:
• Normal dependency: In addition to the dependencies related to HRI, normal
dependencies still exist, which are known from other system types as well, for instance, actors or
tasks being dependent on the availability of a resource. In this paper, we do not discuss
these normal dependencies to avoid confusion with the dependencies related to the HRI
types.
• Process sequence dependency: Dependency rooted in the production process, where
one task can only be executed after the successful completion of another task.
• Spatial dependency: Dependency where a task by one actor can only be executed if the
workspace has been prepared, meaning the actor who performed the previous task has
vacated the workspace, allowing the current actor complete access to the entire area.
• Spatial time dependency: Behavioral dependency, where the tasks of two actors are
interdependent and must adhere to specific time-related conditions during their execution.
• Bidirectional synchronization dependency: Dependency where a task by one actor
is dependent on a task by another actor, and reciprocally, that task by the other actor is
dependent on the task by the first actor.</p>
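        <p>Together with the assignment of dependency types to interaction types developed in Section 3, this taxonomy can be tabulated as a simple lookup. The constant names are ours; this is a sketch for illustration, not a normative artifact of the paper.</p>
        <preformat>
```python
NORMAL = "normal"
PROCESS_SEQUENCE = "process sequence"
SPATIAL = "spatial"
SPATIAL_TIME = "spatial time"
BIDIRECTIONAL_SYNC = "bidirectional synchronization"

# Which dependency types apply to which HRI type.
HRI_DEPENDENCIES = {
    "coexistence": {NORMAL},
    "synchronization": {NORMAL, PROCESS_SEQUENCE, SPATIAL},
    "cooperation": {NORMAL, PROCESS_SEQUENCE, SPATIAL_TIME},
    "collaboration": {NORMAL, PROCESS_SEQUENCE, SPATIAL_TIME,
                      BIDIRECTIONAL_SYNC},
}
```
        </preformat>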
        <p>Figure 5 visualizes the different dependency types and their relations to the different HRI
types using a simplified GRL meta-model based on [15]. We introduce the actors human and
collaborative robot, which are central for HRI. In addition, the four newly identified dependency
types are highlighted.</p>
        <p>[Figure 5: Simplified GRL meta-model relating the dependency types ((classical) dependency, process sequence dependency, spatial dependency, spatial time dependency, and bidirectional synchronization dependency) to the HRI types (human-robot coexistence, synchronization, cooperation, and collaboration) and to GRL elements such as goals, softgoals, tasks, resources, beliefs, contributions, and XOR/IOR refinements.]</p>
      </sec>
      <sec id="sec-3-9">
        <title>4.2. Discussion</title>
        <p>A comparative analysis of the interdependencies between humans and robots reveals that
the specific interdependencies involved vary depending on the interaction type. While only
the normal dependencies need to be taken into account for coexistence, synchronization also
involves process sequence and spatial dependencies, and cooperation involves process sequence
and spatial time dependencies. In the context of collaboration, the same dependencies that apply
to cooperation are also relevant, with the additional consideration of bidirectional dependencies
(Figure 5).</p>
        <p>It is essential to consider the potential safety risks associated with each dependency between
humans and robots when performing tasks together. Therefore, it is crucial to identify these
dependencies at the outset of the assembly sequence planning process, including how they
may be influenced by the choice of interaction modality. Goal models offer a straightforward
method for identifying task-specific dependencies and the overall number of dependencies
to be considered in the assembly process. This approach allows planning experts to evaluate
the relative merits of different interaction modalities, taking into account the additional effort
required to manage dependencies at the task level within the assembly process.</p>
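        <p>One way to support such an evaluation is to tally the dependencies of a candidate modality by type, yielding a rough dependency profile to compare across modalities. The function and the data below are illustrative; the entries loosely follow the synchronization examples of Section 3.4.</p>
        <preformat>
```python
from collections import Counter

def dependency_profile(dependencies):
    """Count dependencies by type; dependencies is an iterable of
    (dependency_type, depender_task, dependee_task) triples."""
    return Counter(dep_type for dep_type, _, _ in dependencies)

# Hypothetical extract from a synchronization goal model:
sync_deps = [
    ("process sequence",
     "Place one axle holder on the right of front axle",
     "Place front axle in the front of the chassis"),
    ("spatial",
     "Place rear axle in the back of the chassis",
     "Screw 2 screws to the right front axle holder"),
]
profile = dependency_profile(sync_deps)
```
        </preformat>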
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion and Future work</title>
      <p>In future manufacturing scenarios, HRI is crucial, as evidenced by increasing research in
this area. Despite strong research interest in intensive human-robot collaborations involving
physical contact, there is reluctance in industry applications due to safety concerns and a
lack of understanding of how these interactions affect each other. This paper investigated the
differences in dependencies between humans and robots across various interaction types.</p>
      <p>In general, it is accepted that there are four major types of HRI: Coexistence, Synchronization,
Cooperation, and Collaboration, each varying in the degree of interaction and dependencies
between humans and robots. In this paper, we used goal models to investigate the dependencies
between human and robot for the different types of HRI. We conclude that there exist different
types of dependencies as well. These dependencies can be categorized into four types excluding
the normal dependency: Process Sequence, Spatial, Spatial Time, and Bidirectional Synchronization, which are
based on the sequence of task execution and workspace restrictions. Process sequence refers to
the order in which tasks are performed, spatial to the physical workspace shared, spatial time
to the timely behavior in shared workspaces, and bidirectional synchronization to mutual communication. We
then showed how these relate to the interaction types. There is no
1:1 relation between interaction types and dependency types; moreover, not every dependency
type applies to every interaction type and vice versa. We believe this has an impact on
the safety assessment of the HRI types and needs to influence structured approaches
to developing human-robot interactions.</p>
      <p>Understanding and managing these dependencies is crucial for ensuring both the safety
and successful completion of tasks in HRI. These factors play a significant role in designing
efficient workflows and enhancing the overall synergy between human workers and robotic
systems. In the future, safety and potential threats must be taken into account concerning
these types of dependencies. Therefore, an extensive analysis of the correlation between the
dependencies and safety must be conducted. In addition, the generalizability of our findings
must be assured by application to different use cases in HRI. Furthermore, HRI is also used
outside the manufacturing domain, for which transferability of the results to other domains
(e.g., service robotics) should be proven.</p>
      <p>[3] G. J. Hamlin, A. C. Sanderson, Tetrobot modular robotics: Prototype and experiments, in: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS’96, volume 2, IEEE, 1996, pp. 390–395.</p>
      <p>[4] J. Scholtz, Theory and evaluation of human robot interactions, in: Proceedings of the 36th Annual Hawaii International Conference on System Sciences, IEEE, 2003, 10 pp.</p>
      <p>[5] M. A. Goodrich, A. C. Schultz, et al., Human–robot interaction: a survey, Foundations and Trends® in Human–Computer Interaction 1 (2008) 203–275.</p>
      <p>[6] A. Zacharaki, I. Kostavelis, A. Gasteratos, I. Dokas, Safety bounds in human robot interaction: A survey, Safety Science 127 (2020) 104667.</p>
      <p>[7] R. Galin, R. Meshcheryakov, Collaborative robots: development of robotic perception system, safety issues, and integration of AI to imitate human behavior, in: Proceedings of 15th International Conference on Electromechanics and Robotics “Zavalishin’s Readings” ER (ZR) 2020, Ufa, Russia, 15–18 April 2020, Springer, 2021, pp. 175–185.</p>
      <p>[8] F. Vicentini, Terminology in safety of collaborative robotics, Robotics and Computer-Integrated Manufacturing 63 (2020) 101921.</p>
      <p>[9] W. Bauer, M. Bender, M. Braun, P. Rally, O. Scholtz, Lightweight robots in manual assembly: best to start simply, Fraunhofer-Institut für Arbeitswirtschaft und Organisation IAO, Stuttgart 1 (2016).</p>
      <p>[10] C.-P. Lam, C.-T. Chou, K.-H. Chiang, L.-C. Fu, Human-centered robot navigation—towards a harmoniously human–robot coexisting environment, IEEE Transactions on Robotics 27 (2011) 99–112. doi:10.1109/TRO.2010.2076851.</p>
      <p>[11] A. Van Lamsweerde, Goal-oriented requirements engineering: A guided tour, in: Proceedings Fifth IEEE International Symposium on Requirements Engineering, IEEE, 2001, pp. 249–262.</p>
      <p>[12] A. M. Grubb, M. Chechik, Formal reasoning for analyzing goal models that evolve over time, Requirements Engineering 26 (2021) 423–457.</p>
      <p>[13] J. Horkoff, E. Yu, Interactive goal model analysis for early requirements engineering, Requirements Engineering 21 (2016) 29–61.</p>
      <p>[14] E. S. Yu, Towards modelling and reasoning support for early-phase requirements engineering, in: Proceedings of ISRE’97: 3rd IEEE International Symposium on Requirements Engineering, IEEE, 1997, pp. 226–235.</p>
      <p>[15] D. Amyot, J. Horkoff, D. Gross, G. Mussbacher, A lightweight GRL profile for i* modeling, in: Advances in Conceptual Modeling: Challenging Perspectives, ER 2009 Workshops CoMoL, ETheCoM, FP-UML, MOST-ONISW, QoIS, RIGiM, SeCoGIS, Gramado, Brazil, November 9–12, 2009, Proceedings 28, Springer, 2009, pp. 254–264.</p>
      <p>[16] F. Dalpiaz, X. Franch, J. Horkoff, iStar 2.0 language guide, arXiv preprint arXiv:1605.07767 (2016).</p>
      <p>[17] ITU International Telecommunication Union, Recommendation ITU-T Z.151: User Requirements Notation (URN), Technical Report, 2018.</p>
      <p>[18] A. Dardenne, A. Van Lamsweerde, S. Fickas, Goal-directed requirements acquisition, Science of Computer Programming 20 (1993) 3–50.</p>
      <p>[19] A. Van Lamsweerde, Requirements engineering: From system goals to UML models to software, volume 10, Chichester, UK: John Wiley &amp; Sons, 2009.</p>
      <p>[20] A. van Lamsweerde, Goal-oriented requirements engineering: a guided tour, in: Proceedings Fifth IEEE International Symposium on Requirements Engineering, 2001, pp. 249–262. doi:10.1109/ISRE.2001.948567.</p>
      <p>[21] E. Yu, Modeling strategic relationships for process reengineering (2010).</p>
      <p>[22] M. Manjunath, J. Jesus Raja, M. Daun, Early model-based safety analysis for collaborative robotic systems, IEEE Transactions on Automation Science and Engineering (2024).</p>
      <p>[23] J. M. Morales, E. Navarro, P. Sánchez, D. Alonso, TRiStar: an i* extension for teleo-reactive systems requirements specifications, in: Proceedings of the 30th Annual ACM Symposium on Applied Computing, 2015, pp. 283–288.</p>
      <p>[24] E. Gonçalves, J. Araujo, J. Castro, PRISE: A process to support iStar extensions, Journal of Systems and Software 168 (2020) 110649.</p>
      <p>[25] M. Daun, J. Brings, L. Krajinski, V. Stenkova, T. Bandyszak, A GRL-compliant iStar extension for collaborative cyber-physical systems, Requirements Engineering 26 (2021) 325–370.</p>
      <p>[26] M. Daun, M. Manjunath, J. Jesus Raja, Safety analysis of human robot collaborations with GRL goal models, in: International Conference on Conceptual Modeling, Springer, 2023, pp. 317–333.</p>
      <p>[27] J. Jesus Raja, M. Manjunath, M. Daun, Towards a goal-oriented approach for engineering digital twins of robotic systems, in: ENASE, 2024, pp. 466–473.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hizam-Hanafiah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Soomro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. L.</given-names>
            <surname>Abdullah</surname>
          </string-name>
          ,
          <article-title>Industry 4.0 readiness models: a systematic literature review of model dimensions</article-title>
          ,
          <source>Information</source>
          <volume>11</volume>
          (
          <year>2020</year>
          )
          <fpage>364</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Soori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dastres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Arezoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. K. G.</given-names>
            <surname>Jough</surname>
          </string-name>
          ,
          <article-title>Intelligent robotic systems in industry 4.0: A review</article-title>
          ,
          <source>Journal of Advanced Manufacturing Science and Technology</source>
          (
          <year>2024</year>
          )
          <fpage>2024007</fpage>
          -
          <lpage>0</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>