<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Exploring LLMs and Semantic XAI for Industrial Robot Capabilities and Manufacturing Commonsense Knowledge</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Muhammad Raza Naqvi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Arkopaul Sarkar</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Farhad Ameri</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Linda Elmhadhbi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thierry Louge</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamed Hedi Karray</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>INSA Lyon, Université Lumiere Lyon 2, Université Claude Bernard Lyon 1, Université Jean Monnet Saint-Etienne, DISP UR4570</institution>
          ,
          <addr-line>Av. Albert Einstein, Villeurbanne, 69621, Rhône</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Laboratoire Génie de Production (LGP), Université de Technologie de Tarbes (UTTOP)</institution>
          ,
          <addr-line>47 Av. d'Azereix, 65000, Tarbes</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>School of Manufacturing Systems and Networks, Arizona State University</institution>
          ,
          <addr-line>Mesa, AZ 85212</addr-line>
          ,
          <country country="US">United States</country>
        </aff>
      </contrib-group>
      <kwd-group>
        <kwd>Advertised Capabilities</kwd>
        <kwd>Operational Capabilities</kwd>
        <kwd>Manufacturing Commonsense Knowledge</kwd>
        <kwd>Semantic Explainable AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Context and Motivations</title>
      <p>
        In the context of Industry 4.0, flexible manufacturing is especially essential for developing
future factories with enhanced planning, scheduling, and control [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The quick and effective
adaptation of the production line in response to customers’ requirements or unwanted
situations will considerably promote flexibility in manufacturing. For instance, during the
COVID-19 crisis, some companies re-purposed their production lines to join the
fight against the pandemic (e.g., perfumes to hand sanitizers and vehicles to ventilators). The
question is how the human expert who runs the factory can know whether the currently available
resources (machines, tools, equipment, and technicians) can quickly and efficiently switch to a
specific production process based on new work orders [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>Similar disruptions may arise from extreme weather, longer-term climate change, declining
international order, economic crises, changing societal priorities, cyber threats, or terrorism.
Furthermore, flexibility in the manufacturing industries is more critical than ever to tackle
ever-growing product variability, supply chain volatility, and unpredictability of customer
requirements. Product life cycles are becoming more and more dynamic; also, at the same
time, the number of product variants continues to grow. There is a need for more flexible,
trustworthy, and efficient manufacturing processes. Traditional manufacturing paradigms such
as lean manufacturing, just-in-time production, and KANBAN (https://leanmanufacturingtools.org/kanban/) systems often struggle to adapt
to unforeseen disruptions. These systems can fall into a "rupture condition" when faced with
unexpected challenges like pandemics, economic crises, or cyber threats. Rapid reconfiguration
of operations highlights the need for flexibility in manufacturing to manage product variability,
supply chain volatility, and unpredictability of customer demand.</p>
      <p>Most production planning still depends on industry standards, best practices, manufacturing
know-how, and machinists’ experiential knowledge. Despite the advent of Artificial Intelligence
(AI), industries still depend on human expertise to make process planning decisions in
unprecedented situations because of uncertain or low expectations of AI investments (trust);
personal judgment overrides AI-based decision-making (human reasoning).</p>
      <p>
        However, human experts will only trust and accept AI results if the automated
decision-making process is transparent [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ][
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. If the AI system decides that a production step or plan
matches or does not match the capabilities of a particular machine, the human expert should
understand why this decision is made (i.e., which of the machine’s capabilities leads to such
a decision). Accordingly, a trustworthy AI, commonly called explainable AI (XAI) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], should
be able to explain why a particular decision was reached in a way that human experts can
understand, for example, in the case of real-time decision-making for manufacturing operations,
why some processes were performed, what process is currently being performed, and what
processes should occur in the future. However, the explainability of the AI model standalone is
not enough; decision logic must also be transparent and grounded in interpretable knowledge
structures to ensure accurate understanding, paving the way for integrating semantics alongside
these models.
      </p>
      <p>
        Semantics is the study of meaning [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]; it corresponds to providing structured and meaningful
explanations that are not only data-driven but also align with real-world concepts. Ontologies are
crucial to providing this semantic structure and adding context to the explanations. Ontologies
are defined as an explicit formal semantic representation of knowledge through logical axioms
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        The use of semantics in explaining various types of supervised and unsupervised learning
models was presented by Seeliger et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In a narrower scope, Bianchi et al. [9] reviewed
methods of embedding knowledge graphs in ML models to achieve explainability. Regarding
the quality of the explanations, past research in behavioral psychology [10][11] showed that
three qualities make the explanations more intuitive for humans. Such explanations must
be more straightforward in that they include fewer general reasons, mention well-known
events as reasons, and be coherent and consistent with prior knowledge. This stresses the
need to adopt XAI techniques that are not solely data-driven (statistical learning) but embed
semantic-driven symbolic reasoning in the prediction models from the ground up. Research is
scarce in this direction, with only a handful of other projects targeting XAI-based hybrid AI for
manufacturing, such as AI4EU (https://www.ai4europe.eu/) or XMANAI (https://ai4manufacturing.eu/). Humans expect explainable decisions based
on what they consider commonsense (i.e., simple facts about people and everyday life, data
evidence, and causal reasoning).
      </p>
      <p>Commonsense knowledge (CSK) explanation is the most meaningful and sought-after
explanation technique [12]. According to the state of the art, incorporating and implementing CSK
capabilities into AI can enhance the overall manufacturing potential and accelerate the growth
of AI applications in the industry [13]. However, representing CSK related to manufacturing is
challenging [14]. In the literature, significant progress has been made in four areas related to
CSK in AI, mainly reasoning about taxonomic categories, logic, time and space, and actions
[15].</p>
      <p>Yet, there seems to be no CSK for the industry that integrates all main aspects of manufacturing
process requirements. The manufacturing commonsense knowledge (MACS) should cover
generic manufacturing background knowledge that human experts, such as machinists, planners,
and shopfloor managers, carry as part of their experience and draw on to understand
and reason about the information (or situation) they are dealing with (e.g., a machine needs
some kind of energy, a machine can break down, the production process needs raw materials,
etc.). Generating commonsense explanations requires integrating richer domain knowledge
[16] represented through ontologies.</p>
      <p>The ontologies should cover production-relevant domains, including machine capabilities,
production process, product specifications, raw materials, etc.</p>
      <p>We examine current methods for manufacturers to increase transparency in AI system
decision-making, following the state-of-the-art methodology structure shown in Fig. 1. Two promising
areas of XAI are ontology-based (O-XAI) and semantic-based (S-XAI), which use semantic
information to provide human-readable explanations of AI decisions. Translating AI algorithm
decision paths to meaningful explanations using semantics, O-XAI, and S-XAI helps humans
identify cross-cutting concerns influencing AI system decisions. We discuss the pros and cons
of using O-XAI and S-XAI systems in manufacturing and future research potential to guide
researchers and practitioners in utilizing these explainable systems for decision-making [18].</p>
      <p>"CSK and Hybrid AI for Trustful and Flexible Manufacturing 4.0" (CHAIKMAT 4.0) is a research
project funded by the French National Research Agency (ANR) that aims to add flexibility and
transparency to manufacturing through trustworthy automated decision-making.</p>
      <p>In the context of flexible manufacturing, the selection of resources must also consider the
dynamically changing conditions of the shop floor, including factors such as the age of the
machines, maintenance histories, and the availability of operators. These considerations are
crucial to determining the actual performance of machines and equipment. CHAIKMAT’s core
objective is to evaluate whether available machines can execute specific production processes
effectively.</p>
      <p>The project seeks to add flexibility and transparency to manufacturing through reliable
automatic decision-making systems. These systems are designed to provide human experts
with meaningful explanations about decisions, using MACS to make the process clear and
understandable. To meet these challenges, CHAIKMAT proposes a hybrid approach that
bridges the gap between human expertise and AI-based decision-making. This strategy promotes
efficient resource use and enhanced industrial flexibility, laying the groundwork for more
advanced manufacturing operations [17].</p>
    </sec>
    <sec id="sec-2">
      <title>2. Thesis Objectives</title>
      <p>The primary objective of this thesis, within the scope of the project, is to investigate the use
and effects of ontology-based models in manufacturing in the context of semantic reasoning,
explainability, and efficient formalization of machine capabilities. The focus is primarily on
robotics, with the intention of improving the explainability of how robots are assigned tasks
based on their capabilities. This involves developing and validating ontological models to
encapsulate MACS and robotics capabilities for decision-making and explainability. The
objectives are outlined below:
• Formalization of machine specifications that include capabilities, capacities, functions,
quality, and process characteristics, focusing on robotics.
• Establish a framework for identifying MACS patterns, extracting MACS, formalizing
MACS using standard vocabulary, converting them into semantic rules, and leveraging
MACS patterns within decision-making processes.
• Utilization of machine capabilities and MACS patterns, alongside a neural network
framework, to explain whether an existing set of machines can perform a specific task
based on robot capabilities.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Research Questions</title>
      <p>Developing methods for utilizing machine specifications and MACS patterns, along with
integrating a semantics-based explainable AI (S-XAI) framework, is crucial to achieving the primary
objectives of this thesis. These elements will enhance decision-making processes, increase user
trust, and provide transparency. The following research questions have been formulated to
guide this investigation:
• How can we formalize key notions such as Capability, Capacity, Function, Quality, and
Process Characteristics in the context of robotics?</p>
      <p>• How can MACS be extracted, formalized, and integrated into decision-making processes
to enhance explainability?
• How can ontologies and knowledge graphs facilitate the creation of explanations that
align with human understanding while enhancing the use of S-XAI in manufacturing
decision-making processes?</p>
    </sec>
    <sec id="sec-4">
      <title>4. Contribution</title>
      <p>This thesis presents three critical contributions to improving the explainability of the
decision-making process in manufacturing by integrating robotics capabilities, MACS, and S-XAI.</p>
      <p>Our first contribution is the development of the Robotic Capability Ontology (RCO) [19], an
application ontology specifically designed to model robotic capabilities. RCO utilizes the
Manufacturing Service Description Language (MSDL) [20], a domain reference ontology created
for manufacturing services and aligned with the Basic Formal Ontology (BFO) [21], the Industrial
Ontologies Foundry (IOF) core ontology [22], the Information Artifact Ontology (IAO) [23], and the
Relations Ontology (RO) [24], as shown in Fig. 2. MSDL’s modular structure and domain-neutral classes allow RCO
to describe and expand upon robotic capabilities accurately. This enables the design of an
ontological model that captures both a robot’s advertised capabilities, as stated in its
specification manual, and its operational capabilities, i.e., its actual capabilities in an
operational environment.</p>
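      <p>A minimal sketch of this distinction, under the assumption of simplified, illustrative names (these are not actual RCO terms): the figure from the specification manual is degraded by shop-floor conditions, such as machine wear, before any feasibility check is made.</p>
      <preformat>
```python
# Illustrative sketch only: class and attribute names are assumptions,
# not RCO vocabulary.
from dataclasses import dataclass

@dataclass
class AdvertisedCapability:
    name: str
    repeatability_mm: float      # value claimed in the specification manual

@dataclass
class OperationalCapability:
    advertised: AdvertisedCapability
    wear_factor: float           # 1.0 = like new; grows with machine age

    @property
    def repeatability_mm(self) -> float:
        # operational repeatability degrades with wear
        return self.advertised.repeatability_mm * self.wear_factor

def can_perform(cap: OperationalCapability, required_mm: float) -> bool:
    # a task is matched against the operational figure, not the advertised one
    return required_mm >= cap.repeatability_mm
```
      </preformat>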
      <p>Our second contribution introduces the concept of MACS. We propose a methodology for
identifying MACS patterns, extracting, formalizing, and modeling this knowledge, and a
standardized vocabulary that organizes MACS into semantic rules, as shown in Fig. 3. These semantic
rules are then expressed in query and rule languages such as SPARQL and Datalog, which are
used to create a MACS Knowledge Graph (MACS-KG) to demonstrate the applicability of the
proposed method.</p>
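      <p>As a rough illustration of how one MACS pattern ("a machine needs some kind of energy") might become an executable rule over a knowledge graph; the vocabulary below is assumed for illustration and is not the project's actual vocabulary.</p>
      <preformat>
```python
# The same pattern written as a SPARQL CONSTRUCT, kept here for reference.
MACS_RULE_SPARQL = """
CONSTRUCT { ?m :needs :Energy }
WHERE     { ?m a :Machine }
"""

def apply_needs_energy_rule(triples):
    """Datalog-style forward chaining: infer (m, :needs, :Energy)
    for every m asserted to be a :Machine."""
    inferred = set(triples)
    for (s, p, o) in triples:
        if p == "rdf:type" and o == ":Machine":
            inferred.add((s, ":needs", ":Energy"))
    return inferred

# toy knowledge graph as a set of triples
kg = {(":robot1", "rdf:type", ":Machine"),
      (":order7", "rdf:type", ":WorkOrder")}
macs_kg = apply_needs_energy_rule(kg)
```
      </preformat>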
      <p>Our third contribution is the development of an S-XAI framework, as shown in Fig. 4, that incorporates
a neural network model trained on historical data about different tasks to predict robotic
operational capabilities, such as ’Repeatability’ and ’Precision’, on a given set of coordinates for
a new task.</p>
      <p>Additionally, utilizing XAI techniques such as Local Interpretable Model-Agnostic Explanations (LIME),
Partial Dependence Plots (PDP), and Permutation Feature Importance (PFI) for the prediction
of operational capabilities from neural networks alongside natural language explanations based
on MACS patterns, the system provides clear, logical, and understandable explanations for each
prediction.</p>
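      <p>Of the techniques above, Permutation Feature Importance is the simplest to sketch from scratch: shuffle one feature at a time and measure how much the model's error grows. The tiny linear "model" and data below are illustrative stand-ins, not the thesis's trained network.</p>
      <preformat>
```python
import random

def mse(model, X, y):
    # mean squared error of the model on (X, y)
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_repeats=30, seed=0):
    rng = random.Random(seed)
    base = mse(model, X, y)
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)                       # break feature j only
            Xp = [row[:j] + [v] + row[j+1:] for row, v in zip(X, col)]
            deltas.append(mse(model, Xp, y) - base)
        importances.append(sum(deltas) / n_repeats)
    return importances

# toy model: prediction depends only on feature 0
model = lambda x: 2.0 * x[0]
X = [[float(i), float(i % 3)] for i in range(20)]
y = [2.0 * row[0] for row in X]
imps = permutation_importance(model, X, y)
# shuffling feature 0 hurts the error; shuffling the unused feature 1 does not
```
      </preformat>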
    </sec>
    <sec id="sec-5">
      <title>5. Summary</title>
      <p>We propose an S-XAI framework to address the concern about explainability issues related to
the decision-making process in the manufacturing industry. The framework combines neural
networks for predictive analysis of robot operational capabilities with rule-based reasoning
grounded in MACS. This approach brings transparency and explainability to decision-making,
ensuring stakeholders can trust and understand automated decisions.</p>
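      <p>The combination described above can be sketched as follows; the heuristic stand-in for the network, the threshold logic, and the explanation wording are all assumptions for illustration, not the framework's actual components.</p>
      <preformat>
```python
def predict_repeatability(task_coords):
    # stand-in for the trained neural network: a fixed heuristic in which
    # predicted repeatability (mm) grows with the reach the task requires
    reach = max(abs(c) for p in task_coords for c in p)
    return 0.03 + 0.0001 * reach

def explain_assignment(task_coords, required_mm):
    # rule layer: turn the accept/reject decision into a plain-language
    # explanation, in the spirit of MACS-grounded explanations
    predicted = predict_repeatability(task_coords)
    feasible = required_mm >= predicted
    if feasible:
        reason = ("predicted repeatability %.3f mm meets the %.3f mm tolerance"
                  % (predicted, required_mm))
        verdict = "accepted"
    else:
        reason = ("predicted repeatability %.3f mm exceeds the %.3f mm tolerance"
                  % (predicted, required_mm))
        verdict = "rejected"
    return feasible, "Task %s: %s." % (verdict, reason)
```
      </preformat>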
      <p>A vital feature of the framework is the ability to integrate symbolic and sub-symbolic
reasoning paradigms, enabling real-time, explainable decisions in manufacturing environments. By
incorporating a neural network, the system will be scalable with the increasing data volume
and continuously improve through active learning. At the same time, the use of manufacturing
commonsense knowledge ensures that explanations provided to users are contextually relevant
and easy to comprehend, fostering user trust and system acceptance.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work is performed within the CHAIKMAT project funded by the French National Research
Agency (ANR) under grant agreement ANR-21-CE10-0004-01.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[9] Bianchi, F., Rossiello, G., Costabello, L., Palmonari, M., Minervini, P.: Knowledge graph embeddings and explainable AI. arXiv (2020). https://doi.org/10.3233/SSW200011</p>
      <p>[10] Lombrozo, T.: Explanation and abductive inference. In: The Oxford Handbook of Thinking and Reasoning, pp. 260–276. Oxford University Press, New York (2012)</p>
      <p>[11] Thagard, P.: Explanatory coherence. Behav. Brain Sci. 12(3), 435–467 (1989). https://doi.org/10.1017/S0140525X00057046</p>
      <p>[12] Reiter, E.: Natural Language Generation Challenges for Explainable AI. In: Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019). https://doi.org/10.18653/v1/w19-8402</p>
      <p>[13] Logic, language and commonsense. Artificial Intelligence, pp. 169–176 (1987). https://doi.org/10.1016/b978-0-08-034112-5.50020-6</p>
      <p>[14] Rehse, J.-R., Mehdiyev, N., Fettke, P.: Towards explainable process predictions for Industry 4.0 in the DFKI-Smart-Lego-Factory. KI - Künstliche Intelligenz 33(2), 181–187 (2019). https://doi.org/10.1007/s13218-019-00586-1</p>
      <p>[15] Davis, E.: Logical formalizations of commonsense reasoning: a survey. J. Artif. Intell. Res. 59, 651–723 (2017). https://doi.org/10.1613/jair.5339</p>
      <p>[16] Panetto, H., Debruyne, C., Hepp, M., Lewis, D., Ardagna, C. A., Meersman, R.: Correction to: On the Move to Meaningful Internet Systems. In: On the Move to Meaningful Internet Systems: OTM 2019 Conferences, C1 (2019). https://doi.org/10.1007/978-3-030-33246-4_47</p>
      <p>[17] Sarkar, A., Naqvi, M. R., Elmhadhbi, L., Sormaz, D., Archimede, B., Karray, M. H.: CHAIKMAT 4.0 - Commonsense Knowledge and Hybrid Artificial Intelligence for Trusted Flexible Manufacturing, pp. 455–465. Springer International Publishing (2023). https://doi.org/10.1007/978-3-031-17629-6_47</p>
      <p>[18] Naqvi, M. R., Elmhadhbi, L., Sarkar, A., Archimede, B., Karray, M. H.: Survey on ontology-based explainable AI in manufacturing. Journal of Intelligent Manufacturing (2024). https://doi.org/10.1007/s10845-023-02304-z</p>
      <p>[19] Naqvi, M. R., Sarkar, A., Ameri, F., Araghi, S. N., Karray, M. H.: Application of MSDL in modeling capabilities of robots (2023). https://ceur-ws.org/Vol-3595/paper7.pdf</p>
      <p>[20] Ameri, F., Dutta, D.: An Upper Ontology for Manufacturing Service Description. In: Volume 3: 26th Computers and Information in Engineering Conference (2006). https://doi.org/10.1115/detc2006-99600</p>
      <p>[21] Otte, J. N., Beverley, J., Ruttenberg, A.: BFO: Basic Formal Ontology. Applied Ontology 17, 17–43 (2022). https://doi.org/10.3233/AO-220262</p>
      <p>[22] Drobnjakovic, M., Kulvatunyou, B., Ameri, F., Will, C., Smith, B., Jones, A.: The Industrial Ontologies Foundry (IOF) core ontology (2022)</p>
      <p>[23] Smith, B., Malyuta, T., Rudnicki, R., Mandrick, W., Salmen, D., Morosoff, P., ..., Parent, K.: IAO-Intel: an ontology of information artifacts in the intelligence domain (2013)</p>
      <p>[24] Wildman, W. J.: An introduction to relational ontology. In: The Trinity and an Entangled World: Relationality in Physical Science and Theology, pp. 55–73 (2010)</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Nogalski</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Niewiadomski</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Szpitter</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Agility Versus Flexibility? The Perception of Business Model Maturity in Agricultural Machinery Sector Manufacturing Companies</article-title>
          .
          <source>Central European Management Journal</source>
          ,
          <year>2020</year>
          (3),
          <fpage>57</fpage>
          -
          <lpage>97</lpage>
          . https://doi.org/10.7206/cemj.2658-0845.27
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Järvenpää</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Siltala</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hylli</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Lanz</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Capability Matchmaking Procedure to Support Rapid Configuration and Re-configuration of Production Systems</article-title>
          .
          <source>Procedia Manufacturing</source>
          ,
          <volume>11</volume>
          ,
          <fpage>1053</fpage>
          -
          <lpage>1060</lpage>
          . https://doi.org/10.1016/j.promfg.2017.07.216
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lei</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zheng</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making</article-title>
          .
          <source>Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <volume>29</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
          . https://doi.org/10.1145/3544548.3581058.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Who Should I Trust: Human-AI Trust Model in AI Assisted Decision-Making</article-title>
          .
          <source>Lecture Notes in Education Psychology and Public Media</source>
          ,
          <volume>41</volume>
          (
          <issue>1</issue>
          ),
          <fpage>236</fpage>
          -
          <lpage>241</lpage>
          . https://doi.org/10.54254/2753-7048/41/20240805
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>R</surname>
          </string-name>
          ,
          <string-name>
            <surname>J</surname>
          </string-name>
          . (
          <year>2024</year>
          ).
          <article-title>Transparency in AI Decision Making: A Survey of Explainable AI Methods and Applications</article-title>
          .
          <source>Advances in Robotic Technology</source>
          ,
          <volume>2</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          . https://doi.org/10.23880/art-16000110
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Loebner</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <source>Understanding Semantics</source>
          . Routledge (
          <year>2013</year>
          ) https://doi.org/10.4324/9780203528334
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          Ontology (
          <year>2020</year>
          ). https://doi.org/10.4135/9781526421036869920
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Seeliger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pfaff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krcmar</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Semantic web technologies for explainable machine learning models: a literature review</article-title>
          .
          <source>CEUR Workshop Proc</source>
          .
          <volume>2465</volume>
          (
          <issue>October</issue>
          ),
          <fpage>30</fpage>
          -
          <lpage>45</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>