<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Transparent and Adaptive AI Assistant for Teaching Knowledge Engineering</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Stefani Tsaneva</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Laura Waltersdorfer</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Majlinda Llugiqi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marta Sabou</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Vienna University of Economics and Business</institution>
          ,
          <addr-line>Vienna</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>As generative AI tools become more widespread, students are increasingly using them for assistance with complex tasks such as modeling ontology constraints. While the success of large language models has been widely explored, in-use applications remain underdeveloped, and experimental findings are often inaccessible to students and novice engineers. As a result, learners do not fully benefit from AI-assisted support or fail to critically engage with AI-generated outputs. To bridge this gap, we propose a transparent, research-informed AI Assistant framework that follows hybrid intelligence principles and aims to support Knowledge Engineering education, with a focus on modeling logical ontology constraints. Preliminary results suggest that such a system can improve the accuracy of student-generated ontology models by over 10%.</p>
      </abstract>
      <kwd-group>
        <kwd>Knowledge Engineering</kwd>
        <kwd>Ontology Evaluation</kwd>
        <kwd>Hybrid Intelligence</kwd>
        <kwd>Transparent AI Systems</kwd>
        <kwd>Large Language Models</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Knowledge Engineering (KE) comprises a range of activities such as knowledge acquisition and its
representation in semantic models, such as ontologies [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Traditionally, KE demands high manual
effort to define, implement, and validate domain-specific requirements. Yet, there is a lack of tool
support for many KE tasks [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ], increasing the risk of modeling errors, particularly when curators
lack advanced KE training or deal with the modeling of complex logical constraints [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. For example,
the statement “Every Professor supervises at least one Student” can be modeled by defining the class
Professor as equivalent to individuals who supervise at least one Student. While such modeling is
logically consistent, it implies that anyone who supervises a student is, by definition, a professor—an
unintended consequence. Such semantic inaccuracies cannot be detected by logical reasoners and
traditionally require validation by domain experts or skilled knowledge engineers [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. Given the
complexity of the ontology modeling task and limited specialized tools to support ontology curation,
students and novice knowledge engineers often turn to generative AI tools [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
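      <p>For illustration, the following minimal Python sketch simulates the set-theoretic reading of the two axiom patterns over a toy domain; the individuals and the supervision relation are illustrative assumptions, not data from this work:</p>

```python
# Toy extensional semantics for the two readings of the axiom discussed above.
# Individuals and the supervises relation are illustrative assumptions.
supervises = {
    "prof_jones": {"kim"},  # a professor supervising a student
    "anna": {"ben"},        # a PhD student co-supervising an MSc student
}
students = {"kim", "ben", "anna"}
professors = {"prof_jones"}     # the intended extension of Professor

def supervises_some_student(x):
    """True if x supervises at least one Student (existential restriction)."""
    return any(y in students for y in supervises.get(x, set()))

# SubClassOf reading (Professor is a subclass of the restriction):
# every professor supervises a student, which matches the intention.
assert all(supervises_some_student(p) for p in professors)

# EquivalentClass reading additionally forces the converse: anyone satisfying
# the restriction is inferred to be a Professor.
inferred_professors = {x for x in supervises if supervises_some_student(x)}
print(sorted(inferred_professors))  # ['anna', 'prof_jones'] -- 'anna' is misclassified
```

      <p>Both readings are logically consistent; only the equivalence variant produces the unintended classification, which is why a reasoner alone cannot flag it.</p>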
      <p>
        While large language models (LLMs) can provide support, they also come with inherent challenges
and limitations such as lack of reasoning skills [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and hallucinations leading to inaccurate or misleading
claims [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Yet students frequently over-rely on AI-generated outputs, accepting them without sufficient
critical evaluation, especially when they lack knowledge of the subject [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ].
      </p>
      <p>
        In the context of KE, using an LLM that lacks the necessary capabilities for a concrete
KE task can fail to improve the quality of the developed resource and may even degrade it [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Although
some research has explored which LLMs perform best [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] or how to prompt for ontology-evaluation
tasks [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], these insights often remain inaccessible to the stakeholders who need them most. Students
and novice practitioners are rarely exposed to such experimental results, in part because tools based
on these findings are seldom developed. As a result, students frequently rely on familiar LLM-based
applications such as ChatGPT even when better models might be available [
        <xref ref-type="bibr" rid="ref6 ref9">6, 9</xref>
        ]. This gap highlights
the need for developing tools and educational resources that bridge the divide between research findings
and practical applications in KE.
      </p>
      <p>To address these limitations and extend the state of the art, we propose an AI Assistant framework
relying on hybrid (human-AI) intelligence principles, designed to support students in ontology creation
and evaluation, with a focus on correctly modeling constraints (i.e., cardinality, universal, and existential
quantifiers). By combining multiple LLMs and assigning them to sub-tasks according to their
experimentally observed strengths, the application offers context-sensitive support throughout different stages
of the modeling process. Moreover, we aim to build trace-based transparency into the framework through
advanced system tracing. Such a setup would allow for contextual explanations of generated outputs, the
computation of custom confidence scores based on past performance, and the collection of traces that can
support future adaptations of the system. The envisioned framework would allow students to benefit
from AI-generated assistance while minimizing the risk of incorrect or misleading outputs. Ultimately,
the proposed approach seeks to improve both the quality of student-generated ontologies and the
learning experience itself. Preliminary results suggest that the AI-assisted approach can increase the
accuracy of created ontology models by over 10%.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Envisioned Application</title>
      <p>We propose an AI Assistant designed to (1) support students in detecting and classifying
constraint-related modeling errors, (2) explain detected mistakes in an accessible and educational manner, and
(3) generate possible corrections. The system, illustrated in Fig. 1, comprises several key components: first,
multiple LLMs are integrated to create a multi-functional AI Assistant that leverages the complementary
strengths of different models; second, an audit layer is incorporated to ensure system transparency and
trustworthiness; third, a feedback loop involving both student and instructor input fosters a hybrid
human–AI co-learning process and supports continuous system adaptation.</p>
      <p>Figure 1: Overview of the envisioned AI Assistant. A multi-LLM workflow covers constraint
generation, mistake detection, mistake classification, and mistake explanation; the AI Assistant provides
assistance to the student, forming a human–AI team. An audit layer records interaction details, AI
assistance metadata and annotations, AI assistance ratings, LLM availability, and LLM capability scores.
Research findings inform the initial workflow design, and a Workflow Manager, which the course
instructor can invoke for review, adapts the workflow based on LLM performance and availability.</p>
      <sec id="sec-2-16">
        <title>Framework Components</title>
        <p>
          Multi-LLM Workflow. The system distributes KE task responsibilities according to research-reported
strengths of individual models in [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. For instance, GPT-4o has demonstrated strong adherence to
instructions, making it well suited for modeling mistake classification. In contrast, Claude Sonnet excels
in generating corrected models due to its generative flexibility. For the modeling mistake detection task,
optimal performance varies across models depending on the constraint type. Integrating multiple LLMs
therefore enables a task-specific, performance-driven workflow that leverages the unique advantages of
each model.
        </p>
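        <p>For illustration, this capability-informed assignment can be sketched as a ranked routing table with availability-based fallback; the model identifiers, rankings, and fallback order below are illustrative assumptions rather than the system's actual configuration:</p>

```python
# Hypothetical routing table for the capability-informed workflow described
# above; model identifiers and rankings are illustrative assumptions.
ROUTING = {
    ("mistake_classification", None): ["gpt-4o", "claude-sonnet"],
    ("model_correction", None): ["claude-sonnet", "gpt-4o"],
    # For mistake detection the best model depends on the constraint type:
    ("mistake_detection", "cardinality"): ["claude-sonnet", "gpt-4o"],
    ("mistake_detection", "existential"): ["llama-3.3-70b", "claude-sonnet"],
    ("mistake_detection", "universal"): ["deepseek-v3", "claude-sonnet"],
}

def select_model(task, constraint_type=None, available=None):
    """Return the best-ranked model for the task that is currently available."""
    for model in ROUTING[(task, constraint_type)]:
        if available is None or model in available:
            return model
    raise RuntimeError(f"no model available for {task}/{constraint_type}")

# If DeepSeek V3 is unavailable, universal-restriction detection falls back
# to the next-ranked model instead of failing.
select_model("mistake_detection", "universal", available={"claude-sonnet", "gpt-4o"})
```

        <p>Keeping the ranking per (task, constraint type) pair is what lets the workflow exploit each model's experimentally observed strengths while degrading gracefully when a model is down.</p>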
        <p>
          Trace-based Transparency. To mitigate the challenges of transparency and traceability associated with
multi-agent workflows, we integrate a Semantic-Web-based audit layer into the system following the
AuditMAI methodology [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Based on identified audit questions, this layer captures detailed traces,
including results of experimental investigations (research findings) and outcomes of prior executions
(e.g., detected mistake types, LLM availability, or student feedback).
        </p>
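        <p>As a sketch, one audit trace entry could be represented as a simple record; the field names below are assumptions derived from the trace categories mentioned in the text, not a specification of the audit layer:</p>

```python
from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class AuditTrace:
    """One audit-layer record per assistance event; field names are assumptions
    derived from the trace categories named in the text."""
    task: str                       # e.g. "mistake_detection"
    constraint_type: str            # e.g. "existential"
    model: str                      # which LLM produced the output
    selection_reason: str           # why this model was chosen
    detected_mistake: Optional[str] = None
    student_rating: Optional[int] = None   # feedback collected afterwards
    timestamp: float = field(default_factory=time.time)

trace = AuditTrace(
    task="mistake_detection",
    constraint_type="existential",
    model="llama-3.3-70b",
    selection_reason="highest recall for existential restrictions",
    detected_mistake="universal quantifier used instead of existential",
)
```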
        <p>These audit traces can be leveraged to provide contextual information for each AI suggestion, such
as which LLM generated the output and why it was chosen (e.g., superior past performance, fallback
in case of unavailability). In addition, the audit layer can be utilized to compute custom confidence
scores, based on model consensus and performance history, offering users insight into the reliability of
AI outputs. This design encourages critical engagement with AI-generated content, rather than passive
acceptance.</p>
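        <p>A minimal sketch of such a custom confidence score, assuming it combines the fraction of agreeing models with their historical accuracy; the formula, model names, and accuracy figures are illustrative assumptions:</p>

```python
# Illustrative custom confidence score for an AI suggestion, combining model
# consensus with each model's historical accuracy (the weighting scheme,
# model names, and accuracy figures are assumptions, not measured values).
def confidence(votes, history):
    """votes: model name to verdict; history: model name to past accuracy."""
    verdicts = list(votes.values())
    majority = max(set(verdicts), key=verdicts.count)
    agreeing = [m for m, v in votes.items() if v == majority]
    consensus = len(agreeing) / len(votes)          # share of models agreeing
    reliability = sum(history[m] for m in agreeing) / len(agreeing)
    return round(consensus * reliability, 3)

votes = {"claude-sonnet": "incorrect", "gpt-4o": "incorrect", "deepseek-v3": "correct"}
history = {"claude-sonnet": 0.82, "gpt-4o": 0.78, "deepseek-v3": 0.74}
confidence(votes, history)  # 2/3 consensus, weighted by the agreeing models' accuracy
```

        <p>A low score would then cue the student to inspect the suggestion rather than accept it outright.</p>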
        <p>Co-learning and adaptation. Building on ideas from human–AI delegation frameworks [14], the
system includes a Workflow Manager that operates outside the human–AI team, consisting of the
student and AI Assistant. When recurring issues or misclassifications are flagged by the auditing layer,
workflows can be adapted, such as switching LLMs for specific tasks, or escalated to instructors for
review. This enables targeted intervention and the resulting feedback loop fosters co-learning: students
gain from AI guidance, while the system improves through ongoing human oversight and real-world
educational interaction.</p>
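        <p>The adaptation step can be sketched as a simple rule; the threshold and the demotion strategy are illustrative assumptions, not the framework's actual policy:</p>

```python
# Sketch of the Workflow Manager's adaptation rule: when the audit layer flags
# recurring problems with the model currently assigned to a task, switch to the
# next-ranked model, or escalate to the instructor if no alternative exists.
# The threshold and ranking data are illustrative assumptions.
def adapt(task, error_counts, ranking, threshold=3):
    """error_counts maps model name to errors flagged since the last review."""
    current = ranking[task][0]
    if error_counts.get(current, 0) < threshold:
        return "no change"
    if len(ranking[task]) > 1:
        ranking[task].append(ranking[task].pop(0))  # demote the current model
        return f"switched {task} to {ranking[task][0]}"
    return f"escalated {task} to instructor review"

ranking = {"mistake_classification": ["gpt-4o", "claude-sonnet"]}
adapt("mistake_classification", {"gpt-4o": 4}, ranking)
# afterwards ranking["mistake_classification"][0] == "claude-sonnet"
```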
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Preliminary Results and Outlook</title>
      <p>
        We leverage findings and collected annotations from a recent experimental assessment of LLM
capabilities [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] to simulate initial AI-assisted workflows within the collaborative KE framework. We
outline two concrete AI-supported workflows: in the first, students describe their intended constraint,
and Claude Sonnet (claude-3-7-sonnet-20250219) generates the corresponding model. In the
second, students create their own models, and various LLMs are used to detect potential errors. In
particular, we select the LLMs with the highest mistake-detection recall scores reported in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]: Claude
Sonnet for cardinality constraints, Llama 3.3 (Llama-3.3-70B-Instruct-Turbo) for existential restrictions,
and DeepSeek V3 for universal restrictions. To enable AI-assisted correction, constraints flagged as
incorrectly modeled are replaced by alternative models generated by Claude Sonnet. Simulation
results suggest that students’ standalone performance in modeling logical ontology constraints (68.29%
accuracy) is improved by more than 10 percentage points in both AI-assisted workflows: constraint
generation achieves 79.27% accuracy, while constraint validation and correction reaches 81.71%.
      </p>
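      <p>The reported accuracies imply the following absolute improvements over standalone student performance (figures taken from the text):</p>

```python
# Improvement margins implied by the reported accuracies (figures from the text).
student_alone = 68.29   # standalone student accuracy
generation    = 79.27   # AI-generated constraints (Claude Sonnet)
validation    = 81.71   # per-type detection plus Claude Sonnet correction

print(round(generation - student_alone, 2))  # 10.98 percentage points
print(round(validation - student_alone, 2))  # 13.42 percentage points
```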
      <p>
        In comparison, AI-assisted workflows using GPT-4o, a model most familiar to students [
        <xref ref-type="bibr" rid="ref6 ref15">6, 15</xref>
        ], result
in lower performance, failing to leverage the full potential of capability-informed LLM workflows.
Constraint generation with GPT-4o results in 67.07% accuracy, underperforming standalone student
modeling, while the GPT-assisted constraint validation and correction reaches 71.95%, offering only a
slight improvement.
      </p>
      <p>It should be noted that the simulated workflows do not yet incorporate AI-generated mistake
explanations and do not consider cases in which students would revise their own models instead of fully relying
on AI-generated suggestions. The LLM-generated explanations and contextual information provided
by the auditing layer have the potential to further improve ontology modeling accuracy by fostering
deeper understanding and enabling informed student revisions. In future work, we will implement
the proposed framework and utilize it in Knowledge Engineering university courses. To assess the
effectiveness and usability of the system, we plan comprehensive user studies, including
feedback surveys and interviews with students and instructors. This evaluation will inform future
refinements of the framework and provide empirical insights into how LLM-based AI Assistants can be
responsibly and meaningfully integrated into Knowledge Engineering education.</p>
      <p>This research was funded in whole or in part by the Austrian Science Fund (FWF) BILAI (10.55776/COE12)
and HOnEst (V 745) projects. For open access purposes, the author has applied a CC BY public copyright
license to any author accepted manuscript version arising from this submission. Additionally, the
work was supported by the PERKS (101120323) project, co-funded by the European Union. Views and
opinions expressed are, however, those of the authors only and do not necessarily reflect those of the
European Union. Neither the European Union nor the granting authority can be held responsible for
them.</p>
    </sec>
    <sec id="sec-4">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work the authors used ChatGPT-4 to suggest improvements to
the readability and language of the manuscript. After using this tool/service, the author(s) reviewed
and edited the content as needed and take(s) full responsibility for the content of the published article.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Studer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Benjamins</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fensel</surname>
          </string-name>
          ,
          <article-title>Knowledge engineering: Principles and methods</article-title>
          ,
          <source>Data &amp; Knowledge Engineering</source>
          <volume>25</volume>
          (
          <year>1998</year>
          )
          <fpage>161</fpage>
          -
          <lpage>197</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Carriero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Schreiberhuber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tsaneva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gonzalez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kim</surname>
          </string-name>
          , J. de Berardinis,
          <article-title>Ontochat: A framework for conversational ontology engineering using language models</article-title>
          ,
          <source>in: The Semantic Web: ESWC 2024 Satellite Events</source>
          , Springer Nature Switzerland, Cham,
          <year>2025</year>
          , pp.
          <fpage>102</fpage>
          -
          <lpage>121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Tsaneva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Käsznar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sabou</surname>
          </string-name>
          ,
          <article-title>Human-centric ontology evaluation: Process and tool support</article-title>
          ,
          <source>in: Int. Conf. on Knowledge Engineering and Knowledge Management</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>182</fpage>
          -
          <lpage>197</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rector</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Drummond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Horridge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rogers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Knublauch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Stevens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wroe</surname>
          </string-name>
          ,
          <article-title>Owl pizzas: Practical experience of teaching owl-dl: Common errors &amp; common patterns</article-title>
          ,
          <source>Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science</source>
          )
          <volume>3257</volume>
          (
          <year>2004</year>
          )
          <fpage>63</fpage>
          -
          <lpage>81</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Poveda-Villalón</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gómez-Pérez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Suárez-Figueroa</surname>
          </string-name>
          ,
          <article-title>Oops!(ontology pitfall scanner!): An on-line tool for ontology evaluation</article-title>
          ,
          <source>International Journal on Semantic Web and Information Systems (IJSWIS) 10</source>
          (
          <year>2014</year>
          )
          <fpage>7</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Strubreiter</surname>
          </string-name>
          ,
          <article-title>Larger Language Model usage when learning Ontology Engineering</article-title>
          ,
          <source>Bachelor's thesis</source>
          , Vienna University of Economics and Business,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Amirizaniani</surname>
          </string-name>
          , E. Martin,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sivachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mashhadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Shah</surname>
          </string-name>
          ,
          <article-title>Can llms reason like humans? assessing theory of mind reasoning in llms for open-ended questions</article-title>
          ,
          <source>in: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>44</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Frieske</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Ishii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. J.</given-names>
            <surname>Bang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Madotto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fung</surname>
          </string-name>
          ,
          <article-title>Survey of hallucination in natural language generation</article-title>
          ,
          <source>ACM Comput. Surv</source>
          .
          <volume>55</volume>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Freeman</surname>
          </string-name>
          ,
          <source>Student generative ai survey 2025</source>
          , Higher Education Policy Institute: London, UK (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Pitts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Rani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Mildort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.-M.</given-names>
            <surname>Cook</surname>
          </string-name>
          ,
          <article-title>Students' reliance on ai in higher education: Identifying contributing factors</article-title>
          ,
          <source>arXiv preprint</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Tsaneva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. B.</given-names>
            <surname>Herwanto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Llugiqi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sabou</surname>
          </string-name>
          ,
          <article-title>Knowledge Engineering with Large Language Models: A Capability Assessment in Ontology Evaluation, Submitted to Semantic Web Journal (Under Review)</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hwang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-H.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Llm-assisted ontology restriction verification with clustering-based description generation</article-title>
          ,
          <source>IEEE Access 13</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>L.</given-names>
            <surname>Waltersdorfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. J.</given-names>
            <surname>Ekaputra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Miksa</surname>
          </string-name>
          , M. Sabou,
          <article-title>AuditMAI: Towards an infrastructure for continuous AI auditing</article-title>
          ,
          <source>in: Austrian Symposium on AI, Robotics and Vision</source>
          (AIRoV), Innsbruck University Press,
          <year>2025</year>
          , pp.
          <fpage>4</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
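      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fuchs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Passarella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Conti</surname>
          </string-name>
          ,
          <article-title>Optimizing delegation in collaborative human-ai hybrid teams</article-title>
          ,
          <source>ACM Trans. Auton. Adapt. Syst</source>
          .
          <volume>19</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>H.</given-names>
            <surname>Johnston</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Wells</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Shanks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Boey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. N.</given-names>
            <surname>Parsons</surname>
          </string-name>
          ,
          <article-title>Student perspectives on the use of generative artificial intelligence technologies in higher education</article-title>
          ,
          <source>International Journal for Educational Integrity</source>
          <volume>20</volume>
          (
          <year>2024</year>
          )
          <fpage>2</fpage>
          .
        </mixed-citation>
      </ref>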
    </ref-list>
  </back>
</article>