<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Techniques, Tools, and Insights</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gunjan Singh</string-name>
          <email>gunjans@iiitd.ac.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Raghava Mutharaju</string-name>
          <email>raghava.mutharaju@iiitd.ac.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Knowledgeable Computing and Reasoning Lab, IIIT-Delhi</institution>
          ,
          <addr-line>New Delhi</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ontology</institution>
          ,
          <addr-line>OWL 2, Reasoner, Benchmarking</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>SCME</institution>
          ,
          <addr-line>Doctoral Consortium, Tutorials</addr-line>
          ,
          <institution>Project Exhibitions</institution>
          ,
          <addr-line>Posters and Demos</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Ontology-based reasoners are crucial for knowledge representation and reasoning across various domains, including healthcare and finance, as they facilitate informed decision-making. Despite their importance, evaluating and comparing reasoners remains challenging due to differences in ontology expressivity, dataset characteristics, and the nature of reasoning tasks. This tutorial will guide participants through key performance metrics, experimental design strategies, and data considerations required for effective benchmarking, equipping them with the knowledge to evaluate and enhance the capabilities of reasoners.</p>
      </abstract>
      <kwd-group>
        <kwd>Ontology</kwd>
        <kwd>OWL 2</kwd>
        <kwd>Reasoner</kwd>
        <kwd>Benchmarking</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Goals and Objectives</title>
      <sec id="sec-1-1">
        <title>1.1. Overall Goal</title>
      </sec>
      <sec id="sec-1-2">
        <title>1.2. Concrete Objectives</title>
        <p>To equip researchers, practitioners, and students with the knowledge and skills required to perform
rigorous and meaningful benchmarking of ontology-based reasoners. This will advance the development
of high-performance reasoning systems and enhance their application in diverse real-world scenarios.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Audience</title>
      <p>CEUR Workshop Proceedings (ISSN 1613-0073)</p>
      <p>• Target Audience: Researchers, practitioners, and students interested in ontology-based reasoning
and benchmarking techniques.
• Level: Basic to Advanced. The tutorial is designed to be accessible to participants with a
foundational understanding of Knowledge Graphs (KG) or Artificial Intelligence (AI) concepts,
while also providing advanced insights and practical knowledge for experienced researchers and
practitioners.
• Prerequisites: A basic understanding of knowledge representation and engineering concepts is
recommended. Familiarity with ontology tools and reasoning algorithms will be beneficial but is
not required.</p>
    </sec>
    <sec id="sec-5">
      <title>3. Topic Relevance and Novelty</title>
      <p>• Relevance to the Scope of ER: Ontologies are one of the central mechanisms for conceptual
modeling. Reasoning over ontologies to derive inferences is an important task often performed
on them. Benchmarking of ontology reasoners provides insights into the performance of
the reasoning system and the structure of the ontology, so the proposed tutorial is squarely
within the scope of the ER conference.
• Relevance to Practice: Participants will acquire practical skills in benchmarking techniques,
enabling them to evaluate and select appropriate reasoners for their domain-specific applications.
This knowledge is essential for developing efficient AI systems that depend on accurate and
reliable reasoning mechanisms over ontologies.
• Novel Aspects: To the best of our knowledge, there has not been a tutorial on this theme in the
recent past at ER and related venues such as ISWC, ESWC, KR, AAAI, and IJCAI. We therefore hope
that the ER participants will find this theme novel and interesting.</p>
    </sec>
    <sec id="sec-3">
      <title>4. Projected Benefits</title>
      <p>Targeted Knowledge Outcomes:
• Benchmarking Requirements: Understand the essential requirements for benchmarking
ontology-based reasoners, including key performance metrics, evaluation criteria, and the unique
challenges associated with different systems.
• Existing Methodologies: Gain insights into current benchmarking methodologies, recognize
their strengths and limitations, and understand how to apply these methodologies effectively.
• Practical Application: Develop the ability to apply benchmarking methodologies to evaluate
reasoners in diverse scenarios using practical tools and frameworks.
• Tool Selection and Configuration: Enhance skills in selecting and configuring appropriate
benchmarking tools and frameworks to improve the accuracy and relevance of evaluation results.
• Addressing Gaps: Identify gaps in existing benchmarking practices and explore potential
solutions to address these shortcomings, contributing to the advancement of benchmarking
techniques.</p>
    </sec>
    <sec id="sec-6">
      <title>5. Detailed Outline and Timetable</title>
      <p>1. Introduction and Motivation (15 minutes):
• Outline the goals and objectives of the tutorial.
• Provide an overview of ontology-based reasoning and its significance.
• Discuss the importance of benchmarking in advancing reasoning systems.
• Highlight existing RDF benchmarks such as LUBM, BSBM, SP2Bench, and WatDiv, and
identify gaps and the need for OWL 2 benchmarks.
2. Ontology Reasoners (15 minutes):
• Explore different types of ontology reasoners and their characteristics, including commonly
used ones such as Konclude (a tableau-based reasoner) and ELK (a rule-based reasoner),
discussing their strengths, limitations, supported profiles (e.g., ELK focuses on ℰℒ++), and
performance evaluation requirements for static reasoners.
3. Existing Benchmarking Methodologies (15 minutes):
• Conduct a detailed examination of the existing OWL2Bench [<xref ref-type="bibr" rid="ref1">1</xref>] benchmark and the
OntoGen benchmark, discussing their suitability for evaluating OWL 2 reasoners.
4. Hands-on Session (30 minutes): Participants will work on the following.
• Use a provided GitHub repository with a Docker container pre-configured with static
reasoners, and run several ontologies generated using different benchmarks on various
reasoners to observe and compare performance metrics. Participants will explore how
different reasoners handle the same ontology and how performance varies across different
benchmarks.
• Custom-build their ontologies using our tool, OntoGen [<xref ref-type="bibr" rid="ref2">2</xref>]. Participants will learn how to
create ontologies that fit their specific needs and see how these ontologies interact with
different reasoners.
• Visualize several ontologies using visualization tools such as WebVOWL to gain insights
into performance variations.
• Analyze and interpret the results, leading to a discussion of common pitfalls and best
practices for accurate and meaningful evaluations.
5. Open Challenges and Conclusion (15 minutes):
• Briefly introduce neurosymbolic ontology reasoning.
• Discuss future directions and challenges in benchmarking static, streaming, and
neurosymbolic reasoners.
• Share additional resources and references for further exploration.</p>
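      <p>The measurements taken in the hands-on session can be sketched with a small timing harness. This is a minimal illustration, not the tutorial's actual tooling: the reasoner binaries, command-line flags, and ontology paths below are placeholder assumptions that would need to match a real installation.</p>

```python
import statistics
import subprocess
import time

# Hypothetical command templates for two reasoners; the real binaries,
# flags, and classification entry points depend on each distribution.
REASONER_COMMANDS = {
    "konclude": ["Konclude", "classification", "-i", "{ontology}"],
    "elk": ["java", "-jar", "elk.jar", "--classify", "-i", "{ontology}"],
}


def build_command(reasoner: str, ontology: str) -> list[str]:
    """Fill an ontology path into a reasoner's command template."""
    return [arg.replace("{ontology}", ontology)
            for arg in REASONER_COMMANDS[reasoner]]


def time_runs(command: list[str], runs: int = 3, timeout: int = 600) -> dict:
    """Run a command several times and report wall-clock statistics."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(command, check=True, capture_output=True,
                       timeout=timeout)
        durations.append(time.perf_counter() - start)
    return {
        "mean_s": statistics.mean(durations),
        "median_s": statistics.median(durations),
        "min_s": min(durations),
    }
```

      <p>Repeating each run and reporting the median alongside the mean helps separate JVM warm-up and caching effects from the reasoner's steady-state performance, which is one of the pitfalls discussed in the analysis step.</p>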
      <p>• Get feedback from the participants on the tutorial.</p>
    </sec>
    <sec id="sec-7">
      <title>6. Tutorial Method</title>
      <sec id="sec-7-1">
        <title>6.1. Teaching Methods</title>
        <p>• Lectures and Presentations: Use slide presentations to explain key concepts, methodologies,
and benchmarking techniques.
• Hands-on Sessions: Provide a GitHub repository with a Docker container that includes
pre-configured static and streaming reasoners, along with sample ontologies. Participants can easily
use their own ontologies and run the reasoners without a complex installation setup.
• Interactive Discussions: Foster engagement through discussions and Q&amp;A sessions, addressing
participant questions and encouraging knowledge exchange.</p>
      </sec>
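      <p>A containerized setup of the kind described above could be invoked along these lines. This is only a sketch: the image name, mount point, and in-container entry point are illustrative assumptions, not the tutorial's actual repository or image.</p>

```python
# Hypothetical Docker image; the tutorial's real GitHub repository and
# image name are not reproduced here.
IMAGE = "example/reasoner-bench:latest"


def docker_run_command(ontology_dir: str, reasoner: str,
                       ontology: str) -> list[str]:
    """Compose a `docker run` invocation that bind-mounts a local ontology
    directory into the container and runs one reasoner on one ontology."""
    return [
        "docker", "run", "--rm",
        "-v", f"{ontology_dir}:/data",   # local ontologies -> /data
        IMAGE,
        reasoner,                        # assumed in-container entry point
        f"/data/{ontology}",
    ]


if __name__ == "__main__":
    cmd = docker_run_command("./ontologies", "elk", "univ-bench.owl")
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # would execute if the image existed
```

      <p>Bind-mounting the ontology directory lets participants benchmark their own OntoGen-generated ontologies without copying files into the image or performing any local reasoner installation.</p>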
      <sec id="sec-3-1">
        <title>6.2. Technology Requirements</title>
        <p>• Standard Equipment: PC projector for presentations and visual aids.</p>
        <p>• Additional Requirements: Internet access for live demonstrations and interactive tools.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Declaration on Generative AI</title>
      <p>In preparing this tutorial proposal, generative AI tools, specifically ChatGPT, were used solely for
grammar checking, spelling correction, and improving the readability of certain sentences. All
AI-suggested edits were carefully reviewed and refined by the authors. The conceptualization, structure,
and content of the proposal were developed entirely by the authors, and the use of ChatGPT was limited
to enhancing the clarity and presentation of the text.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] <string-name><given-names>G.</given-names> <surname>Singh</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Bhatia</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Mutharaju</surname></string-name>, <article-title>OWL2Bench: A Benchmark for OWL 2 Reasoners</article-title>, in: <source>The Semantic Web - ISWC 2020 - 19th International Semantic Web Conference, Athens, Greece, November 2-6, 2020, Proceedings, Part II</source>, volume <volume>12507</volume> of Lecture Notes in Computer Science, Springer, <year>2020</year>, pp. <fpage>81</fpage>-<lpage>96</lpage>.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] <string-name><given-names>G.</given-names> <surname>Singh</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Kumar</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Bhagat</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Bhatia</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Mutharaju</surname></string-name>, <article-title>OWL2Bench: Towards a Customizable Benchmark for OWL 2 Reasoners</article-title>, in: <source>Proceedings of the ISWC 2020 Demos and Industry Tracks, 19th International Semantic Web Conference (ISWC 2020), Globally online, November 1-6, 2020 (UTC)</source>, volume <volume>2721</volume> of CEUR Workshop Proceedings, CEUR-WS.org, <year>2020</year>, pp. <fpage>344</fpage>-<lpage>349</lpage>.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>