<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The First International OpenKG Workshop: Large Knowledge-Enhanced Models</article-title>
      </title-group>
      <conference>
        <conf-date>August</conf-date>
        <conf-loc>Jeju Island, South Korea</conf-loc>
      </conference>
      <abstract>
        <p>Humankind accumulates knowledge about the world in the process of perceiving it, with natural language as the primary carrier of world knowledge. Representing and processing this knowledge has been central to AI since its early days. Indeed, both LLMs and KGs were developed to handle world knowledge, but they exhibit distinct advantages and limitations. LLMs excel in language comprehension and offer expansive knowledge coverage, but they incur significant training costs and struggle with factual authenticity and logical reasoning. KGs provide highly accurate and explicit knowledge representation, enabling more controlled reasoning and remaining immune to hallucination, but they face scalability challenges and struggle with reasoning transferability. A deeper integration of these two technologies promises a more holistic, reliable, and controllable approach to knowledge processing in AI. Natural language encodes world knowledge merely as sequences of words, while human cognitive processes extend far beyond simple word sequences. Considering the intricate nature of human knowledge, we advocate for research on Large Knowledge-enhanced Models (LKMs), specifically engineered to manage a diversified spectrum of knowledge structures. In this workshop, we focus on exploring large models through the lens of “knowledge”. We expect to investigate the role of symbolic knowledge, such as Knowledge Graphs (KGs), in enhancing LLMs, and we are also interested in how LLMs can amplify traditional symbolic knowledge bases.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Outline</title>
      <list list-type="order">
        <list-item>
          <p>Overview</p>
        </list-item>
        <list-item>
          <p>Topics</p>
          <list list-type="bullet">
            <list-item><p>Knowledgeable AI agents</p></list-item>
            <list-item><p>Integration of LLMs and KGs for world models</p></list-item>
            <list-item><p>Domain-specific LLM training leveraging KGs</p></list-item>
            <list-item><p>Applications combining KGs and LLMs</p></list-item>
            <list-item><p>Open resources combining KGs and LLMs</p></list-item>
          </list>
        </list-item>
        <list-item>
          <p>Editors and Organizing Committee</p>
        </list-item>
      </list>
    </sec>
  </body>
</article>