                         The First International OpenKG Workshop:
                         Large Knowledge-Enhanced Models



                                                                                             August 3, 2024
                                                                                        Jeju Island, South Korea


                         1. Overview
                         Humankind accumulates knowledge about the world through the process of perceiving it, with
                         natural languages serving as the primary carrier of world knowledge. Representing and processing
                         this world knowledge has been a central objective of AI since its early days. Indeed, both LLMs and
                         KGs were developed to handle world knowledge, yet they exhibit distinct advantages and limitations.
                         LLMs excel in language comprehension and offer expansive coverage of knowledge, but they incur
                         significant training costs and struggle with factuality and logical reasoning. KGs provide highly
                         accurate and explicit knowledge representation, enabling more controlled reasoning and freedom from
                         hallucination, but they face scalability challenges and limited reasoning transferability. A deeper
                         integration of these two technologies promises a more holistic, reliable, and controllable
                         approach to knowledge processing in AI.
                            Natural languages encode world knowledge merely as sequences of words, whereas human
                         cognitive processes extend far beyond simple word sequences. Considering the intricate nature of
                         human knowledge, we advocate for research on Large Knowledge-Enhanced Models (LKMs), specifically
                         engineered to manage a diversified spectrum of knowledge structures. In this workshop, we explore
                         large models through the lens of “knowledge”. We investigate the role of symbolic knowledge, such
                         as Knowledge Graphs (KGs), in enhancing LLMs, and we are also interested in how LLMs can amplify
                         traditional symbolic knowledge bases.


                         2. Topics
                                • Large model knowledge enhancement
                                • Integration of LLM and symbolic KR
                                • Knowledge-injecting LLM pre-training
                                • Structure-inducing LLM pre-training
                                • Knowledge-augmented prompt learning
                                • Knowledge-enhanced instruction learning
                                • Graph RAG and KG RAG
                                • LLM-enhanced symbolic query and reasoning
                                • Large model knowledge extraction
                                • Large model knowledge editing
                                • Large model knowledge reasoning
                                • Knowledge-augmented multi-modal large models
                                • Multimodal learning for KGs and LLMs
                                • Knowledge-enhanced hallucination detection and mitigation
                                • Semantic tools for LLMs

                                • Knowledgeable AI agents
                                • Integration of LLM and KG for world models
                                • Domain-specific LLM training leveraging KGs
                                • Applications of combining KGs and LLMs
                                • Open resources combining KGs and LLMs


3. Editors and Organizing Committee
  • Ningyu Zhang, Assoc. Prof., Zhejiang University, Zhejiang, China
  • Tianxing Wu, Assoc. Prof., Southeast University, Jiangsu, China
  • Meng Wang, Assoc. Prof., Tongji University, Shanghai, China
  • Guilin Qi, Prof., Southeast University, Jiangsu, China
  • Haofen Wang, Distinguished Research Fellow, Tongji University, Shanghai, China
  • Huajun Chen, Prof., Zhejiang University, Zhejiang, China