<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Ontology-Guided On-Device Conversational Knowledge Capture with Large Language Models</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Tolga</forename><surname>Çöplü</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Haltia, Inc</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Arto</forename><surname>Bendiken</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Haltia, Inc</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrii</forename><surname>Skomorokhov</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Haltia, Inc</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Eduard</forename><surname>Bateiko</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Haltia, Inc</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Stephen</forename><surname>Cobb</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Haltia, Inc</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Ontology-Guided On-Device Conversational Knowledge Capture with Large Language Models</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">EBEDE9EC1C0A2EB3AE4C74EF0CCB9F79</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:21+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Generative AI applications must integrate users' personal information into the response generation process to offer an advanced user experience. One of the most effective methods for obtaining accurate and current user information is by capturing this data from AI interactions. This paper examines conversational knowledge capture using ontology and knowledge-graph approaches. We propose enhancing the large language model's (LLM) ability to capture precise and relevant information by training it with a subset of the KNOW ontology, which models personal knowledge. Our paper details the ontology-guided training process and evaluates the success of knowledge capture using a specially constructed dataset. Additionally, we emphasize the importance of privacy in handling personal information and investigate the implementation of knowledge capture with on-device language models. Our findings highlight the potential of on-device solutions to effectively capture personal knowledge while preserving user privacy.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Expectations for the quality and sophistication of human-AI interactions are steadily increasing. Generative AI applications are now expected to recognize users, understand their characteristics and preferences, and draw on this information to enhance interactions. A fundamental challenge in providing this level of user experience is capturing up-to-date knowledge about the user through conversations. This process of identifying and recording personal knowledge and preferences from user interactions is defined as conversational knowledge capture (CKC).</p><p>CKC presents several critical challenges. Key issues include determining which knowledge from conversations should be captured, how the captured knowledge should be represented, whether the captured knowledge requires updating previous records, and whether it duplicates existing records. Fortunately, the emergence of neurosymbolic approaches, which combine large language models (LLMs) and symbolic AI, has provided researchers with new perspectives to address these challenges <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4]</ref>. LLMs' capabilities in natural language processing can be integrated with the knowledge representation and factual reasoning abilities of knowledge graphs, enhanced by the structure, rules, and inference mechanisms offered by an ontology.</p><p>Another significant challenge related to CKC is ensuring the privacy of captured sensitive knowledge. Personal data, which is entirely owned by the user, should be considered vulnerable if it is sent to the cloud.<note place="foot">KBC-LM'24: Knowledge Base Construction from Pre-trained Language Models workshop at ISWC 2024. tolga@haltia.ai (T. Çöplü); arto@haltia.ai (A. Bendiken); andriy@haltia.ai (A. Skomorokhov); eduard@haltia.ai (E. Bateiko); steve@haltia.ai (S. Cobb)</note> 
On-device AI solutions, which do not require any data to leave the user's device, provide the most appropriate response to privacy needs <ref type="bibr" target="#b4">[5]</ref>. Capturing knowledge with the support of a local language model running on the device, securely storing this information in a local knowledge base, and utilizing it on the device when needed provides a suitable environment for maintaining the privacy of personal knowledge. However, on-device language models come with their own limitations. The size, capabilities, and power consumption of language models running on personal devices such as smartphones, tablets, and computers must be carefully considered <ref type="bibr" target="#b5">[6]</ref>. Fortunately, remarkable developments are emerging every day. Thanks to R&amp;D efforts, LLMs with fewer parameters now offer faster responses and improved performance compared with older, larger models.</p><p>In this paper, we explore the feasibility of generating personal knowledge graphs on-device through conversational interaction. Our approach focuses on ontology-guided knowledge extraction from prompts in the form of subject-predicate-object triples<ref type="foot" target="#foot_0">1</ref>. We investigated various methods to enable the underlying language model to comprehend a predefined ontology, ensuring effective personal knowledge-graph generation, and selected the most suitable method based on the requirements of on-device execution. Utilizing a specially designed dataset, we evaluate the effectiveness of this method, emphasizing its strengths and identifying potential areas for improvement.</p><p>The structure of this paper is as follows: Section 2 discusses various approaches, including in-context learning and fine-tuning for ontology-guided knowledge capture, and focuses on the fine-tuning approach due to its suitability for on-device execution. 
Section 3 describes the experimental setup, presenting the development framework, language model selection, and the ontology and dataset creation process. Section 4 outlines our performance evaluation framework and the test results. Finally, Section 5 concludes the paper and suggests future directions.</p></div>
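To make the triple-based capture concrete, the following is a minimal, hypothetical illustration of what capture from a single prompt might yield; the identifiers and property labels are our own assumptions, not the paper's actual vocabulary.

```python
# Illustrative sketch only: the expected output of ontology-guided
# conversational knowledge capture for one user prompt, expressed as
# subject-predicate-object triples. All identifiers are hypothetical.
prompt = "My brother Jason plays the piano."

expected_triples = [
    ("user", "brother", "jason"),   # family relation fits the ontology
    ("jason", "name", "Jason"),     # data property capturing the name
]
# The piano detail falls outside the family-relationship subset,
# so no triple is produced for it.
```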
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Ontology-Guided Symbolic Knowledge Capture</head><p>In the literature, language models have demonstrated their capability to transform unstructured text into knowledge graphs <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>. However, the process of populating a knowledge graph from user prompts in alignment with a predefined ontology has been explored only marginally <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b16">17]</ref>. Except for <ref type="bibr" target="#b16">[17]</ref>, these studies have enjoyed unconstrained processing and memory capacity. Large models with large context windows have enabled in-context learning methods relying on prompt engineering. However, on-device conversational knowledge capture is not similarly unconstrained. Given current context-window capacities, embedding an entire personal ontology into the system prompt would be unrealistic. Additionally, considering the inference speed of language models running on personal devices, the high token overhead this would introduce presents a barrier to efficient system operation.</p><p>An alternative to in-context learning involves training a language model with a predefined ontology so that the model internalizes it. There are two strategies to consider: pretraining the LLM on the ontology or fine-tuning it. This paper does not explore pretraining due to its extensive data, computational resource, energy, and time requirements. Additionally, pretraining does not offer a flexible response to ongoing changes or expansions in the ontology. 
Therefore, this paper focuses on fine-tuning as a method to train language models on personal ontologies, highlighting its advantages in feasibility and maintainability.</p><p>Fine-tuning is a process whereby a pretrained language model is further trained on a specific dataset to tailor its capabilities to a particular task. In our study, the language model is expected to understand the ontology classes and their properties, and use them to populate a knowledge graph from user prompts. The first step involves preparing a fine-tuning dataset, which includes user prompts, system prompts, and expected model responses for each concept in the ontology. This dataset is used to fine-tune the language model, which is then evaluated by testing it with new prompts to assess the effectiveness of the CKC process.</p><p>The following points highlight the key aspects of ontology fine-tuning:</p><p>• The training dataset's coverage and diversity are vital for successful fine-tuning. These characteristics greatly influence the LLM's ability to capture knowledge effectively. Details about the dataset and how it is constructed are discussed in Section 3.4.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>• The training dataset must include a variety of examples for the predefined ontology. Research related to the structure of the examples prepared for ontology concepts is detailed in Section 4. • If the language model encounters a user prompt that is not relevant to the predefined ontology concepts, it should not attempt to capture knowledge. Therefore, the dataset should also contain sufficient out-of-context samples to enable the language model to distinguish between relevant and irrelevant information for capture.</p></div>
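To illustrate the dataset structure described above (user prompt, system prompt, expected response, plus out-of-context negatives), here is a hypothetical sample record; the field names, system prompt wording, and 'know:' vocabulary are our own illustrative assumptions, not the published dataset format.

```python
# Hypothetical fine-tuning records for ontology-guided knowledge capture.
# Field names and the 'know:' prefixed properties are assumptions made
# for illustration; the actual dataset is not reproduced here.
system_prompt = (
    "Extract personal facts from the user's message as Turtle triples "
    "using the family-relationship subset of the ontology. If the "
    "message contains no relevant facts, return nothing."
)

in_context_sample = {
    "system": system_prompt,
    "user": "My sister Erin just moved in with her spouse.",
    "response": "_:user know:sister _:erin .\n_:erin know:spouse _:x .",
}

# Out-of-context sample: teaches the model when NOT to capture.
out_of_context_sample = {
    "system": system_prompt,
    "user": "What's the weather like today?",
    "response": "",
}
```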
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Experimental Setup</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Development Framework</head><p>The methods suggested in this paper have been implemented using the Apple MLX framework <ref type="bibr" target="#b17">[18]</ref>. MLX is a specialized array framework designed for machine learning applications, akin to NumPy, PyTorch, or JAX, with the distinction of being exclusive to Apple silicon. Ontology fine-tuning has been conducted using the parameter-efficient QLoRA adapters <ref type="bibr" target="#b18">[19]</ref> on our custom dataset, comprising randomly selected, non-overlapping sets of training, validation, and test samples.</p></div>
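The random, non-overlapping partition into training, validation, and test samples mentioned above can be sketched generically as follows (our own sketch, not the authors' code; the split fractions are assumptions):

```python
import random

def split_dataset(samples, train_frac=0.7, valid_frac=0.15, seed=42):
    """Randomly partition samples into non-overlapping
    train/validation/test subsets."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_valid = int(len(shuffled) * valid_frac)
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test

# The three subsets are pairwise disjoint by construction.
train, valid, test = split_dataset(range(177))
assert set(train).isdisjoint(valid) and set(valid).isdisjoint(test)
```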
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Language Model</head><p>Due to the constraint of on-device execution, our study does not use state-of-the-art, large-parameter, cloud-based language models. Instead, we opted for a relatively low-parameter model with proven effectiveness across diverse domains. Based on its performance in the Hugging Face Open LLM Leaderboard <ref type="bibr" target="#b19">[20]</ref> and its robust ecosystem, we selected Mistral-7B-Instruct-v0.2 <ref type="bibr" target="#b20">[21]</ref>, which is based on the Llama 2 <ref type="bibr" target="#b21">[22]</ref> architecture. The MLX 4-bit quantized version, with a disk size of 4.26 GB, stands out as a suitable model for many personal computers, tablets, and even new-generation smartphones.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Applied Ontology</head><p>Our study is inspired by KNOW <ref type="bibr" target="#b23">[23]</ref>, the Knowledge Navigator Ontology for the World, and utilizes it for representing personal information. KNOW was introduced as a pioneering framework designed to capture everyday knowledge to enhance language models in real-world generative AI applications such as personal AI assistants. The ontology focuses on human life, encompassing everyday concerns and significant milestones, and limits its initial scope to established human universals, including spacetime (places, events) and social dimensions (people, groups, organizations). This pragmatic approach emphasizes universality and utility, contrasting with previous works like Schema.org <ref type="bibr" target="#b24">[24]</ref> and Cyc <ref type="bibr" target="#b25">[25]</ref> by building on language models' inherent encoding of salient commonsense knowledge.</p><p>Because of the requirement that each element in the ontology be associated with a diverse set of prompt and response samples within the training dataset, our research focuses on a specific subset of the KNOW ontology. This subset concentrates on core family relationships with four ontology classes, eleven object properties, and one data property. A visual depiction of this subset is presented in Figure <ref type="figure" target="#fig_1">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Dataset</head><p>For a language model to effectively learn a predefined ontology and use it to perform knowledge extraction and capture, a robust and diverse training dataset is essential. Our paper focuses on a subset of the KNOW ontology that includes the concepts of 'person', 'name', 'sex', 'child', 'father', 'mother', 'sibling', 'sister', 'brother', 'spouse', 'partner', and 'knows'. We created 145 manually crafted user prompts along with their respective ontology responses for training. Additionally, to manage inputs that fall outside these ontology concepts, we included 32 generic user prompts in the dataset. The composition of this training dataset, which consists of 177 user prompts, is illustrated in Figure <ref type="figure" target="#fig_2">2</ref>. Concepts not associated with the ontology are labeled 'none' in the figure. Since each sample prompt usually includes multiple concepts, the chart shows more concept occurrences than prompts.</p><p>For the test set to be used in the evaluation, we manually created 100 test prompts and their expected responses about family relations based on the television series 'The Waltons'. The Waltons is a classic American television series that aired from 1972 to 1981, depicting the life and challenges of a close-knit family in rural Virginia during the Great Depression and World War II. The show focuses on the daily experiences, values, and growth of the Walton family, emphasizing themes of love, perseverance, and community. The composition of this test dataset, which consists of 100 user prompts, is also illustrated in Figure <ref type="figure" target="#fig_2">2</ref>. The Turtle format was chosen for serializing the ontology population in our research because of its straightforward structure, readability, and prevalent use in existing pretraining datasets for LLMs.</p></div>
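As an illustration of the Turtle serialization choice, a captured set of triples can be emitted as Turtle statements; this is a minimal sketch with hypothetical prefixed names (the real KNOW IRIs and the prefix declarations are omitted):

```python
def to_turtle(triples):
    """Serialize (subject, predicate, object) triples as Turtle
    statements using prefixed names. A complete Turtle document would
    also declare the prefixes; they are omitted in this sketch."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in triples)

# Hypothetical capture for a test prompt such as
# "John-Boy is Olivia's son." The property names are assumptions.
captured = [
    ("p:john_boy", "know:mother", "p:olivia"),
    ("p:olivia", "know:child", "p:john_boy"),
]
print(to_turtle(captured))
# p:john_boy know:mother p:olivia .
# p:olivia know:child p:john_boy .
```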
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Performance Evaluation</head><p>Our research focuses on fine-tuning an on-device language model with predefined ontology concepts and capturing knowledge from user prompts that fit the ontology. This section details the fine-tuning approaches and the relevant performance evaluations.</p><p>One of our main objectives is to comprehend the training dynamics of ontology-guided knowledge capture. Consequently, we do not emphasize the optimization of fine-tuning hyperparameters. For a fair performance evaluation, all tests in this section were conducted using the default QLoRA hyperparameters specified in the MLX framework. To ensure consistency in the test results, each training session was configured to run for 18 epochs.</p><p>First, we investigated whether training data associated with each ontology concept is necessary for a successful fine-tuning process. For instance, we evaluated the ability of a language model, trained with only one of two semantically related concepts (e.g., 'brother'), to capture knowledge related to the concept that was not included in the training data (e.g., 'sister'). During the evaluations, the generated prompt responses were processed triple by triple and compared against the ground truth established for the test set. The findings are presented in Figure <ref type="figure" target="#fig_3">3</ref>. As illustrated in Figure <ref type="figure" target="#fig_3">3</ref>, a CKC QLoRA adapter trained exclusively on the 'brother' concept shows a significant decline in performance when tested with the 'sister' relation prompts. In contrast, an adapter trained on both the 'brother' and 'sister' concepts demonstrates excellent performance on the same test set. 
Although the language model's pretraining is sufficient to distinguish between the 'brother' and 'sister' concepts, our tests reveal that this is inadequate for effective knowledge capture.</p><p>The second issue we examined was whether it is necessary for each concept to be used in conjunction with other concepts in the fine-tuning dataset. To provide a concrete example, we assessed the performance of fine-tuning using samples that included only the 'brother' or only the 'sister' relationship when prompted with contexts where both the 'brother' and 'sister' concepts co-occur. We compared the performance of a knowledge capture adapter fine-tuned with samples containing only the 'brother' and 'sister' concepts against another adapter trained with the same dataset augmented with extra samples in which the 'brother' and 'sister' concepts appear together. The results of this comparison are presented in Figure <ref type="figure" target="#fig_4">4</ref>. Similarly, in Figure <ref type="figure" target="#fig_5">5</ref>, we present the knowledge capture performance of two different adapters, one trained with only 'father-daughter' and 'mother-son' prompts and the other trained with all 'father-daughter', 'father-son', 'mother-daughter', and 'mother-son' prompts. Both Figure <ref type="figure" target="#fig_4">4</ref> and Figure <ref type="figure" target="#fig_5">5</ref> demonstrate that having cross relations between ontology concepts in the training set increases knowledge capture performance. In the future, we aim to repeat these tests with various ontologies and language models to obtain more general results regarding this approach.</p><p>Another aspect we researched in our study was the impact of the ontology size used in training on knowledge capture performance. In our previous tests, we trained the language model on specific ontology concepts and then examined the knowledge capture performance of the resulting adapter. 
At this stage, we compared the performance of a knowledge capture adapter trained solely on the ontology concepts present in the test set with an adapter trained on the entire core-family ontology. Our objective was to investigate whether adapters experience any degradation in performance as they are trained on more ontology concepts. In other words, we observed the scalability of ontology learning for the on-device language model we used. The results for the two different groups are presented in Figures <ref type="figure" target="#fig_6">6 and 7</ref>. As seen from the figures, as the number of concepts in ontology training increases, there is a slight decrease in the performance of the knowledge capture adapter. However, the observed degradation within the scope of the tests does not appear to lead to a major scalability problem.</p><p>In the subsequent phase, we explored the optimal number of training epochs required to achieve maximum performance on the training set. For this analysis, we continued using the default MLX QLoRA hyperparameters, but trained the QLoRA adapter over various epoch lengths. We then conducted evaluations on the test set using each trained adapter, and the findings are presented in Figure <ref type="figure" target="#fig_7">8</ref>. As depicted in Figure <ref type="figure" target="#fig_7">8</ref>, the success rate of ontology population increases with longer training. However, considering resource usage and energy consumption, we observe that 18 epochs is sufficient for fine-tuning.</p></div>
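The triple-by-triple comparison against ground truth used throughout this section amounts to computing set-based precision, recall, and F1 over captured triples; a generic sketch of such scoring (not the authors' evaluation code) is:

```python
def triple_scores(predicted, gold):
    """Precision, recall, and F1 over sets of (s, p, o) triples."""
    pred, ref = set(predicted), set(gold)
    true_positives = len(pred.intersection(ref))
    precision = true_positives / len(pred) if pred else 0.0
    recall = true_positives / len(ref) if ref else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# One spurious triple out of two predictions: precision 0.5, recall 1.0.
p, r, f1 = triple_scores(
    predicted=[("a", "brother", "b"), ("a", "sister", "c")],
    gold=[("a", "brother", "b")],
)
```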
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>In this paper, we first explored on-device ontology-guided conversational knowledge capture and its importance in the generative AI domain. We then discussed the ontology approach and how to train an on-device LLM with ontology concepts. The language model was fine-tuned using a custom dataset focused on core family relationships, and we evaluated the model's ability to learn personal ontology concepts.</p><p>Our findings indicate that fine-tuning is particularly effective for training an on-device language model with ontology concepts for conversational knowledge capture. In our future work, we aim to integrate the generated knowledge graph with the language model for knowledge utilization, combining the strengths of the neural and symbolic AI approaches.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A visual representation of the ontology design used in this paper</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Occurrences of ontology concepts in the training and test datasets</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: CKC performance (precision, recall, and f1-score) of 'sister' relation prompts for two differently fine-tuned QLoRA adapters</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: CKC performance (precision, recall, and f1-score) of 'brother-sister' relation prompts for two differently fine-tuned QLoRA adapters</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: CKC performance (precision, recall, and f1-score) of 'father-son' and 'mother-daughter' relation prompts for two differently fine-tuned QLoRA adapters</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: CKC performance (precision, recall, and f1-score) of 'father-daughter', 'father-son', 'motherdaughter', and 'mother-son' relation prompts for two differently fine-tuned QLoRA adapters</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Ontology population performance (precision, recall, and f1-score) of the core-family ontology for various epochs</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.w3.org/TR/rdf12-concepts/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Sheth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Roy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gaur</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2305.00813</idno>
		<ptr target="http://arxiv.org/abs/2305.00813" />
		<title level="m">Neurosymbolic AI - Why, What, and How</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Neurosymbolic AI for Reasoning over Knowledge Graphs: A Survey</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">N</forename><surname>Delong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">F</forename><surname>Mir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Fleuriot</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2302.07200</idno>
		<ptr target="http://arxiv.org/abs/2302.07200" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Describing and Organizing Semantic Web and Machine Learning Systems in the SWeMLS-KG</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">J</forename><surname>Ekaputra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Llugiqi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sabou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ekelhart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Paulheim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Breit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Revenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Waltersdorfer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">E</forename><surname>Farfar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Auer</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2303.15113</idno>
		<ptr target="http://arxiv.org/abs/2303.15113" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">L.-P</forename><surname>Meyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Stadler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Frey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Radtke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Junghanns</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Meissner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Dziwis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Bulert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Martin</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2307.06917</idno>
		<ptr target="http://arxiv.org/abs/2307.06917" />
		<title level="m">LLM-assisted Knowledge Graph Engineering: Experiments with ChatGPT</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Marin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Di Pietro</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2107.03832</idno>
		<idno type="arXiv">arXiv:2107.03832</idno>
		<ptr target="http://arxiv.org/abs/2107.03832" />
		<title level="m">Serverless Computing: A Security Perspective</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">A Performance Evaluation of a Quantized Large Language Model on Various Smartphones</title>
		<author>
			<persName><forename type="first">T</forename><surname>Çöplü</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Loedi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bendiken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Makohin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Bouw</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Cobb</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2312.12472</idno>
		<ptr target="https://arxiv.org/abs/2312.12472" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">CycleGT: Unsupervised Graph-to-Text and Text-to-Graph Generation via Cycle Training</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Qiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wipf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2006.04702</idno>
		<idno type="arXiv">arXiv:2006.04702</idno>
		<ptr target="http://arxiv.org/abs/2006.04702" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Entity-Relation Extraction as Multi-Turn Question Answering</title>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.1905.05529</idno>
		<idno type="arXiv">arXiv:1905.05529</idno>
		<ptr target="http://arxiv.org/abs/1905.05529" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">INFINITY: A Simple Yet Effective Unsupervised Framework for Graph-Text Mutual Conversion</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2209.10754</idno>
		<idno type="arXiv">arXiv:2209.10754</idno>
		<ptr target="http://arxiv.org/abs/2209.10754" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Ontology Guided Information Extraction from Unstructured Text</title>
		<author>
			<persName><forename type="first">R</forename><surname>Anantharangachar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ramani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rajagopalan</surname></persName>
		</author>
		<idno type="DOI">10.5121/ijwest.2013.4102</idno>
		<idno type="arXiv">arXiv:1302.1335</idno>
		<ptr target="http://arxiv.org/abs/1302.1335" />
	</analytic>
	<monogr>
		<title level="j">International journal of Web &amp; Semantic Technology</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="19" to="36" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Prompt-Time Symbolic Knowledge Capture with Large Language Models</title>
		<author>
			<persName><forename type="first">T</forename><surname>Çöplü</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bendiken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Skomorokhov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bateiko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Cobb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Bouw</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2402.00414</idno>
		<idno type="arXiv">arXiv:2402.00414</idno>
		<ptr target="http://arxiv.org/abs/2402.00414" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">LLMs4OL: Large Language Models for Ontology Learning</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">B</forename><surname>Giglou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Souza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Auer</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2307.16648</idno>
		<idno type="arXiv">arXiv:2307.16648</idno>
		<ptr target="http://arxiv.org/abs/2307.16648" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Ontology engineering with Large Language Models</title>
		<author>
			<persName><forename type="first">P</forename><surname>Mateiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Groza</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2307.16699</idno>
		<ptr target="http://arxiv.org/abs/2307.16699" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">PIVOINE: Instruction Tuning for Open-world Information Extraction</title>
		<author>
			<persName><forename type="first">K</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2305.14898</idno>
		<idno type="arXiv">arXiv:2305.14898</idno>
		<ptr target="http://arxiv.org/abs/2305.14898" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Text2KGBench: A Benchmark for Ontology-Driven Knowledge Graph Generation from Text</title>
		<author>
			<persName><forename type="first">N</forename><surname>Mihindukulasooriya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tiwari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">F</forename><surname>Enguix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lata</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2308.02357</idno>
		<idno type="arXiv">arXiv:2308.02357</idno>
		<ptr target="http://arxiv.org/abs/2308.02357" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Towards Ontology Construction with Language Models</title>
		<author>
			<persName><forename type="first">M</forename><surname>Funk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hosemann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Jung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lutz</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2309.09898</idno>
		<idno type="arXiv">arXiv:2309.09898</idno>
		<ptr target="http://arxiv.org/abs/2309.09898" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Prompt-Time Ontology-Driven Symbolic Knowledge Capture with Large Language Models</title>
		<author>
			<persName><forename type="first">T</forename><surname>Çöplü</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bendiken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Skomorokhov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bateiko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Cobb</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2405.14012</idno>
		<ptr target="https://arxiv.org/abs/2405.14012" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">MLX: Efficient and flexible machine learning on Apple silicon</title>
		<author>
			<persName><forename type="first">A</forename><surname>Hannun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Digani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Katharopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Collobert</surname></persName>
		</author>
		<ptr target="https://github.com/ml-explore" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">QLoRA: Efficient Finetuning of Quantized LLMs</title>
		<author>
			<persName><forename type="first">T</forename><surname>Dettmers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pagnoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Holtzman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2305.14314</idno>
		<idno type="arXiv">arXiv:2305.14314</idno>
		<ptr target="http://arxiv.org/abs/2305.14314" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">E</forename><surname>Beeching</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Fourrier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Habib</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Lambert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Rajani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sanseviero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Tunstall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wolf</surname></persName>
		</author>
		<ptr target="https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard" />
		<title level="m">Open LLM Leaderboard</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Mistral 7B</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Q</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sablayrolles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mensch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bamford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Chaplot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Casas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bressand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lengyel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lample</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Saulnier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">R</forename><surname>Lavaud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-A</forename><surname>Lachaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Stock</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">L</forename><surname>Scao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lavril</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lacroix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">E</forename><surname>Sayed</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2310.06825</idno>
		<idno type="arXiv">arXiv:2310.06825</idno>
		<ptr target="http://arxiv.org/abs/2310.06825" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title/>
		<author>
			<persName><forename type="first">H</forename><surname>Touvron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Stone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Albert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Almahairi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Babaei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Bashlykov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Batra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bhargava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bhosale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bikel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Blecher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Ferrer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Cucurull</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Esiobu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fernandes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Fuller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Goswami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hartshorn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hosseini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Inan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kardas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kerkez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Khabsa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kloumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Korenev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S</forename><surname>Koura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-A</forename><surname>Lachaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lavril</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Liskovich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Mao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Martinet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mihaylov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Mishra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Molybog</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Nie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Poulton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Reizenstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rungta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Saladi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schelten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Subramanian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><forename type="middle">E</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Taylor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Williams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">X</forename><surname>Kuan</surname></persName>
		</author>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Zarov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kambadur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Narang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Stojnic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Edunov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Scialom</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2307.09288</idno>
		<title level="m">Llama 2: Open foundation and fine-tuned chat models</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">KNOW: A Real-World Ontology for Knowledge Capture with Large Language Models</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bendiken</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2405.19877</idno>
		<ptr target="https://arxiv.org/abs/2405.19877" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Schema.org: evolution of structured data on the web</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">V</forename><surname>Guha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Brickley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Macbeth</surname></persName>
		</author>
		<idno type="DOI">10.1145/2844544</idno>
		<ptr target="https://doi.org/10.1145/2844544" />
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="44" to="51" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<title level="m" type="main">Building Large Knowledge-Based Systems; Representation and Inference in the Cyc Project</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">B</forename><surname>Lenat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">V</forename><surname>Guha</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1989">1989</date>
			<publisher>Addison-Wesley Longman Publishing Co., Inc</publisher>
			<pubPlace>USA</pubPlace>
		</imprint>
	</monogr>
	<note>1st ed. Online resources for this paper are available at the corresponding GitHub repository: https://github.com/HaltiaAI/paper-OGODCKC</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
