<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">LLMs for the Engineering of a Parkinson Disease Monitoring and Alerting Ontology</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Georgios</forename><surname>Bouchouras</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Dept. of Cultural Technology and Communication</orgName>
								<orgName type="laboratory">Intelligent Systems Lab</orgName>
								<orgName type="institution">University of the Aegean</orgName>
								<address>
									<postCode>81100</postCode>
									<settlement>Mytilene</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Pavlos</forename><surname>Bitilis</surname></persName>
							<email>pavlos.bitilis@aegean.gr</email>
							<affiliation key="aff0">
								<orgName type="department">Dept. of Cultural Technology and Communication</orgName>
								<orgName type="laboratory">Intelligent Systems Lab</orgName>
								<orgName type="institution">University of the Aegean</orgName>
								<address>
									<postCode>81100</postCode>
									<settlement>Mytilene</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Konstantinos</forename><surname>Kotis</surname></persName>
							<email>kotis@aegean.gr</email>
							<affiliation key="aff0">
								<orgName type="department">Dept. of Cultural Technology and Communication</orgName>
								<orgName type="laboratory">Intelligent Systems Lab</orgName>
								<orgName type="institution">University of the Aegean</orgName>
								<address>
									<postCode>81100</postCode>
									<settlement>Mytilene</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">George</forename><forename type="middle">A</forename><surname>Vouros</surname></persName>
							<email>georgev@unipi.gr</email>
							<affiliation key="aff1">
								<orgName type="department">Dept. Of Digital Systems</orgName>
								<orgName type="laboratory">Artificial Intelligence Lab</orgName>
								<orgName type="institution">University of Piraeus</orgName>
								<address>
									<postCode>18534</postCode>
									<settlement>Piraeus</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">LLMs for the Engineering of a Parkinson Disease Monitoring and Alerting Ontology</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">3B41D28B91FF1AC728A66A352C846774</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Ontology Engineering</term>
					<term>LLMs</term>
					<term>Parkinson Disease</term>
					<term>Human-LLM teaming</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper investigates the integration of Large Language Models (LLMs) into the engineering of a Parkinson's Disease (PD) monitoring and alerting ontology. The focus is on an ontology engineering methodology that combines the capabilities of LLMs with human expertise to develop more robust and comprehensive domain ontologies, faster than humans can alone. Evaluating models such as ChatGPT-3.5, ChatGPT-4, Gemini, and Llama2, this study explores various LLM-based ontology engineering methods. The findings reveal that the proposed hybrid approach (involving both LLMs and humans), namely X-HCOME, consistently excelled in class generation and F1 score, indicating its efficiency in creating valid and comprehensive ontologies. The study underscores the potential of combining LLMs with human intelligence to enrich PD domain knowledge and enhance expert-generated PD ontologies. Overall, the presented approach exemplifies a promising collaboration between machine capabilities and human expertise in developing ontologies for complex domains.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The integration of LLMs (Large Language Models) with ontological frameworks is gaining prominence in the fields of Knowledge Representation (KR) and Artificial Intelligence (AI) <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. As KR methods become more demanding, there is a noticeable trend towards the use of LLMs for the construction, refinement, and mapping of ontologies, tasks that have traditionally been performed and supervised by human experts with in-depth knowledge of the domain and of ontology engineering <ref type="bibr" target="#b2">[3]</ref>. Since LLMs are trained on big data, they are making expert-level insights across domains more accessible and cost-effective. Moreover, while LLMs are becoming more effective at engineering ontologies, their capabilities are significantly enhanced in the era of Neurosymbolic AI, i.e., the combination of the deep and varied knowledge of statistical AI with the semantic reasoning of symbolic AI <ref type="bibr" target="#b3">[4]</ref>.</p><p>Neurosymbolic AI is particularly significant in addressing complex health problems such as the monitoring of patients with Parkinson's Disease (PD), the second most common neurodegenerative disease globally, and the alerting of patients and doctors <ref type="bibr" target="#b4">[5]</ref>. Despite extensive research, the nature of PD remains elusive, and current treatments offer only partial effectiveness <ref type="bibr" target="#b5">[6]</ref>. In response, related ontologies have been developed to enhance understanding, monitoring and alerting, and treatment approaches. 
Specifically, the Wear4PDmove ontology <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref> has been recently developed with the aim of integrating heterogeneous sensor (movement) and personal health record (PHR) data, as a knowledge model used to interface/connect patients and doctors with smart devices and health applications. This ontology aims to semantically integrate heterogeneous data sources, such as dynamic/stream data from wearables and static/historic data from personal health records, to represent personal health knowledge in the form of a Personal Health Knowledge Graph (PHKG). It also supports health applications' reasoning capabilities for high-level event recognition in PD monitoring, such as identifying events like 'missing dose' or 'patient fall' <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref>. This and associated ontologies facilitate the critical integration of AI-driven tools and domain-specific knowledge, making it easier to integrate and reason with health data and to promote creative PD treatment approaches.</p><p>PD monitoring and the alerting of patients require flexible KR methods to effectively adapt to patients' health changes. LLMs have shown impressive abilities in handling vast quantities of data and producing valuable insights from their near real-time analysis. Yet, their use in monitoring PD and alerting patients is limited by factors such as inadequate reasoning abilities and reliance on specialized health knowledge. Health is a complicated domain, with distinct contexts, subtle meaning variations, and disease-specific vocabularies. To effectively capture and express this complex knowledge, it is necessary to fine-tune and train LLMs specifically for the domain, which can demand a significant amount of resources that are not always available, or that health/medical experts are not willing to provide for many different reasons. 
Also, healthcare ontologies now adhere to several standards and forms. The technical challenge, however, lies in integrating and reconciling information from many heterogeneous sources into a coherent ontology, while also ensuring interoperability. To achieve an efficient ontology development process within an ontology engineering methodology (OEM), LLMs must be able to navigate these disparities efficiently. Existing research on PD has already utilized ontologies <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b9">10]</ref>. However, maintaining these ontologies in the rapidly changing field of PD calls for constant effort and resources. Failure to update/refine an ontology may result in outdated information.</p><p>This study aims to investigate the possibilities of LLM-based collaborative ontology engineering (OE) for improving the speed and accuracy of PD knowledge representation. LLMs can efficiently analyze large volumes of health-related data and recognize patterns and semantic connections between them <ref type="bibr" target="#b10">[11]</ref>. Human specialists contribute to ensuring the precision and domain-specific significance of the acquired knowledge. Working together, LLMs and humans can collaboratively engineer PD-related ontologies that efficiently support the monitoring and alerting of patients and doctors.</p><p>This paper presents experiments with LLMs for PD ontology engineering. More importantly, in this paper, an extension of a human-centered collaborative OEM (HCOME) <ref type="bibr" target="#b11">[12]</ref> with LLM-based tasks is proposed and assessed (namely X-HCOME). The aim is to provide a novel OEM that includes both humans and LLMs in the engineering of ontologies, with a focus on speed, conceptualization, and human assistance. The final product of this work will be an OEM more effective in knowledge representation than those relying solely on humans or LLMs. 
The paper focuses on LLM-based collaborative OE to create comprehensive PD ontologies and discusses limitations identified from the experimental results.</p><p>The organization of this paper is as follows: Section 2 presents related work on integrating LLMs into OE; Section 3 describes the proposed research methodology; Section 4 presents the conducted experiment; Section 5 presents further experimentation; and finally, Section 6 discusses the results and draws the conclusions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>Oksannen et al. (<ref type="formula">2021</ref>) developed an approach to derive product ontologies from textual reviews using BERT models. Their approach, which required minimal manual annotation, demonstrates increased precision and recall in comparison to established methods such as Text2Onto and COMET, signifying a noteworthy advancement in automatic ontology extraction <ref type="bibr" target="#b12">[13]</ref>. BERTMap, an ontology mapping tool based on Bidirectional Encoder Representations from Transformers by <ref type="bibr" target="#b13">He et al. (2022)</ref>, demonstrates the effectiveness of LLMs by excelling at ontology mapping (OM), especially in unsupervised and semi-supervised scenarios, surpassing current OM systems. It demonstrates the precision of LLMs in matching entities between knowledge graphs <ref type="bibr" target="#b13">[14]</ref>. <ref type="bibr" target="#b14">Ning et al. (2022)</ref> introduce a technique to extract factual information from LLMs by creating prompts for pairs of subjects and relations. They utilize an approach that incorporates pre-trained LLMs with prompt templates derived from web material and personal expertise. The authors identify effective prompts through a parameter selection technique and filter the generated entities to pinpoint reliable choices. They stress the significance of investigating parameter combinations, testing LLMs, and expanding research into different domains <ref type="bibr" target="#b14">[15]</ref>.</p><p>Lippolis et al. concentrate on harmonizing entities across ArtGraph and Wikidata. By combining traditional querying with LLMs, they achieve high accuracy in entity alignment, showcasing the efficiency of LLMs in filling knowledge gaps in intricate databases <ref type="bibr" target="#b15">[16]</ref>. <ref type="bibr" target="#b16">Funk et al. (2023)</ref> investigate the capability of ChatGPT3.5 in creating concept hierarchies in several fields. Their method decreases mistakes and generates appropriate concept names, demonstrating the effectiveness of LLMs in the semi-automatic creation of ontologies. Studies on GPT4's abilities in structured intelligence within ontologies indicate its potential for groundbreaking progress, and emphasize the importance of implementing controlled LLM integration in business environments through a collaborative framework <ref type="bibr" target="#b16">[17]</ref>. <ref type="bibr" target="#b17">Biester et al. (2023)</ref> develop a technique that utilizes prompt ensembles to improve knowledge base development. When applied to models such as ChatGPT and Google BARD, they demonstrate notable enhancements in precision, recall, and F1 score, highlighting the effectiveness of LLMs in improving knowledge bases <ref type="bibr" target="#b17">[18]</ref>. Mountantonakis and Tzitzikas (2023) devise a technique to verify ChatGPT information by utilizing RDF Knowledge Graphs. They confirm the accuracy of 85.3% of ChatGPT facts, highlighting the significance of verification services in maintaining data precision <ref type="bibr" target="#b18">[19]</ref>. <ref type="bibr" target="#b19">Pan et al. (2023)</ref> suggest combining LLMs with KGs to improve reasoning skills. Their frameworks attempt to combine the benefits of both LLMs and KGs, resulting in enhanced data processing and reasoning abilities <ref type="bibr" target="#b19">[20]</ref>. Joachimiac et al. (2023) used the Spindoctor approach, which employed LLMs to summarize gene sets, demonstrating the versatility of LLMs in analyzing intricate biological information. Their method showcased the effectiveness of LLMs in summarizing text specifically related to gene ontology <ref type="bibr" target="#b20">[21]</ref>. The SPIRES approach developed by <ref type="bibr" target="#b21">Caufield et al. 
(2023)</ref> demonstrates the adaptability of LLMs in extracting information from unstructured texts in many fields. This zero-shot learning method does not require any model adjustment, demonstrating the wide range of applications of LLMs across disciplines <ref type="bibr" target="#b21">[22]</ref>. <ref type="bibr" target="#b22">Mateiu et al. (2023)</ref> showcase the application of GPT3 in converting natural language statements into ontology axioms. Their methodology facilitates ontology creation, enhancing accessibility and efficiency and demonstrating the effectiveness of LLMs in streamlining intricate ontology engineering processes <ref type="bibr" target="#b22">[23]</ref>.</p><p>However, the aforementioned studies primarily concentrate on the capabilities of LLMs in isolation or in comparison with traditional methods, often emphasizing automated or semi-automated processes. What remains less explored, and is thus the focus of the current study, is the symbiotic integration of human expertise and LLMs in the OEM process. This novel approach aims to harness the speed and computational efficiency of LLMs while simultaneously capitalizing on the complex understanding and conceptualization skills of human experts. Furthermore, it is reasonable to believe that different LLMs have strengths and weaknesses that can help researchers and practitioners choose the best models for use in real-world entity resolution <ref type="bibr" target="#b23">[24]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Research Methodology</head><p>The forthcoming section presents an experiment encompassing two distinct phases, focusing on the development and assessment of ontologies, with a special emphasis on classes. The initial phase involves generating an ontology for PD monitoring and alerting, mainly powered by the autonomous capabilities of LLMs. This process utilizes both 'One Shot' (OS) and 'Chain of Thought' (CoT) techniques. The OS method involves presenting a model with a single prompt and expecting it to produce a suitable response based only on this input. In a one-shot situation, the model is not provided with several examples for learning and must complete the task with little context. This is a straightforward approach where the model uses its pre-trained knowledge to infer the most likely answer. For the purposes of this paper, CoT refers to a methodological approach where the OS prompt is segmented into two sequential prompts. This segmentation allows for a structured progression in the reasoning process, whereby each prompt is strategically designed to focus on a specific element of the overall task. By employing sequential prompting, we direct the language model to tackle each segment of the problem individually, thereby facilitating a cumulative build-up of information and reasoning. Subsequently, in the second phase, a hybrid OEM is established, which integrates human expertise with the abilities of LLMs. This collaboration aims to elevate the quality and practicality of the ontology within the PD monitoring and alerting framework. Figure <ref type="figure" target="#fig_0">1</ref> depicts a flowchart that outlines this two-phase experimental process. Initially, four LLMs independently develop an ontology with minimal human input (phase 1). The process then evolves into a more collaborative approach (human and LLMs) with the X-HCOME OEM (phase 2). 
The resulting ontologies are then compared against a gold standard ontology using various metrics. The process is further customized (further experimentation) through expert evaluations and refinement of the gold standard ontology. To fulfill the study's objective, the following will be conducted: a) an examination of the LLMs attempting to construct ontologies with minimal human intervention, and b) an examination of the X-HCOME methodology in OE and its evaluation by comparing the quality of LLM-generated ontologies with human-generated ones. The X-HCOME methodology is an extension of the Human-Centered Collaborative Ontology Engineering methodology (HCOME) <ref type="bibr" target="#b11">[12]</ref>. This extension concerns the inclusion of LLM-based tasks (along with the human-centered ones) in the OE lifecycle. This study aims to show that ontologies that are collaboratively engineered by humans (knowledge engineers, knowledge workers, domain experts, etc.) and machines (LLMs) are of higher quality than ontologies that are created by humans or LLMs alone. A secondary goal is to support the hypothesis that, working along with LLMs, humans can complete ontology engineering tasks (and consequently, the OE lifecycle) much faster, i.e., in hours instead of several days or weeks. The proposed research methodology is driven by two specific hypotheses, which drive the experimental phases carried out to assess the efficacy of the proposed approach. Hypothesis 1: LLMs, when prompted with domain-specific queries, can autonomously develop a coherent and comprehensive ontology, as in the case of the PD monitoring and alerting ontology. 
LLMs have the ability to extract domain knowledge efficiently from their extensive data repositories and to construct ontologies using different prompts engineered by the human user of the LLM.</p><p>• This hypothesis is tested in Phase 1 of our experiments, where LLMs are tasked with creating a PD patient monitoring and alerting ontology from scratch, using domain-specific prompts. The effectiveness of LLMs in developing an accurate and relevant ontology is measured against a gold standard, expert-generated ontology.</p><p>In this study, the Wear4PDmove ontology <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref> is utilized as the gold standard ontology, and it will be referred to as such throughout the remainder of the study.</p><p>o Phase 1: Initiating LLMs to develop the ontology. During the initial phase of the experiments, the LLMs independently (with no human involvement) reconstruct the Wear4PDmove ontology from scratch. This phase comprises the following steps:</p><p>1. LLMs construct an ontology in Turtle format. The ontology represents various aspects of PD patient care, including monitoring, alerting, patients' health records, and healthcare team coordination. 2. Validate the ontology by assessing its accuracy and coherence with the OOPS! <ref type="foot" target="#foot_1">3</ref> and Protégé<ref type="foot" target="#foot_2">4</ref> tools (Pellet). 3. Use metrics such as Precision, Recall, and the F1 score (Table <ref type="table" target="#tab_0">1</ref>) to compare the LLM-generated ontology with the gold standard ontology created by human experts. </p></div>
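The Phase 1 scoring above compares LLM-generated classes against the gold standard using both exact and similarity matching. The following is a minimal sketch of how such class-level Precision/Recall/F1 scoring could be computed; the name normalization and the `difflib` similarity threshold are illustrative assumptions, not the paper's actual matching procedure.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lower-case and strip separators so 'PatientFall' ~ 'patient_fall'."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def match(generated: str, gold: str, threshold: float = 0.85) -> bool:
    """Exact match after normalization, or string similarity above threshold."""
    a, b = normalize(generated), normalize(gold)
    return a == b or SequenceMatcher(None, a, b).ratio() >= threshold

def evaluate(generated: set, gold: set, threshold: float = 0.85):
    """Class-level Precision, Recall, F1 against a gold standard class set."""
    tp = sum(1 for g in generated if any(match(g, c, threshold) for c in gold))
    fp = len(generated) - tp
    fn = sum(1 for c in gold if not any(match(g, c, threshold) for g in generated))
    precision = tp / (tp + fp) if generated else 0.0
    recall = tp / (tp + fn) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, `evaluate({"PatientFall", "Tremor"}, {"patient_fall", "MissedDose", "Tremor"})` counts two true positives and one missed gold class, giving Precision 1.0 and Recall 2/3.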
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Hypothesis 2:</head><p>The combination of human expertise and LLM capabilities enhances the quality and applicability of the developed ontology, as in the case of the PD monitoring and alerting ontology.</p><p>• This hypothesis is related to Phase 2 experimentation, where the X-HCOME methodology is deployed. It assesses how the collaboration between humans and LLMs contributes to refining and validating the ontology, ensuring its relevance and accuracy, e.g., in the case of PD patient monitoring and alerting. o Phase 2. The X-HCOME methodology presented in this paper involves a number of steps assigned to either human experts or LLMs in an alternating manner during the OE process. These steps are:</p><p>1. (Human): Define prompts and provide LLMs with the specified data. § Define the aim and scope of the ontology: Explain the reasons for its development and the depth of the information it aims to encompass. § Ontology requirements: Enumerate the necessary knowledge that must be represented and explain its significance. § Integrate data from PD cases. This data was specifically requested from the LLM to give a full and accurate picture of the condition (i.e., to make sure that PD tremor is properly represented in the ontology). § Formulate specific questions (competency questions) in natural language that the ontology should be able to answer, as defined by knowledge workers. 2. (LLM): Construct a domain ontology using the input provided previously, in a specific syntax, e.g., Turtle. This is a fully automated task performed by the LLM, asking it to act as an ontology engineer and a domain expert. 3. (Human): Compare the LLM-generated ontology with existing gold standard (or widely accepted) ontologies. This is a human-based comparison performed either manually or assisted by ontology alignment/mapping tools, e.g., LogMap <ref type="bibr" target="#b24">[25]</ref>. 4. 
(LLM): Perform a machine-based comparison of the LLM-generated ontology against the gold standard ontology. This is a fully automated comparison of the two ontologies, asking the LLM to act as an ontology engineer using an OM tool such as LogMap. 5. (Human): Develop a revised domain ontology by combining an existing ontology with the one generated by the LLM. 6. (LLM): Repeat step 4 (LLM-based evaluation of the developed ontology). 7. (Human): Evaluate the revised/refined ontology using OE tools. This step includes a comprehensive assessment of the engineered ontology to confirm that it fulfills the particular requirements and attains the intended level of validity.</p></div>
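The alternating human/LLM steps above can be sketched as one round of orchestration code. This is a hypothetical illustration only: `ask_llm` and `human_review` are stand-ins for a chat-completion API call and an expert's manual merge, and the prompt wording is invented; X-HCOME itself prescribes no specific tooling.

```python
def x_hcome_round(ask_llm, spec: str, gold_ttl: str, human_review) -> str:
    """One iteration of the alternating human/LLM X-HCOME workflow (sketch)."""
    # Step 1 (Human): aim/scope, requirements, PD case data, and competency
    # questions are assumed to be bundled into `spec` by the knowledge workers.
    # Step 2 (LLM): construct a Turtle ontology from the specification.
    draft_ttl = ask_llm(
        "Act as an ontology engineer and PD domain expert. "
        "Produce a domain ontology in Turtle syntax for:\n" + spec)
    # Steps 3-4 (Human + LLM): compare the draft against the gold standard.
    diff_report = ask_llm(
        "Act as an ontology engineer using an ontology-matching tool. "
        "Compare these two Turtle ontologies and list unmatched classes:\n"
        f"--- generated ---\n{draft_ttl}\n--- gold ---\n{gold_ttl}")
    # Steps 5-7 (Human, LLM, Human): the expert merges/refines the draft using
    # the comparison report, then re-evaluation and tool-based validation follow.
    revised_ttl = human_review(draft_ttl, diff_report)
    return revised_ttl
```

In practice the human steps would be performed in an ontology editor such as Protégé rather than in code; the sketch only shows how the responsibilities alternate.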
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Methodology Assessment Through Experiment</head><p>The results described in this section, supported by supplementary material placed in a GitHub repository<ref type="foot" target="#foot_3">5</ref>, focus on the complex process of creating ontologies for monitoring and alerting patients in PD. The conducted experimentation progresses through the two distinct phases presented in Section 3. This experiment evaluates the proposed research methodology by comparing the ontologies generated in the experiment with the gold standard ontology. It is essential to clarify that the metrics presented in this paper focus solely on the generated ontological classes. The validation involves both exact matching, where generated classes corresponded to entities in the gold standard ontology, and similarity matching, where classes were considered correct if they were semantically similar to the gold standard classes. This dual approach ensures a comprehensive evaluation of the LLMs' performance, capturing both direct accuracies and contextually appropriate approximations. While our study did include calculations for object properties, due to space limitations they were not included in this paper. Having said that, the results obtained for object properties were less than optimal, as evidenced by the low F1 scores presented in the GitHub repository<ref type="foot" target="#foot_4">6</ref>. Consistency of ontological class definitions and syntactic correctness were observed in all LLM- and hybrid-generated ontologies, apart from the ones generated by Llama2 (OS, CoT and X-HCOME)<ref type="foot" target="#foot_5">7</ref>. Llama2-generated ontologies included both syntactic errors and inconsistent definitions, and thus Llama2 failed to generate a valid ontology. 
Also, all the developed ontologies were validated with OOPS!, which identified only one minor pitfall (pitfall P36-URI, file extension) during the experimental process 7 .</p><p>Phase 1 experimentation. LLMs are initially given prompts with two methods. One-shot prompting (OS): with this method, the LLMs were given a single, clear prompt that stated the aim and scope of the gold standard ontology without any additional information or background. The goal was to test the LLMs' initial response effectiveness by generating an accurate and relevant ontology from a single standalone prompt. Along with this test, a focus on minimal human effort was maintained.</p><p>The following paragraph provides an example of an OS prompt: The first prompt covers the role and the aim and scope of the ontology and is crucial as it sets the foundation for the ontology. The second prompt deals with the processing and utilization of the data collected as per the framework set up in the first prompt.</p><p>Phase 2 experimentation. Subsequently, we developed and evaluated the X-HCOME methodology, a novel approach in OE that seamlessly integrates the expertise of human experts (domain and ontology engineers) with the computational power of LLMs in domain knowledge acquisition and ontology engineering. At each stage of this iterative process, human domain experts critically examine and provide feedback on the ontologies generated by the LLMs. This collaborative working and human-machine teaming is central to the X-HCOME methodology, as it allows for the integration of expert knowledge and insights with the advanced data processing capabilities of LLMs. The experts' contributions are pivotal in identifying variations and complexities that might be overlooked by automated systems, ensuring that the resulting ontology is not only technically sound but also contextually rich and aligned with real-world applications.</p><p>Following is a presentation of the two phases' findings. 
Based on the data provided in Table <ref type="table" target="#tab_2">2</ref>, the ChatGPT3.5 OS method identified 5 classes but had relatively low accuracy (Precision 40%, Recall 5%, F1 score 9%). ChatGPT3.5 CoT achieved higher precision (67%) with limited recall (5%), identifying only 3 classes. ChatGPT4 OS improved, identifying 9 classes (Precision 56%, Recall 12%, F1 score 20%), while ChatGPT4 CoT showed further enhancement with 6 classes (Precision 67%, Recall 10%, F1 score 17%). Conversely, GEMINI OS had lower precision (8%) and recall (2%), identifying 13 classes, whereas GEMINI CoT identified 8 classes with better precision (63%) and recall (12%), mirroring ChatGPT4 OS's performance. To summarize, the CoT method generally returned higher precision than the OS method, indicating more accurate but fewer classes. Conversely, OS tended to identify more classes but with lower precision, suggesting a broader but less accurate approach to class identification. While CoT focused on the quality of classifications, OS emphasized quantity, leading to differences in their overall effectiveness in ontology creation.</p><p>For the X-HCOME method, ChatGPT3.5 X-HCOME generated 25 classes with a Precision of 40%, a Recall of 24%, and an F1 score of 30%, balancing the number of classes identified and accuracy. ChatGPT4 X-HCOME generated 33 classes but with lower precision, reflected in a Precision of 30%, a Recall of 24%, and an F1 score of 27%. Remarkably, the GEMINI X-HCOME method produced the highest number of classes (50) with a Precision of 38%, a Recall of 46%, and an F1 score of 42%, showcasing the best recall rate among the methods.</p><p>The Llama2 results exhibited syntactic errors. However, it is noted that its CoT and OS methods showed high Precision but were limited in overall performance due to the restricted number of classes identified.</p><p>Overall, the performance of the X-HCOME methodology was superior across all LLMs. 
This conclusion is drawn from its consistently higher number of identified classes and the overall better F1 score when compared to the other methods (OS and CoT) for each LLM. The GEMINI X-HCOME method appeared to be the most effective overall in the context of ontology creation. It produced the highest number of classes (50) and achieved the best recall rate (46%) among all the methods tested. Additionally, its F1 score of 42% was the highest, suggesting a relatively better balance between precision and recall compared to the other methodologies. The F1 score for the object properties across all LLMs varied from 6% to 12%.<ref type="foot" target="#foot_6">8</ref>  </p></div>
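The reported F1 scores can be cross-checked against the stated Precision and Recall values via the harmonic-mean formula F1 = 2PR/(P+R); for instance, ChatGPT3.5 X-HCOME's Precision of 40% and Recall of 24% give 2·0.40·0.24/0.64 = 0.30, matching the reported 30%. A quick arithmetic check (values taken from the text, which rounds to whole percentages):

```python
def f1(p: float, r: float) -> float:
    """F1 score as the harmonic mean of precision (p) and recall (r)."""
    return 2 * p * r / (p + r)

# Cross-checks against the percentages reported above (Table 2):
assert round(f1(0.40, 0.24), 2) == 0.30   # ChatGPT3.5 X-HCOME
assert round(f1(0.30, 0.24), 2) == 0.27   # ChatGPT4 X-HCOME
assert round(f1(0.38, 0.46), 2) == 0.42   # GEMINI X-HCOME
```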
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Further Experimentation</head><p>To better evaluate the generated ontologies, we further analyzed the results obtained for false positives, serving as domain experts and checking whether the LLMs had discovered relevant domain knowledge that the gold standard ontology did not include (incomplete engineering due to human bias or other reasons). This analysis aimed to understand whether the generated classes, despite not matching entities within the gold standard ontology, could be reclassified as true positives, potentially improving the ontology. The integration of expert opinion in this case was crucial for expanding and enhancing the domain knowledge represented in the gold standard ontology. This method reflects an evolving way of thinking about ontology construction: as a back-and-forth conversation between human and machine intelligence. By embracing this perspective, this experiment holds the promise of significantly advancing the field. The ChatGPT3.5 CoT and OS methods had comparable results, with the CoT method showing slightly higher precision but equal recall and F1 score to OS. For ChatGPT4, both CoT and OS showed similar trends, with CoT slightly outperforming OS in precision and recall (Table <ref type="table" target="#tab_3">3</ref>).</p><p>Significantly, the X-HCOME method for both ChatGPT3.5 and ChatGPT4 displayed a marked improvement in precision and recall, notably reducing false positives after expert review. The GEMINI X-HCOME method stood out with exceptional precision and recall, indicating no false positives and a high rate of true positives. However, GEMINI's CoT and OS methods lagged considerably behind in these metrics. Llama2's CoT and OS methods achieved high precision but lower recall. Notably, Llama2 failed to create a consistent ontology without errors, which is a critical aspect of OE. 
In summary, the X-HCOME method demonstrated superior performance across all LLMs, including ChatGPT3.5, ChatGPT4, and GEMINI, particularly after human expert intervention. It proved more effective at accurately classifying classes with minimal false positives, highlighting its robustness and efficiency in ontology creation tasks. Post-revision, X-HCOME emerges as a highly effective method for ontology generation, balancing class creation with accuracy. For instance, GEMINI X-HCOME generated classes such as "Surgical Intervention," "Rigidity," and "Cognitive Impairment" that were absent from the gold standard ontology. This underscores its ability to uncover comprehensive knowledge in PD monitoring/alerting that experts alone might overlook. For patients who have undergone surgical interventions such as deep brain stimulation, medication regimens may change significantly, and the alert system needs to adapt accordingly; to avoid false alerts about missed doses, the system should account for post-surgical patients' reduced or altered medication. In patients experiencing significant rigidity, a missed dose of medication can lead to rapid symptom exacerbation, so the alert system can be calibrated to be more sensitive and prompt in these cases, ensuring quick notification of a missed dose to prevent worsening rigidity; patients with more severe rigidity might receive earlier or more frequent reminders to maintain optimal symptom control. Lastly, cognitive impairment can make it challenging for patients to remember their medication schedules; in such cases, the alert system can include more robust, frequent, and clear reminders, possibly using different modalities (such as visual or auditory cues) to ensure the patient is aware of the missed dose. 
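The alerting adaptations described above can be pictured as a small rule-based policy. The sketch below is purely illustrative: the profile fields, thresholds, and policy structure are our assumptions, not classes or properties of the Wear4PDmove ontology:

```python
from dataclasses import dataclass

@dataclass
class PatientProfile:
    post_surgical: bool = False        # e.g., after deep brain stimulation
    rigidity_severity: int = 0         # 0 (none) .. 3 (severe); scale is assumed
    cognitive_impairment: bool = False

def reminder_policy(p: PatientProfile) -> dict:
    """Derive a missed-dose alerting policy from a (hypothetical) patient profile."""
    policy = {"interval_min": 60, "modalities": ["visual"],
              "suppress_old_schedule": False}
    if p.post_surgical:
        # Medication regimen may have changed post-surgery: avoid false
        # missed-dose alerts based on the pre-surgical schedule.
        policy["suppress_old_schedule"] = True
    if p.rigidity_severity >= 2:
        # Severe rigidity: earlier and more frequent reminders.
        policy["interval_min"] = 20
    if p.cognitive_impairment:
        # Redundant modalities for clearer, harder-to-miss reminders.
        policy["modalities"] = policy["modalities"] + ["auditory"]
    return policy
```

A production system would of course drive such rules from the ontology itself (e.g., via SWRL rules or SPARQL queries) rather than hard-coded thresholds.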
Classes like these enhance the ontology's utility in developing sophisticated PD monitoring and alerting systems, ensuring a more rounded approach to patient care and intervention.</p><p>Finally, the F-1 score for the object properties across all LLMs varied from 6% to 84%.<ref type="foot" target="#foot_7">9</ref> </p><p>Lastly, an additional experiment was carried out to assess the efficacy of the proposed approach after the X-HCOME methodology was applied. This involved using a modified version of the gold standard ontology, thereby altering the ground truth of the experiments in a controlled manner. We removed the imported ontologies from the gold standard ontology in order to create a simplified/light version of it. Specifically, we removed the SOSA<ref type="foot" target="#foot_8">10</ref> , the DAHCC <ref type="foot" target="#foot_9">11</ref> and the PMDO<ref type="foot" target="#foot_10">12</ref> ontologies. This "light" ontology excluded certain complexities of the original (Wear4PDmove), enabling a focused comparison with a ground truth constructed solely by experts. The intention was to discern the alignment of the LLM-extracted ontologies with a more streamlined, expert-based conceptualization of the domain. Comparing the above methodologies against a "light" expert-based ground truth also facilitates a more direct evaluation of the LLMs' ability to capture the essential elements of PD monitoring and alerting without extraneous detail, highlighting their effectiveness in essential knowledge capture and representation. 
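As a rough sketch of how such a light version could be derived mechanically, the following filters the owl:imports statements for the removed ontologies out of a Turtle serialization. A line-based filter is a simplification (a real pipeline would use an RDF library such as rdflib), and the function name is hypothetical:

```python
# IRIs of the imports removed from Wear4PDmove to form the "light" version
# (per the footnoted ontology locations).
REMOVED_IMPORTS = (
    "http://www.w3.org/ns/sosa/",                                  # SOSA
    "https://dahcc.idlab.ugent.be/Ontology/SensorsAndWearables/",  # DAHCC
    "http://www.case.edu/PMDO",                                    # PMDO
)

def strip_imports(ttl: str) -> str:
    """Drop owl:imports lines referencing the removed ontologies (sketch only)."""
    kept = []
    for line in ttl.splitlines():
        if "owl:imports" in line and any(iri in line for iri in REMOVED_IMPORTS):
            continue  # skip the import statement for a removed ontology
        kept.append(line)
    return "\n".join(kept)
```

Note that naive line filtering can leave dangling punctuation in Turtle; a graph-level operation (removing `owl:imports` triples and re-serializing) is the robust approach.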
To assess the accuracy and consistency of the constructed ontologies against this version of the gold standard ontology, we employed the metrics described previously.</p><p>As seen in Table <ref type="table" target="#tab_5">4</ref>, while the ChatGPT3.5 and ChatGPT4 CoT and OS approaches showed varying levels of success, their X-HCOME counterparts achieved better F-1 scores, indicating a better balance of precision and recall. Notably, GEMINI X-HCOME achieved the highest F-1 score of 36%, significantly outperforming the other methods. This suggests that the X-HCOME method is particularly effective at balancing accuracy and comprehensiveness in ontology creation tasks, and indicates its enhanced ability to identify a broader range of relevant classes. The F-1 score for the object properties across all LLMs varied from 6% to 24%.  </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion</head><p>The research study presented in this paper partially confirmed our initial hypothesis that LLMs can autonomously develop an ontology for PD monitoring and alerting of patients when provided with domain-specific input (aim, scope, requirements, competency questions, data). While the LLMs demonstrated the capability to construct an ontology, the comprehensiveness of these ontologies did not fully meet our expectations. The LLMs efficiently acquired knowledge from big data repositories and generated ontologies using various prompt engineering techniques, yet the resulting ontologies were not as comprehensive as anticipated. This suggests that while LLMs are effective in ontology creation, their output still requires further refinement to achieve comprehensive knowledge representation in specific domains such as PD monitoring and alerting of patients.</p><p>On the other hand, our second hypothesis, that combining human expertise with LLM capabilities improves the developed ontology's quality and applicability, was confirmed for PD monitoring and alerting of patients. Our study demonstrated that the X-HCOME methodology, enhanced by the capabilities of LLMs, is a robust approach for developing quality ontologies in the PD domain. This methodology not only improves the structural integrity of ontologies but also enriches them with a more extensive range of knowledge, ensuring their vitality and relevance to contemporary needs, while also showing notable time efficiency. Moreover, the collaboration between human expertise and advanced LLMs in OE holds great potential for future developments. It paves the way for more intelligent, adaptive, and comprehensive knowledge representation systems that can significantly contribute to the advancement of various fields, especially complex areas like healthcare. 
Our findings, particularly the significant improvements in precision and F-1 scores after expert revision, underscore the value of expert intervention in enhancing ontology generation, especially in mitigating false positives. Notably, the X-HCOME method demonstrated excellent post-revision results, showcasing its potential for ontology refinement.</p><p>However, biases such as interpretation bias resulting from the opinions and experiences of specific domain experts, as well as biases inherent in LLMs due to training on unfair or biased algorithms and data, may be present in hybrid methods such as X-HCOME. These biases might affect the validity and correctness of the knowledge derived from LLMs. The experimental results suggest that ontologies generated by LLMs using a well-defined collaborative OE methodology have the potential to be comparable to those created solely by humans. This highlights the importance of considering hybrid approaches in OE, which enable collaboration between humans and machines and can enhance efficiency in knowledge-based tasks for both parties. Moreover, another limitation of the current study is that it may have oversimplified the ontology-building process by using the number of generated classes as a crucial metric for evaluating the ontology-building methodologies (OS, CoT, and X-HCOME). This perspective may have overlooked other crucial aspects such as data/object properties and diverse axioms. These entities are essential for crafting a rich and expansive ontology; unfortunately, they were not thoroughly investigated in this research, indicating a gap in fully realizing comprehensive and detailed ontology development. 
While object properties were also evaluated in the current study, details of these findings are available in the associated GitHub repository <ref type="foot" target="#foot_12">14</ref> .</p><p>The promising results of X-HCOME in our study suggest its potential, yet they also underscore the need for significant refinement and enhancement before it can be considered a revolutionary methodology in OE. Given the complexities of ontology construction, X-HCOME requires further development for comprehensive and accurate ontology creation. Additionally, extensive practice with this methodology by ontology engineers and domain experts across various domains is essential to fully harness its capabilities and adapt it effectively to diverse knowledge areas.</p><p>Regarding future work, it would be intriguing to explore the development of a specialized GPT (Generative Pre-trained Transformer) model tailored specifically for ontology construction, utilizing the X-HCOME methodology. This could involve training a GPT on datasets representative of ontology structures and concepts, aligned with the principles and techniques of the X-HCOME approach. Such an effort would not only harness the advanced capabilities of GPTs in understanding and generating complex language patterns but also integrate the methodological strengths of X-HCOME. As OE continues to evolve, the integration of methodologies like X-HCOME will play a pivotal role in shaping the future of knowledge representation, offering new possibilities for innovation and improvement in various domains.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. Flowchart of a multi-phase experimentation assessing the construction and validation of ontologies using different methodologies (created with AI-Whimsical ChatGPT, 2023 2 ).</figDesc><graphic coords="5,104.65,85.05,382.80,452.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Summary of metrics for classes evaluation. This table presents the formulas for Precision, Recall, and the F-1-score, along with their definitions.</figDesc><table><row><cell>Formulas</cell><cell>Definitions</cell></row><row><cell>Precision = True Positives / (True</cell><cell>True Positives: classes correctly classified as</cell></row><row><cell>Positives + False Positives)</cell><cell>positive in alignment with the 'gold standard'</cell></row><row><cell></cell><cell>ontology,</cell></row><row><cell>Recall = True Positives / (True</cell><cell>False Positives: classes incorrectly classified as</cell></row><row><cell>Positives + False Negatives)</cell><cell>positive in alignment with the ''gold standard'</cell></row><row><cell></cell><cell>ontology</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>Chain-of-Thought prompting (CoT): Τhe CoT prompting method, which breaks down the OS prompt into two distinct prompts. The following paragraph provides an example of an</figDesc><table><row><cell>data of Parkinson disease patients through wearable sensors, analyze them in a way that</cell></row><row><cell>enables the understanding (uncover) of their semantics, and use these semantics to</cell></row><row><cell>semantically annotate the data for interoperability and interlinkage with other related</cell></row><row><cell>data."</cell></row><row><cell>Prompt 2: "You will reuse other related ontologies about neurodegenerative diseases. In the</cell></row><row><cell>process, you should focus on modeling different aspects of PD, such as disease severity,</cell></row><row><cell>movement patterns of activities of daily living and gait. Give the output in TTL format."</cell></row><row><cell>CoT prompt:</cell></row><row><cell>Prompt 1: "Act as an Ontology Engineer, I need to generate an ontology about Parkinson</cell></row><row><cell>disease monitoring and alerting patients. The aim of the ontology is to collect movement</cell></row></table><note>"Act as an Ontology Engineer, I need to generate an ontology about Parkinson disease monitoring and alerting patients. The aim of the ontology is to collect movement data of Parkinson disease patients through wearable sensors, analyze them in a way that enables the understanding (uncover) of their semantics, and use these semantics to semantically annotate the data for interoperability and interlinkage with other related data. You will reuse other related ontologies about neurodegenerative diseases. In the process, you should focus on modeling different aspects of PD, such as disease severity, movement patterns of activities of daily living and gait. Give the output in TTL format."</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 .</head><label>2</label><figDesc>Comparative evaluation of methodologies used for ontology creation against the gold standard ontology.</figDesc><table><row><cell>Method</cell><cell>Number</cell><cell>of</cell><cell>Classes</cell><cell>True</cell><cell>Positives</cell><cell>False</cell><cell>Positives</cell><cell>False</cell><cell>Negatives</cell><cell>Precision</cell><cell>Recall</cell><cell>F-1 score</cell></row><row><cell>Gold-ontology</cell><cell></cell><cell>41</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>ChatGPT3.5 CoT</cell><cell></cell><cell>3</cell><cell></cell><cell>2</cell><cell></cell><cell>1</cell><cell></cell><cell>39</cell><cell></cell><cell>67%</cell><cell>5%</cell><cell>9%</cell></row><row><cell>ChatGPT3.5 OS</cell><cell></cell><cell>5</cell><cell></cell><cell>2</cell><cell></cell><cell>3</cell><cell></cell><cell>39</cell><cell></cell><cell>40%</cell><cell>5%</cell><cell>9%</cell></row><row><cell>ChatGPT3.5 X-</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>HCOME</cell><cell></cell><cell>25</cell><cell></cell><cell>10</cell><cell></cell><cell>15</cell><cell></cell><cell>31</cell><cell></cell><cell>40%</cell><cell>24%</cell><cell>30%</cell></row><row><cell>ChatGPT4 CoT</cell><cell></cell><cell>6</cell><cell></cell><cell>4</cell><cell></cell><cell>2</cell><cell></cell><cell>37</cell><cell></cell><cell>67%</cell><cell>10%</cell><cell>17%</cell></row><row><cell>ChatGPT4 OS</cell><cell></cell><cell>9</cell><cell></cell><cell>5</cell><cell></cell><cell>4</cell><cell></cell><cell>36</cell><cell></cell><cell>56%</cell><cell>12%</cell><cell>20%</cell></row><row><cell>ChatGPT4 
X-</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>HCOME</cell><cell></cell><cell>33</cell><cell></cell><cell>10</cell><cell></cell><cell>23</cell><cell></cell><cell>31</cell><cell></cell><cell>30%</cell><cell>24%</cell><cell>27%</cell></row><row><cell>GEMINI CoT</cell><cell></cell><cell>8</cell><cell></cell><cell>5</cell><cell></cell><cell>3</cell><cell></cell><cell>36</cell><cell></cell><cell>63%</cell><cell>12%</cell><cell>20%</cell></row><row><cell>GEMINI OS</cell><cell></cell><cell>13</cell><cell></cell><cell>1</cell><cell></cell><cell>12</cell><cell></cell><cell>40</cell><cell></cell><cell>8%</cell><cell>2%</cell><cell>4%</cell></row><row><cell>GEMINI X-</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>HCOME</cell><cell></cell><cell>50</cell><cell></cell><cell>19</cell><cell></cell><cell>31</cell><cell></cell><cell>22</cell><cell></cell><cell>38%</cell><cell>46%</cell><cell>42%</cell></row><row><cell>Llama2 CoT</cell><cell></cell><cell>3</cell><cell></cell><cell>3</cell><cell></cell><cell>0</cell><cell></cell><cell cols="3">38 100%</cell><cell>7%</cell><cell>14%</cell></row><row><cell>Llama2 OS</cell><cell></cell><cell>2</cell><cell></cell><cell>2</cell><cell></cell><cell>0</cell><cell></cell><cell cols="3">39 100%</cell><cell>5%</cell><cell>9%</cell></row><row><cell>Llama2 X-</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>HCOME</cell><cell></cell><cell>32</cell><cell></cell><cell>4</cell><cell></cell><cell>28</cell><cell></cell><cell>37</cell><cell></cell><cell>13%</cell><cell>10%</cell><cell>11%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 .</head><label>3</label><figDesc>Comparative evaluation of ontology creation methods' post expert review on False Positives.</figDesc><table><row><cell>Method</cell><cell></cell><cell>Number of</cell><cell>Classes</cell><cell>True</cell><cell>Positives</cell><cell>False</cell><cell>Positives</cell><cell>False</cell><cell>Negatives</cell><cell>Precision</cell><cell>Recall</cell><cell>F-1 score</cell></row><row><cell>Gold-ontology</cell><cell></cell><cell cols="2">41</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>ChatGPT3.5 CoT</cell><cell></cell><cell cols="2">3</cell><cell>2</cell><cell></cell><cell>1</cell><cell></cell><cell>39</cell><cell></cell><cell>67%</cell><cell>5%</cell><cell>9%</cell></row><row><cell>ChatGPT3.5 OS</cell><cell></cell><cell cols="2">5</cell><cell>2</cell><cell></cell><cell>3</cell><cell></cell><cell>39</cell><cell></cell><cell>40%</cell><cell>5%</cell><cell>9%</cell></row><row><cell>ChatGPT3.5</cell><cell>X-</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>HCOME</cell><cell></cell><cell cols="2">25</cell><cell>23</cell><cell></cell><cell>2</cell><cell></cell><cell>18</cell><cell></cell><cell>92%</cell><cell>56%</cell><cell>70%</cell></row><row><cell>ChatGPT4 CoT</cell><cell></cell><cell cols="2">6</cell><cell>4</cell><cell></cell><cell>2</cell><cell></cell><cell>37</cell><cell></cell><cell>67%</cell><cell>10%</cell><cell>17%</cell></row><row><cell>ChatGPT4 OS</cell><cell></cell><cell 
cols="2">9</cell><cell>5</cell><cell></cell><cell>4</cell><cell></cell><cell>36</cell><cell></cell><cell>56%</cell><cell>12%</cell><cell>20%</cell></row><row><cell>ChatGPT4</cell><cell>X-</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>HCOME</cell><cell></cell><cell cols="2">33</cell><cell>29</cell><cell></cell><cell>4</cell><cell></cell><cell>12</cell><cell></cell><cell>88%</cell><cell>71%</cell><cell>78%</cell></row><row><cell>GEMINI CoT</cell><cell></cell><cell cols="2">8</cell><cell>5</cell><cell></cell><cell>3</cell><cell></cell><cell>36</cell><cell></cell><cell>63%</cell><cell>12%</cell><cell>20%</cell></row><row><cell>GEMINI OS</cell><cell></cell><cell cols="2">13</cell><cell>1</cell><cell></cell><cell>12</cell><cell></cell><cell>40</cell><cell></cell><cell>8%</cell><cell>2%</cell><cell>4%</cell></row><row><cell cols="2">GEMINI X-HCOME</cell><cell cols="2">50</cell><cell>50</cell><cell></cell><cell>0</cell><cell></cell><cell>-9</cell><cell></cell><cell>100%</cell><cell>122%</cell><cell>110%</cell></row><row><cell>Llama2 CoT</cell><cell></cell><cell cols="2">3</cell><cell>3</cell><cell></cell><cell>0</cell><cell></cell><cell>38</cell><cell></cell><cell>100%</cell><cell>7%</cell><cell>14%</cell></row><row><cell>Llama2 OS</cell><cell></cell><cell cols="2">2</cell><cell>2</cell><cell></cell><cell>0</cell><cell></cell><cell>39</cell><cell></cell><cell>100%</cell><cell>5%</cell><cell>9%</cell></row><row><cell cols="2">Llama2 X-HCOME</cell><cell cols="2">32</cell><cell>26</cell><cell></cell><cell>6</cell><cell></cell><cell>15</cell><cell></cell><cell>81%</cell><cell>63%</cell><cell>71%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 4 .</head><label>4</label><figDesc>Comparative evaluation of methods used for ontology generation against the simplified-/light version of the gold standard ontology.</figDesc><table><row><cell>Method</cell><cell>Number</cell><cell>of</cell><cell>Classes</cell><cell>True</cell><cell>Positives</cell><cell>False</cell><cell>Positives</cell><cell>False</cell><cell>Negatives</cell><cell>Precision</cell><cell>Recall</cell><cell>F-1 score</cell></row><row><cell>Simplified-Lite</cell><cell></cell><cell>27</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Gold standard</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>ontology</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>ChatGPT3.5 CoT</cell><cell></cell><cell>3</cell><cell></cell><cell>2</cell><cell></cell><cell>1</cell><cell></cell><cell>25</cell><cell></cell><cell>67%</cell><cell>7%</cell><cell>13%</cell></row><row><cell>ChatGPT3.5 OS</cell><cell></cell><cell>5</cell><cell></cell><cell>3</cell><cell></cell><cell>2</cell><cell></cell><cell>24</cell><cell></cell><cell>60%</cell><cell>11%</cell><cell>19%</cell></row><row><cell>ChatGPT3.5 X-</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>HCOME</cell><cell></cell><cell>25</cell><cell></cell><cell>5</cell><cell></cell><cell>20</cell><cell></cell><cell>22</cell><cell></cell><cell>20%</cell><cell>19%</cell><cell>19%</cell></row><row><cell>ChatGPT4 
CoT</cell><cell></cell><cell>9</cell><cell></cell><cell>3</cell><cell></cell><cell>6</cell><cell></cell><cell>24</cell><cell></cell><cell>33%</cell><cell>11%</cell><cell>17%</cell></row><row><cell>ChatGPT4 OS</cell><cell></cell><cell>9</cell><cell></cell><cell>2</cell><cell></cell><cell>7</cell><cell></cell><cell>25</cell><cell></cell><cell>22%</cell><cell>7%</cell><cell>11%</cell></row><row><cell>ChatGPT4 X-</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>HCOME</cell><cell></cell><cell>33</cell><cell></cell><cell>6</cell><cell></cell><cell>27</cell><cell></cell><cell>21</cell><cell></cell><cell>18%</cell><cell>22%</cell><cell>20%</cell></row><row><cell>GEMINI CoT</cell><cell></cell><cell>9</cell><cell></cell><cell>2</cell><cell></cell><cell>7</cell><cell></cell><cell>25</cell><cell></cell><cell>22%</cell><cell>7%</cell><cell>11%</cell></row><row><cell>GEMINI OS</cell><cell></cell><cell>14</cell><cell></cell><cell>1</cell><cell></cell><cell>13</cell><cell></cell><cell>26</cell><cell></cell><cell>7%</cell><cell>4%</cell><cell>5%</cell></row><row><cell>GEMINI X-</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>HCOME</cell><cell></cell><cell>50</cell><cell></cell><cell>14</cell><cell></cell><cell>36</cell><cell></cell><cell>13</cell><cell></cell><cell>28%</cell><cell>52%</cell><cell>36%</cell></row><row><cell>Llama2 CoT</cell><cell></cell><cell>3</cell><cell></cell><cell>0</cell><cell></cell><cell>3</cell><cell></cell><cell>27</cell><cell></cell><cell>0%</cell><cell>0%</cell><cell>0%</cell></row><row><cell>Llama2 OS</cell><cell></cell><cell>2</cell><cell></cell><cell>1</cell><cell></cell><cell>1</cell><cell></cell><cell>26</cell><cell></cell><cell>50%</cell><cell>4%</cell><cell>7%</cell></row><row><cell>Llama2 
X-</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>HCOME</cell><cell></cell><cell>34</cell><cell></cell><cell>3</cell><cell></cell><cell>31</cell><cell></cell><cell>24</cell><cell></cell><cell>9%</cell><cell>11%</cell><cell>10%</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_0"> OpenAI. 2023. "Whimsical Diagrams." ChatGPT Functionality. OpenAI. https://openai.com/chatgpt.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_1">https://oops.linkeddata.es</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_2">https://protege.stanford.edu</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_3">https://github.com/GiorgosBouh/Ontologies_by_LLMs</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_4">https://github.com/GiorgosBouh/Ontologies_by_LLMs</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_5">https://oops.linkeddata.es/catalogue.jsp</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_6">https://github.com/GiorgosBouh/Ontologies_by_LLMs</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_7">https://github.com/GiorgosBouh/Ontologies_by_LLMs</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_8">http://www.w3.org/ns/sosa/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="11" xml:id="foot_9">https://dahcc.idlab.ugent.be/Ontology/SensorsAndWearables/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="12" xml:id="foot_10">http://www.case.edu/PMDO</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="13" xml:id="foot_11">https://github.com/GiorgosBouh/Ontologies_by_LLMs</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="14" xml:id="foot_12">https://github.com/GiorgosBouh/Ontologies_by_LLMs</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Language Models are Few-Shot Learners</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">B</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ryder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Subbiah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kaplan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dhariwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Neelakantan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shyam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sastry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Askell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Herbert-Voss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Krueger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Henighan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Child</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Ziegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Winter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hesse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sigler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Litwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Berner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mccandlish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amodei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Adv Neural Inf Process Syst</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="1877" to="1901" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">PaLM: Scaling Language Modeling with Pathways</title>
		<author>
			<persName><forename type="first">A</forename><surname>Chowdhery</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Narang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bosma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Mishra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Roberts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Barham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">W</forename><surname>Chung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Sutton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gehrmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Schuh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tsvyashchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Maynez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Barnes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shazeer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Prabhakaran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Reif</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hutchinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pope</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bradbury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Austin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Isard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gur-Ari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Yin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Duke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Levskaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ghemawat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Michalewski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Misra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Robinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Fedus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ippolito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Luan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zoph</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Spiridonov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sepassi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Dohan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agrawal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Omernick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">S</forename><surname>Pillai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pellat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lewkowycz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Moreira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Child</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Polozov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Saeta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Diaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Firat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Catasta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Meier-Hellstern</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Eck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dean</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Petrov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Fiedel</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2204.02311</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2204.02311" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Ontologies: Principles, methods and applications</title>
		<author>
			<persName><forename type="first">M</forename><surname>Uschold</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gruninger</surname></persName>
		</author>
		<idno type="DOI">10.1017/s0269888900007797</idno>
		<ptr target="https://doi.org/10.1017/s0269888900007797" />
	</analytic>
	<monogr>
		<title level="j">Knowledge Engineering Review</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="93" to="136" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Sheth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Roy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gaur</surname></persName>
		</author>
		<title level="m">Neurosymbolic AI -- Why, What, and How</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Peripheral neuropathy in Parkinson&apos;s disease: prevalence and functional impact on gait and balance</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">F</forename><surname>Corrà</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Vila-Chã</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sardoeira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Sousa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Reis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sambayeta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Damásio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Calejo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schicketmueller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Laranjinha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Salgado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Taipa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Magalhães</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Correia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Maetzler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">F</forename><surname>Maia</surname></persName>
		</author>
		<idno type="DOI">10.1093/BRAIN/AWAC026</idno>
		<ptr target="https://doi.org/10.1093/BRAIN/AWAC026" />
	</analytic>
	<monogr>
		<title level="j">Brain</title>
		<imprint>
			<biblScope unit="volume">146</biblScope>
			<biblScope unit="page" from="225" to="236" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The safety of dopamine agonists in the treatment of Parkinson&apos;s disease</title>
		<author>
			<persName><forename type="first">U</forename><surname>Bonuccelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ceravolo</surname></persName>
		</author>
		<idno type="DOI">10.1517/14740338.7.2.111</idno>
		<ptr target="https://doi.org/10.1517/14740338.7.2.111" />
	</analytic>
	<monogr>
		<title level="j">Expert Opin Drug Saf</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="111" to="127" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Wear4pdmove: An Ontology for Knowledge-Based Personalized Health Monitoring of PD Patients</title>
		<author>
			<persName><forename type="first">N</forename><surname>Zafeiropoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bitilis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kotis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CEUR Workshop Proc</title>
		<imprint>
			<biblScope unit="volume">3632</biblScope>
			<biblScope unit="page">4</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Uncovering the Semantics of PD Patients&apos; Movement Data Collected via off-the-shelf Wearables</title>
		<author>
			<persName><forename type="first">P</forename><surname>Bitilis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Zafeiropoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Koletis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kotis</surname></persName>
		</author>
		<idno type="DOI">10.1109/IISA59645.2023.10345958</idno>
		<ptr target="https://doi.org/10.1109/IISA59645.2023.10345958" />
	</analytic>
	<monogr>
		<title level="m">14th International Conference on Information, Intelligence, Systems and Applications</title>
				<meeting><address><addrLine>IISA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Graph Neural Networks for Parkinson&apos;s Disease Monitoring and Alerting</title>
		<author>
			<persName><forename type="first">N</forename><surname>Zafeiropoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bitilis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Tsekouras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kotis</surname></persName>
		</author>
		<idno type="DOI">10.3390/s23218936</idno>
		<ptr target="https://doi.org/10.3390/s23218936" />
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page">8936</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">PDON: Parkinson&apos;s disease ontology for representation and modeling of the Parkinson&apos;s disease knowledge domain</title>
		<author>
			<persName><forename type="first">E</forename><surname>Younesi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Malhotra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gündel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Scordis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">T</forename><surname>Kodamullil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Page</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Springstubbe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Wüllner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Scheller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hofmann-Apitius</surname></persName>
		</author>
		<idno type="DOI">10.1186/S12976-015-0017-Y</idno>
		<ptr target="https://doi.org/10.1186/S12976-015-0017-Y" />
	</analytic>
	<monogr>
		<title level="j">Theor Biol Med Model</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">How to Train Data-Efficient LLMs</title>
		<author>
			<persName><forename type="first">N</forename><surname>Sachdeva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Coleman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W.-C</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">H</forename><surname>Chi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Caverlee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Mcauley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">Z</forename><surname>Cheng</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Human-centered ontology engineering: The HCOME methodology</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kotis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Vouros</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10115-005-0227-4</idno>
		<ptr target="https://doi.org/10.1007/s10115-005-0227-4" />
	</analytic>
	<monogr>
		<title level="j">Knowl Inf Syst</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="109" to="131" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Automatic Product Ontology Extraction from Textual Reviews</title>
		<author>
			<persName><forename type="first">J</forename><surname>Oksanen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Cocarascu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Toni</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>Association for Computing Machinery</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">BERTMap: A BERT-Based Ontology Alignment System</title>
		<author>
			<persName><forename type="first">Y</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Antonyrajah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Horrocks</surname></persName>
		</author>
		<idno type="DOI">10.1609/aaai.v36i5.20510</idno>
		<ptr target="https://doi.org/10.1609/aaai.v36i5.20510" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI</title>
				<meeting>the 36th AAAI Conference on Artificial Intelligence, AAAI</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="5684" to="5691" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Knowledge Base Construction from Pre-trained Language Models by Prompt learning</title>
		<author>
			<persName><forename type="first">X</forename><surname>Ning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Celebi</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-3274/paper4.pdf" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Enhancing Entity Alignment Between Wikidata and ArtGraph Using LLMs</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Lippolis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Klironomos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">F</forename><surname>Milon-Flores</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jouglar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Norouzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hogan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proc</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3540</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Towards Ontology Construction with Language Models</title>
		<author>
			<persName><forename type="first">M</forename><surname>Funk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hosemann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Jung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lutz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proc</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3577</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Enhancing Knowledge Base Construction from Pre-trained Language Models using Prompt Ensembles</title>
		<author>
			<persName><forename type="first">F</forename><surname>Biester</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Del Gaudio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Abdelaal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proc</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3577</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Validating ChatGPT Facts through RDF Knowledge Graphs and Sentence Similarity</title>
		<author>
			<persName><forename type="first">M</forename><surname>Mountantonakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tzitzikas</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Unifying Large Language Models and Knowledge Graphs: A Roadmap</title>
		<author>
			<persName><forename type="first">S</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Luo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wu</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="1" to="29" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Gene Set Summarization using Large Language Models</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Joachimiak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Caufield</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">L</forename><surname>Harris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Mungall</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">Structured prompt interrogation and recursive extraction of semantics (SPIRES): A method for populating knowledge bases using zero-shot learning</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Caufield</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hegde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Emonet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">L</forename><surname>Harris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Joachimiak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Matentzoglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A T</forename><surname>Moxon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">T</forename><surname>Reese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Haendel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">N</forename><surname>Robinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Mungall</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="19" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Ontology engineering with Large Language Models</title>
		<author>
			<persName><forename type="first">P</forename><surname>Mateiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Groza</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Pre-trained Embeddings for Entity Resolution: An Experimental Analysis</title>
		<author>
			<persName><forename type="first">A</forename><surname>Zeakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Papadakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Skoutas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Koubarakis</surname></persName>
		</author>
		<idno type="DOI">10.14778/3598581.3598594</idno>
		<ptr target="https://doi.org/10.14778/3598581.3598594" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the VLDB Endowment</title>
				<meeting>the VLDB Endowment</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="page" from="2225" to="2238" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">LogMap: Logic-based and scalable ontology matching</title>
		<author>
			<persName><forename type="first">E</forename><surname>Jiménez-Ruiz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Cuenca Grau</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-642-25073-6_18</idno>
		<ptr target="https://doi.org/10.1007/978-3-642-25073-6_18" />
	</analytic>
	<monogr>
		<title level="j">LNCS</title>
		<imprint>
			<biblScope unit="volume">7031</biblScope>
			<biblScope unit="page" from="273" to="288" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
