<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Knowledge Capturing Tools for Domain Experts</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Lars</forename><surname>Bröcker</surname></persName>
							<email>lars.broecker@iais.fhg.de</email>
							<affiliation key="aff0">
								<orgName type="institution">Fraunhofer IAIS Schloss Birlinghoven</orgName>
								<address>
									<postCode>53754</postCode>
									<settlement>Sankt Augustin</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marc</forename><surname>Rössler</surname></persName>
							<email>marc.roessler@uni-due.de</email>
							<affiliation key="aff1">
								<orgName type="department">Computational Linguistics</orgName>
								<orgName type="institution">University of Duisburg-Essen</orgName>
								<address>
									<postCode>47048</postCode>
									<settlement>Duisburg</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andreas</forename><surname>Wagner</surname></persName>
							<email>andreas.wagner@uni-due.de</email>
							<affiliation key="aff2">
								<orgName type="department">Computational Linguistics</orgName>
								<orgName type="institution">University of Duisburg-Essen</orgName>
								<address>
									<postCode>47048</postCode>
									<settlement>Duisburg</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Knowledge Capturing Tools for Domain Experts</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">9D5ABCB10B189B657F0AE87018A37000</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T20:49+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing-linguistic processing</term>
					<term>I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods-semantic networks</term>
					<term>I.2.6 [Artificial Intelligence]: Learning-knowledge acquisition, concept learning</term>
					<term>I.2.7 [Artificial Intelligence]: Natural Language Processing-text analysis</term>
					<term>I.5.3 [Pattern Recognition]: Clustering</term>
					<term>Named Entity Recognition, Relation Discovery, Semantic Networks, Wiki Systems</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The success of the Semantic Web depends on the availability of content marked up using its description languages. Although the idea has been around for nearly a decade, the amount of Semantic Web content available is still fairly small. This is despite the existence of many digital archives containing lots of high quality collections which would, appropriately marked up, greatly enhance the reach of the Semantic Web. The archives themselves would benefit as well, by improved opportunities for semantic search, navigation and interconnection with other archives.</p><p>The main challenge lies in the fact that ontology creation at the moment is a very detailed and complicated process. It mostly requires the service of an ontology engineer, who designs the ontology in accordance with domain experts. The software tools available, be it from the text engineering or the ontology creation disciplines, reflect this: they are built for engineers, not for domain experts. In order to really tap the potential of the digital collections, tools are needed that support the domain experts in marking up the content they understand better than anyone else. This paper presents an integrated approach to knowledge capturing and subsequent ontology creation, called WIKIN-GER, that aims at empowering domain experts to prepare their content for inclusion into the Semantic Web. This is done by largely automating the process through the use of named entity recognition and relation discovery.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>The Semantic Web can only flourish if enough content providers adopt it for the presentation of their content. This lack of adoption is the Achilles heel of the vision of the data web, where humans and software agents can work side by side. The main reason for this lies right at the base of the Semantic Web: the creation of ontologies. The process needed to arrive at a working representation of a domain is too difficult for domain experts to carry out on their own, a debilitating factor on the way to widespread adoption: the WWW flourished largely because of the ease of marking up knowledge in HTML. The same cannot be said of OWL or even RDF.</p><p>There are tools that support the process of creating an ontology, from both the text engineering and the ontology engineering disciplines. But these tools are made for a select audience: ontology engineers. This is not a problem in itself, but it ties the growth of the Semantic Web to the availability (and affordability) of said engineers. If the Semantic Web is to come about on a grand scale, tools are needed that allow domain experts themselves to design and create ontologies tailored to their needs and domain corpora.</p><p>But what is needed to create an ontology from a text corpus? First of all, an ontology can be seen as a graph structure, a semantic network. The nodes of this graph are the entities, i.e. the actors, topics and objects of the ontology, while the edges of the graph are the relations that exist between the entities. The task of automatically creating an ontology can thus be broken down into two steps: first, named entity recognition (NER), and second, the detection of relations existing between those entities.</p><p>The detection and classification of proper names into predefined categories is called Named Entity Recognition (NER). 
The recognition of the categories PERSON, LOCATION and ORGANIZATION within the newspaper domain is especially well-studied as part of the MUC campaigns (Message Understanding Conferences) and can be conducted automatically with a performance beyond 0.9 F-measure for English texts <ref type="bibr" target="#b3">[4]</ref>. The detection of relations between the entities of a corpus is a younger discipline, usually concerned with binary relations. Experiments on English newspapers show performance around 0.75 F-measure <ref type="bibr" target="#b7">[8]</ref>. These advances facilitate a largely automated processing of text corpora into domain ontologies. This paper introduces an integrated web service-based framework called WIKINGER that does just that. This paper is structured as follows: Section 2 gives an overview of the WIKINGER framework, sections 3 and 4 describe our work on named entity extraction, while section 5 describes the relation discovery part of the process. After that, section 6 highlights relevant related work, and we close with remarks on future work and the conclusion in sections 7 and 8.</p></div>
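The F-measure figures quoted above combine precision and recall into a single score. A minimal sketch of the computation (the counts below are illustrative, not taken from any evaluation in this paper):

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-measure from true positive, false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative: 90 correctly recognized entities, 8 spurious, 12 missed.
print(round(f_measure(90, 8, 12), 3))  # 0.9
```

With beta = 1 this is the balanced F1 used throughout the NER literature; beta can be shifted to weight precision or recall more heavily.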
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">WIKINGER - THE BIG PICTURE</head><p>WIKINGER <ref type="bibr" target="#b2">[3]</ref>, short for Wiki Next Generation Enhanced Repositories, aims at developing collaborative knowledge platforms for scientific communities. The collaboration is facilitated by selecting a wiki as the presentation layer, and the knowledge contained can be organized via semantic relations. The resulting semantic wiki can be extended, reorganized and commented on by all (registered) members of the particular scientific community. To set up and maintain the semantic network, NER techniques are applied to the available domain-relevant documents (see section 3). The resulting annotations are the potential nodes of the semantic network, which is constructed in a semi-automatic manner. The relations are proposed based on clusters of co-occurring entities (see section 5).</p><p>Figure <ref type="figure" target="#fig_0">1</ref> shows a view of the components that are part of the WIKINGER framework. It is built following a service-oriented architecture; its modules are loosely coupled, which allows need-driven reconfiguration of the system. The system itself uses a linked set of data repositories to perform its duties. The resource layer at the bottom of fig. <ref type="figure" target="#fig_0">1</ref> shows a drastically simplified view of the outside world: it contains arbitrary data sources that can be imported into the first of the repositories, i.e. the document repository. This repository provides the other services of the system with a versioned corpus of documents to work on. The processing services (e.g. for NER, relation discovery and creation of the ontology) use this repository as a source only. They feed their results into the metadata repository. It is linked to the document repository to maintain references to the originals, and it also provides versioned storage of the data. This ensures that the original corpus remains unchanged. 
The final repository contains the semantic model of the corpus. It makes use of both the document repository and the metadata repository. At the moment, the application layer takes the form of a wiki system, but other applications can easily be envisioned.</p><p>The architecture of WIKINGER is motivated by the assumption that many nodes of a domain-specific semantic network occur in domain-relevant texts and that these occurrences are proper names or expressions which can be extracted with NER techniques.</p><p>The pilot domain of WIKINGER is contemporary history with a focus on the history of Catholicism in Germany. For that domain, the traditional NER categories PERSON, LOCATION, ORGANIZATION, and TIME/DATE expressions obviously carry crucial nodes for a domain-specific semantic network. However, the domain experts desired additional categories, such as HISTORICAL-EVENT, BIOGRAPHIC-EVENT or ROLE. A ROLE is a function or a position a person holds (e.g. "bishop", "professor of theology") and is often part of a BIOGRAPHIC-EVENT, which may contain additional annotations such as LOCATION and TIME/DATE, as the following example shows:</p><formula xml:id="formula_0">&lt;BIO-EVENT&gt; &lt;DATE&gt;1936&lt;/DATE&gt; &lt;ROLE&gt;archbishop&lt;/ROLE&gt; of &lt;LOC&gt;Cologne&lt;/LOC&gt; &lt;/BIO-EVENT&gt;</formula><p>The HISTORICAL-EVENT describes events significant to the domain experts, such as the "Wall Street Crash of 1929", also called "Black Thursday". This category may contain embedded categories, too. The two event categories of the pilot domain are beyond the traditional NER task: depending on the perspective, they either involve relation extraction or embedded categories. The corpus to annotate currently consists of approximately 150 monographs within a book series. The books were scanned and the text was extracted via OCR. 
The annotations of the resulting corpus will be used as potential nodes of the semantic network to be created.</p><p>Since the book series has a consistent layout structure, it was possible to preserve some layout information, such as the distinction between footnotes and other text. This distinction is helpful for detecting a text unit specific to the texts of our domain, called a "biogram". A biogram usually is a footnote, provided the first time a person is mentioned in the text, that comprises a short biography. These biographies are concise and tend to follow a predetermined structure. For instance, most of the biograms start with the name of the person, and some biograms present the individual pieces of information separated by a particular delimiter such as a semicolon or comma. Thus, in most cases the person named at the beginning of a biogram is the one that the other annotations in that biogram relate to. While some of the information items also belong to persons related to the person described in the biogram (e.g. "his father was a &lt;ROLE&gt;prime minister&lt;/ROLE&gt;"), this assumption nevertheless holds true for the largest part of the corpus. This is very important for the relation discovery step, since all relations discovered in a specific biogram are linked implicitly to said person, although that person's participation in most of the relations is not readily apparent from their local contexts. Accordingly, these links need to be made explicit. Processing the biograms results in a semantic network in OWL which contains all information that could be harvested automatically from the biograms within the 150 monographs. This knowledge base constitutes a biographical database for the scientific domain, which, according to the historians working within the WIKINGER project, is a long-standing desideratum for the domain of contemporary history of Catholicism in Germany. However, the tasks described are not limited to the pilot application of WIKINGER. 
Indeed, they have many features in common with annotation tasks found in other domains as well. Our research within the WIKINGER project focuses on the application-oriented generalization of these challenges.</p></div>
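The biogram convention described above (the person named at the start of the footnote is the anchor to which the delimiter-separated items implicitly refer) can be sketched as follows. This is an illustrative simplification, not the project's actual code; the footnote text and the person name are made up:

```python
def parse_biogram(footnote: str, delimiter: str = ";"):
    """Split a biogram footnote into the anchor person (the first item)
    and the remaining biographic items. Per the corpus convention,
    every item is implicitly linked to that anchor person."""
    items = [part.strip() for part in footnote.split(delimiter) if part.strip()]
    if not items:
        return None, []
    return items[0], items[1:]

# Hypothetical biogram footnote with semicolon-delimited items:
person, items = parse_biogram(
    "Max Mustermann; 1936 archbishop of Cologne; born in Bonn")
print(person)      # Max Mustermann
print(len(items))  # 2
```

In a real pipeline each returned item would still pass through NER and relation discovery; the parse only makes the implicit link to the anchor person explicit.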
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">NER</head><p>It is highly desirable to generalize the successful NER approaches described in section 1 to a broader variety of semantic markup at phrase level (i.e. apart from "standard" categories such as PERSON, ORGANIZATION, or LOCATION) in order to support other NLP applications. However, this requires annotation components that can be extended to new categories and adapted to new domains and new languages. These tasks may have different characteristics from the classical MUC task: First, they may lack the cue of distinctive capitalization for some semantic classes and some languages, such as German. Second, the categories of interest may neither be obvious nor easily understandable due to a highly specialized domain and language. A well-known example of such a task is the recognition of biomedical entities such as genes, proteins or cell tissue <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b8">9]</ref>. It is almost impossible for a non-expert in the biomedical domain to judge the correctness of an annotation or even to figure out a definition of the classes to recognize. Additionally, capitalization is not a distinctive feature of the entities to detect. Furthermore, biomedical entities are not proper names in the linguistic sense, since a mention of a particular protein refers to all instances of that protein and not to a particular instance.</p><p>The annotation task within WIKINGER has similar characteristics: the documents to be processed are specialized texts, thus the definition of the annotation categories has to be provided by the domain experts. Also, most of the texts are in German, so capitalization is not a reliable cue for detecting proper names. 
Furthermore, discussions with the domain experts have shown that some of the annotation tasks amount to information extraction in a more general sense, in particular involving relation extraction, albeit on a local level. For example, the BIO-EVENT provided in section 2 establishes a relation between the person the respective biogram deals with, a role occupied by that person, a certain time, and a location. Although these annotation tasks significantly expand the annotation of proper names, we still consider them a sophisticated form of NER. In other words, we basically employ approaches which have been successfully applied to NER.</p><p>In principle, two major kinds of NER approaches have been proposed in the literature: rule-based and machine learning (ML) approaches. Rule-based approaches employ a handcrafted set of rules which is fine-tuned to the particular application domain. The adaptation of such a rather complex rule set to new domains and/or languages brings about extensive modification and maintenance efforts and therefore requires comprehensive knowledge about both the new domain and the proper design of the linguistic rule set. This means that domain experts need extensive support from computational linguists in order to port such a system to their domain. In contrast, adapting machine learning approaches to a new application domain requires the creation of domain-specific training data, i.e. manual annotation of domain-specific documents. Since this essentially requires domain (rather than linguistic) expertise, domain professionals need much less support from computational linguists (if any at all). Our experience within the WIKINGER project has shown that such support is necessary primarily for the initial task of defining a suitable set of semantic categories. During this definition stage, the communication between domain experts and linguists in essence consists of exchanging annotated examples. 
We believe that this example-based communication significantly facilitates portability, since concrete examples are much easier to create and understand than the explicit formulation of more or less complex and abstract (sub-)regularities. The same holds true for the annotation of the training data itself, which can be regarded as example-based communication between domain experts and machine learning algorithms.</p><p>Consequently, in order to minimize the amount of "external help" that specialists need to set up the WIKINGER system for their domain, we decided to employ ML approaches for NER. In our current experiments, we are using Maximum Entropy modeling and support vector machines. (As implementations, we employ openNLP<ref type="foot" target="#foot_0">1</ref> and SVMstruct<ref type="foot" target="#foot_1">2</ref> , respectively.) However, we aim at providing a variety of ML algorithms which can either be employed independently or in combination to maximize performance. Regarding portability, it is crucial that the learning approaches employ domain-independent features and resources that can be easily adapted to a new domain or a new NER task. Furthermore, these methods have to be applied in a way that allows the acquisition of embedded annotations. "Standard" ML classifiers assign one class (in our case, a semantic category) to each instance to classify (in our case, a token) <ref type="foot" target="#foot_2">3</ref> . In embedded annotations, (parts of) entities may receive multiple classes simultaneously (e.g. in the example in section 2, "1936" is at the same time a DATE and part of a BIO-EVENT). To achieve this kind of concurrent classification, we run multiple classifiers, each one assigning different classes, and unify the results. For ML approaches which are restricted to binary classification (e.g. SVM), one classifier is required for each category. For ML approaches without this restriction (e.g. 
MaxEnt), classifiers assigning multiple classes can be built and combined in a more flexible way. Our experiments with MaxEnt models have shown that combining classifiers each of which assigns all categories except one, i.e. each of which "ignores" one particular class, yields higher performance than employing binary classifiers. In these experiments, we got F-measures (at token level) of up to 84.6% for persons, 87.1% for organizations, 94.8% for geographic-political entities, and 92.8% for roles.</p></div>
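The unification of several classifiers' outputs into possibly overlapping, embedded annotations can be sketched as follows. The token-level representation and the two classifier outputs are illustrative assumptions, not the project's actual interfaces:

```python
def unify_annotations(token_labelings):
    """Merge per-token label assignments from several classifiers.

    Each classifier returns, per token, either a category or None.
    Unifying the outputs lets a token carry several labels at once,
    e.g. "1936" as both DATE and part of a BIO-EVENT.
    """
    n = len(token_labelings[0])
    merged = [set() for _ in range(n)]
    for labeling in token_labelings:
        for i, label in enumerate(labeling):
            if label is not None:
                merged[i].add(label)
    return merged

# Hypothetical outputs of two classifiers over the tokens
# ["1936", "archbishop", "of", "Cologne"]:
dates  = ["DATE", None, None, None]                             # DATE classifier
events = ["BIO-EVENT", "BIO-EVENT", "BIO-EVENT", "BIO-EVENT"]   # event classifier
print(sorted(unify_annotations([dates, events])[0]))  # ['BIO-EVENT', 'DATE']
```

With binary classifiers (the SVM case) one such labeling is produced per category; with multi-class models (the MaxEnt case) fewer, broader classifiers feed the same unification step.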
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">WALU</head><p>A prerequisite for enabling domain experts to create training data and control the process of training and (semi-)automatic semantic markup is the availability of a powerful and convenient tool. On the one hand, such a tool has to provide the necessary functionalities, i.e. manual annotation of documents, configuration and initiation of the training process, application of automatic annotation components, as well as inspection and correction of the resulting annotations. On the other hand, intuitive interfaces and convenient facilities supporting these functionalities while encapsulating their complexity are crucial to ensure usability for professionals of any domain. In addition, this tool has to be integrated into the overall WIKINGER infrastructure sketched in section 2. Currently, to our knowledge, there is no tool available that meets all these requirements (see section 6). Therefore, we are developing such a tool, which we call WALU (WIKINGER Annotations- und Lern-Umgebung, i.e. WIKINGER annotation and learning environment; see <ref type="bibr" target="#b15">[16]</ref>).</p><p>WALU supports manual annotation with a GUI that is easy to use. It offers comfortable navigation through the annotations, and simple but effective annotation support such as the automatic adjustment of markup boundaries or a dynamic markup dictionary. This dictionary is created during the annotation process and is used to propose markup labels for text passages corresponding to dictionary entries. Using a context-sensitive menu, the annotator confirms or rejects these proposals and/or removes the entry from the dictionary. In our experience, the immediate feedback of the dynamic markup dictionary also helps the domain experts to clarify the task of string-based identification of domain-relevant concepts. 
Additionally, WALU provides an automatic annotator for strings referring to the category DATE, based on regular expressions. This is a simple prototype of a series of automatic mechanisms that will be used to annotate all the available documents. Except for a few annotators based on regular expressions that classify entities with unique patterns (such as email addresses and URLs), most of these annotators are based on machine learning algorithms that will be accessible via WALU.</p><p>Training the ML facilities mentioned in section 3, as well as their annotation of new text, can be initiated via the WALU GUI. The annotation results can be displayed and manually corrected. Automatic annotations are displayed in a distinct way (only the lower half of the annotated tokens is marked) so that they can be spotted immediately by the user.</p><p>WALU is designed both as a part of the WIKINGER infrastructure and as a stand-alone tool. Web-service-based communication facilities allow WALU to load documents from the WIKINGER document repository and load/store corresponding annotations from/to the metadata repository. As a stand-alone tool, WALU currently is able to import text documents (other import formats will be supported later) and to export annotated documents in a straightforward XML standoff format. The transfer between the various data formats is achieved via a special internal format we call the 'WaRP (WALU Rich Paragraph) stream', which is also processed by the automatic annotation components.</p></div>
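A regular-expression DATE annotator emitting standoff annotations, in the spirit of WALU's prototype, can be sketched as follows. The patterns (four-digit years and German day.month.year dates) and the tuple format are illustrative assumptions, not WALU's actual standoff format:

```python
import re

# Illustrative patterns only: "12.3.1936" style dates and bare four-digit years.
DATE_PATTERN = re.compile(r"\b(?:\d{1,2}\.\d{1,2}\.\d{4}|\d{4})\b")

def annotate_dates(text):
    """Return standoff DATE annotations as (start, end, surface) tuples,
    leaving the source text itself untouched."""
    return [(m.start(), m.end(), m.group()) for m in DATE_PATTERN.finditer(text)]

spans = annotate_dates("Er wurde am 12.3.1936 zum Erzbischof ernannt.")
print(spans)  # [(12, 21, '12.3.1936')]
```

Because the annotations are offsets rather than inline markup, they can be stored in a metadata repository while the document repository keeps the original text unchanged, matching the separation described in section 2.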
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">SEMIAUTOMATIC RELATION DISCOVERY</head><p>The algorithms and tools described in the preceding sections provide named entities for a variety of project-dependent concept classes. They will become the nodes of the semantic network that is to be built. The remaining part is the provision of edges connecting these nodes, which is explained in this section. The common approach to this problem is to let domain experts come up with a small number of relations and then to model them in an ontology editor. This requires knowledge of both ontology creation and ontology editors, which tends to be too high a hurdle for domain experts. Instead, we propose to derive the relations from the content of the corpus in question. With the named entities provided by the preceding steps, relation discovery using statistical methods becomes feasible.</p></div>
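One statistical ingredient referred to above is association rule mining over co-occurring entity classes. A minimal hand-rolled illustration, restricted to pairwise rules with support and confidence (a stand-in for a full Apriori implementation; the sentences, classes and thresholds are made up):

```python
from collections import Counter
from itertools import combinations

def mine_pair_rules(item_sets, min_support=2, min_confidence=0.5):
    """Mine rules a -> b between entity classes from per-sentence item sets.

    support(a -> b)    = number of sentences containing both a and b
    confidence(a -> b) = support / number of sentences containing a
    """
    single, pair = Counter(), Counter()
    for items in item_sets:
        for item in items:
            single[item] += 1
        for a, b in combinations(sorted(items), 2):
            pair[(a, b)] += 1
    rules = []
    for (a, b), sup in pair.items():
        for head, tail in ((a, b), (b, a)):
            conf = sup / single[head]
            if sup >= min_support and conf >= min_confidence:
                rules.append((head, tail, sup, conf))
    return rules

# Hypothetical sentences, each reduced to its set of entity classes:
sentences = [
    {"PERSON", "ROLE", "DATE"},
    {"PERSON", "ROLE"},
    {"PERSON", "LOCATION"},
]
for rule in sorted(mine_pair_rules(sentences)):
    print(rule)
```

Raising `min_support` trades coverage for reliability, while raising `min_confidence` keeps only tightly correlated class pairs, mirroring the threshold tuning described in the next subsection.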
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Algorithm</head><p>Figure <ref type="figure" target="#fig_1">2</ref> shows the workflow of our approach. The first step, NER, has been covered already. The next step consists of the application of an association rule mining algorithm on the annotated corpus, which has been segmented at the sentence level. Only those sentences containing at least two entities are kept. Each sentence is represented by the set of entity classes appearing in it. These item sets serve as input for the Apriori algorithm <ref type="bibr" target="#b0">[1]</ref>, which generates a set of association rules of the form a → b. Each rule carries two parameters: support (the number of observations supporting it) and confidence (in our case #(a → b) / #a).</p><p>Thresholds for these parameters can be used to influence the result of the algorithm.</p><p>The association rules can be ranked according to the two parameters. High support promises higher coverage; high confidence hints at a tighter correlation between the entity classes involved. Rules with more than one succedent tend to be more specialized, as evidenced by higher confidence, and thus offer a higher potential information gain; they also tend to be overlooked by domain experts when asked to come up with possible relations. The next step is a clustering phase. It takes an association rule as input. The sentences of the rule are preprocessed, i.e. the named entities are replaced with their respective classes. This is done to obtain generalized patterns of the relations in the sentences. Only the part between the outermost named entities is taken and transformed into word vectors. The weights of the vectors are computed using tf*idf.</p><p>The goal of the clustering phase is to obtain relation clusters, i.e. clusters in which every vector symbolizes the same relation. Since the number of relation clusters is not known beforehand, agglomerative clustering is applied. 
In this algorithm, every vector starts as its own cluster. Clusters are then merged, provided they fulfill a clustering criterion defined on a distance measure. We use the standard cosine distance and allow both single and complete linkage as criteria. Given two clusters A and B and a distance threshold t, this translates to:</p><formula xml:id="formula_2">Single Linkage: min{dist(α, β) : α ∈ A, β ∈ B} &lt; t Complete Linkage: max{dist(α, β) : α ∈ A, β ∈ B} &lt; t</formula><p>Which method is used depends on the corpus in question: terse texts show better results with complete linkage, while ordinary prose performs better with single linkage.</p><p>The result of this step is a set of relation clusters for each association rule. User interaction is needed at this point, in order to review the results and to provide meaningful labels for the relations. The labels are not generated automatically at the moment, but schemes employing part-of-speech analysis (e.g. using the verbs) are feasible.</p><p>The last step of the algorithm is the transformation of the entities and their relations into an ontology language. The transformation process is a straightforward affair for entities, classes and binary relations, since those can be handled by corresponding constructs in RDF. The transformation of n-ary relations is slightly more complex, since it involves blank nodes that act as a hub for the attachment of binary relations to the various members of the relation. The resulting RDF represents the ontology for the domain corpus.</p><p>In the use case of our project, we have to deal with a dynamic corpus, since the articles from the wiki are fed back into the system to be analyzed. This continually updates the semantic network and keeps it in sync with the wiki. But an additional step is required: relation classification. The relation clusters that were committed in the initialization phase of the system are used for this task. 
New sentences are marked up with named entities, transformed into word vectors, classified against the relation clusters, and subsequently transformed into RDF. Since the provenance of each triple in the ontology is known, exchanges can be restricted to those triples that are affected.</p><p>Preliminary evaluation results of the algorithm show F-measures (F1 = 2 · Precision · Recall / (Precision + Recall)) between 70% and 75% for clusters representing binary as well as n-ary relations. The algorithm usually creates more relation clusters than a human would, since humans tend to generalize the relations rather than maintain a multitude of minuscule distinctions in their relation set. We evaluated the algorithm against a part of the corpus relevant to the pilot application in the WIKINGER project. More details can be found in <ref type="bibr" target="#b1">[2]</ref>.</p></div>
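The agglomerative clustering step described above, with cosine distance and a choice of single or complete linkage, can be sketched as follows. The greedy merge strategy and the toy tf*idf vectors are illustrative assumptions:

```python
import math

def cosine_dist(u, v):
    """Cosine distance between two sparse word vectors given as dicts."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return 1.0 - dot / (nu * nv)

def agglomerate(vectors, t, linkage="single"):
    """Agglomerative clustering: every vector starts as its own cluster;
    repeatedly merge the closest pair whose linkage distance
    (min for single, max for complete linkage) is below threshold t."""
    clusters = [[v] for v in vectors]
    agg = min if linkage == "single" else max
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = agg(cosine_dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if d < t and (best is None or d < best[0]):
                    best = (d, i, j)
        if best is None:
            return clusters
        _, i, j = best
        clusters[i] += clusters.pop(j)

# Hypothetical generalized relation patterns as toy word vectors:
vecs = [{"wurde": 1.0, "erzbischof": 1.0},
        {"wurde": 1.0, "bischof": 1.0},
        {"geboren": 1.0, "in": 1.0}]
print(len(agglomerate(vecs, t=0.8, linkage="single")))  # 2
```

Lowering t produces more, smaller clusters; as noted above, complete linkage (merging only when even the farthest members are close) suits terse texts, while single linkage suits ordinary prose.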
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">User interface</head><p>In order to provide the domain experts with an interface that facilitates directing the relation discovery process, the Wikinger Relation Discovery GUI, WiReD for short, has been developed. It allows the experts to view the results of the different steps of the algorithm and to experiment with different settings for them. This encompasses the association rules generated by the Apriori algorithm as well as the composition of the relation clusters generated by the clustering phase.</p><p>Association rules can be selected manually for clustering; clusters can be post-processed (merged with others, deleted, renamed) and finally selected for inclusion into the semantic network. The parameters for each algorithmic step are preset with reasonable defaults, but can be changed directly from within WiReD, thus allowing experiments on the data set. This may sound intimidating at first reading, but in practice there are never more than two parameters per step in the processing chain, four parameters in total.</p><p>When the experts have come to a final result, i.e. when they have agreed upon a set of relations they want to see included in the ontology, the relation information is fed back into the WIKINGER framework. Here it is used for different purposes. First of all, it can be used to transform the information associated with it, the entities and their relations, into the ontology format of choice. If the corpus is static, this concludes the work needed for the ontology. In the case of dynamic corpora, e.g. wiki systems, the relation information approved by the experts is used to automatically classify new patterns that enter the system. These new patterns basically pass through the same steps of the algorithm, only now in a fully automated mode. The experts can change the relation set at any time using the WiReD GUI, which triggers a full recalculation of the ontology to reflect the change.</p></div>
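The automatic classification of new patterns against the expert-approved relation clusters can be sketched as a nearest-member lookup. The cluster labels, the nearest-member criterion and the rejection threshold are illustrative assumptions, not details given in the paper:

```python
import math

def cosine_dist(u, v):
    """Cosine distance between two sparse word vectors given as dicts."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return 1.0 - dot / (nu * nv)

def classify_pattern(vector, labeled_clusters, threshold=0.6):
    """Assign a new sentence pattern to the closest approved relation
    cluster (by nearest member); return None if nothing is close enough,
    so that unknown patterns can be queued for expert review."""
    best_label, best_d = None, threshold
    for label, members in labeled_clusters.items():
        d = min(cosine_dist(vector, m) for m in members)
        if d < best_d:
            best_label, best_d = label, d
    return best_label

# Hypothetical relation clusters approved by the experts via WiReD:
clusters = {
    "holdsRole": [{"wurde": 1.0, "erzbischof": 1.0}],
    "bornIn":    [{"geboren": 1.0, "in": 1.0}],
}
print(classify_pattern({"wurde": 1.0, "bischof": 1.0}, clusters))  # holdsRole
```

Returning None for distant patterns keeps the fully automated mode conservative: only patterns resembling an approved cluster enter the semantic network without review.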
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">RELATED WORK</head><p>This section highlights related work in the areas touched by the work described in the sections above. We concentrate on annotation tools rather than individual NER algorithms, since the tools mentioned all encompass different approaches to NER. Following that, ontology learning environments are discussed, with special regard to their use of relation discovery. Finally, algorithms from the discipline of relation discovery are discussed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1">Annotation tools</head><p>As explained in section 4, the rationale behind WALU is its usability by professionals of any domain, in particular those without computational or linguistic expertise. In this respect, WALU differs from existing tools for semantic annotation, e.g. GATE <ref type="bibr" target="#b6">[7]</ref>, WordFreak <ref type="bibr" target="#b11">[12]</ref>, MMAX <ref type="bibr" target="#b12">[13]</ref>, or PALinkA <ref type="bibr" target="#b14">[15]</ref>. These tools are primarily intended for users with a background in (computational) linguistics. Consequently, they are either tailored to different, more complex tasks than WALU (e.g. PALinkA for discourse annotation), or are designed as highly multifunctional tools (e.g. GATE, WordFreak, or MMAX). This multifunctionality allows their flexible application to specific and complex needs. The price of this flexibility, however, is that these tools require extensive configuration effort, which significantly affects their usability for non-experts in computational linguistics. In this respect, WALU complements the range of existing tools.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2">Ontology learning environments</head><p>As pointed out above, ontology learning environments are usually built as supporting tools for ontology engineers. Their task differs from the one tackled in this paper insofar as the ontology engineer already possesses the process knowledge necessary for building ontologies. The engineer usually has access to different domain experts and thus needs only marginal software support. Named entity recognition is sometimes employed to facilitate populating the ontology, whereas relation discovery is not used extensively, at least not to our knowledge.</p><p>Text-To-Onto <ref type="bibr" target="#b10">[11]</ref> contains a module that calculates association rules to provide the engineer with an overview of possible interrelations between concept classes, but this approach is not pursued further in the context of the application. Its successor, Text-2-Onto <ref type="bibr" target="#b4">[5]</ref>, employs a limited form of relation extraction insofar as it searches for hyponym relation patterns (e.g. "x is a kind of y") in order to find additional instances of concept classes in a corpus. Relation discovery is not employed there.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.3">Relation Discovery</head><p>Hasegawa et al. <ref type="bibr" target="#b7">[8]</ref> propose a system with an approach similar to the one presented here. They first perform NER on a text corpus and then collect entity pairs from within sentences. These pairs are grouped by composition, the corresponding sentences are transformed into word vectors, and a clustering step is performed on each of the groups. This yields a number of relation clusters per group. With some post-processing (weeding out clusters below a certain size), they report F-measures between 75% and 80% for selected clusters on one year of newspaper articles from The New York Times. In addition, they generate cluster labels by taking the most frequent words in each cluster. We believe that adding an association rule creation phase at the beginning helps in selecting interesting combinations of relation candidates, all the more so because we are not restricted to the detection of binary relations.</p><p>Other approaches exploit syntactic structures and perform part-of-speech analysis: Jiang et al. <ref type="bibr" target="#b9">[10]</ref> analyze sentence grammar trees, model candidate relations in RDF in order to capture their direction, and extract a set of generalized relations from the RDF. Navigli et al. <ref type="bibr" target="#b13">[14]</ref> present an approach to ontology learning that exploits synsets from WordNet in order to disambiguate meaning and to find relations that might hold between different entities in the sentences explaining the different synsets. These approaches, however, depend on deeper knowledge of the language of the text corpus. Approaches like Hasegawa's or ours rely only on statistics and the existence of annotated entities, and are thus language-agnostic.</p></div>
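The Hasegawa-style pipeline described above (context word vectors per entity pair, clustering, pruning of small clusters, frequency-based labels) can be sketched as follows. The greedy single-link clustering, the similarity threshold, and all names are our simplification for illustration, not the published algorithm:

```python
import math
from collections import Counter


def cosine(a, b):
    """Cosine similarity of two sparse word-count vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def cluster_contexts(pair_contexts, threshold=0.3, min_size=2):
    """Group entity-pair context vectors into labelled relation clusters.

    pair_contexts: dict mapping an entity pair (tuple) to the context words
                   collected from sentences mentioning both entities.
    Returns (label, members) tuples, where the label is the most frequent
    context word of the cluster, as in Hasegawa et al.'s labelling step.
    """
    vectors = {p: Counter(words) for p, words in pair_contexts.items()}
    clusters = []
    for p in vectors:
        # Greedy single-link assignment: join the first cluster containing
        # a sufficiently similar member, otherwise open a new cluster.
        for cl in clusters:
            if any(cosine(vectors[p], vectors[q]) >= threshold for q in cl):
                cl.append(p)
                break
        else:
            clusters.append([p])
    labelled = []
    for cl in clusters:
        if len(cl) < min_size:  # weed out clusters below a certain size
            continue
        merged = sum((vectors[p] for p in cl), Counter())
        labelled.append((merged.most_common(1)[0][0], cl))
    return labelled
```

Pairs whose contexts share vocabulary end up in one cluster labelled by the dominant context word; isolated pairs fall below `min_size` and are discarded, mirroring the post-processing reported in the paper.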
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">FUTURE WORK</head><p>Regarding NER, we will implement an interface to the Weka library <ref type="bibr" target="#b16">[17]</ref>, which comprises a number of machine learning algorithms. We will investigate combinations of different ML approaches, either sequential (the output of one classifier is used as input to another) or concurrent (several kinds of classifiers are run in parallel and a more or less sophisticated voting mechanism, which might itself involve a further ML approach, decides on the final classification). Furthermore, we plan to provide an interface to the UIMA framework 4 . This way, further facilities for learning and preprocessing (e.g. morphological or syntactic analysis, which can provide useful information for semantic annotation as well as relation discovery) will become available to our framework. Since units from the UIMA framework can be provided as web services, they can be added to complement the WIKINGER framework as needed.</p><p>Regarding relation discovery, we intend to apply our approach to other data sets, especially from the newspaper domain, in order to evaluate its performance on data covering a wide range of topics, and to enhance the algorithm with a stage that automatically extracts suitable labels for the relations and their members.</p><p>The WIKINGER framework itself will be developed further; we intend to use it as a base platform for a variety of future projects.</p></div>
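The simplest instance of the concurrent combination scheme mentioned above is plain majority voting over classifiers run in parallel. The voting rule and the tie-break are illustrative assumptions; the planned mechanism may itself be learned, as the text notes:

```python
from collections import Counter


def majority_vote(classifiers, token):
    """Combine several NE classifiers running in parallel by majority vote.

    classifiers: list of callables, each mapping a token to a class label.
    Ties fall back to the first classifier's answer (assumed to be the
    most reliable one; this tie-break is our illustrative choice).
    """
    votes = [clf(token) for clf in classifiers]
    ranked = Counter(votes).most_common()
    best_label, best_count = ranked[0]
    if sum(1 for _, c in ranked if c == best_count) > 1:
        return votes[0]  # tie: defer to the first classifier
    return best_label
```

A learned combiner would replace the counting step with a model trained on the classifiers' outputs, which is the "further ML approach" alluded to above.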
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">CONCLUSIONS</head><p>This paper described a new approach to semi-automatic knowledge capturing from large text corpora. The goal is to empower domain experts to create domain ontologies themselves, without depending on the availability of ontology engineers. This is achieved by automating the process to a high degree, employing named entity recognition (NER) and relation discovery. Domain experts are involved at those stages that require substantial knowledge of the domain in question. Two software tools aiding the domain experts in this task have been introduced: WALU, a workbench for example-based NER, and WiReD, a tool supporting the relation discovery process.</p><p>Evaluation results for the different algorithmic solutions have been presented, showing high F-measure values for the automatic knowledge capturing methods.</p><p>All of this is part of a web-service-based architecture, the WIKINGER framework. It is used to create semantically enhanced collaborative knowledge platforms for scientific communities. The pilot application is a semantic wiki for research on the contemporary history of German Catholicism.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="9.">ACKNOWLEDGMENTS</head><p>The work presented in this paper is being funded by the German Federal Ministry of Education and Research under research grant 01C5965. See http://wikinger-escience.de for further details regarding the project. The authors would like to thank Prof. Cremers from the University of Bonn and Prof. Hoeppner from the University of Duisburg-Essen for their helpful suggestions.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The WIKINGER Framework: Component View</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Workflow of the algorithm</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://maxent.sourceforge.net/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">http://svmlight.joachims.org/svm_struct.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">Multiword NEs are recognized as a sequence of tokens receiving the same class.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Fast algorithms for mining association rules</title>
		<author>
			<persName><forename type="first">R</forename><surname>Agrawal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Srikant</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 20th VLDB conference</title>
				<meeting>the 20th VLDB conference</meeting>
		<imprint>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="487" to="499" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Semiautomatic Creation of Semantic Networks</title>
		<author>
			<persName><forename type="first">L</forename><surname>Bröcker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Online-proceedings of PhD-symposium at ESWC 2007</title>
				<imprint>
			<date type="published" when="2007-06">June 2007</date>
		</imprint>
	</monogr>
	<note>no URL as of yet</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">WIKINGER -Wiki Next Generation Enhanced Repositories</title>
		<author>
			<persName><forename type="first">L</forename><surname>Bröcker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rössler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Wagner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Online Proceedings of the German E-Science Conference</title>
				<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m">Proceedings of the Seventh Message Understanding Conference</title>
				<editor>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Chinchor</surname></persName>
		</editor>
		<meeting>the Seventh Message Understanding Conference<address><addrLine>Fairfax, VA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Text-2-Onto</title>
		<author>
			<persName><forename type="first">P</forename><surname>Cimiano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Völker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of NLDB 2005</title>
				<meeting>NLDB 2005</meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="227" to="238" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m">Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA-2004)</title>
				<editor>
			<persName><forename type="first">N</forename><surname>Collier</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Ruch</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Nazarenko</surname></persName>
		</editor>
		<meeting>the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA-2004)<address><addrLine>Geneva, Switzerland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">GATE, a General Architecture for Text Engineering</title>
		<author>
			<persName><forename type="first">H</forename><surname>Cunningham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers and the Humanities</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="223" to="254" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Discovering Relations among Named Entities from Large Corpora</title>
		<author>
			<persName><forename type="first">T</forename><surname>Hasegawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sekine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Grishman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Annual Meeting of Association of Computational Linguistics</title>
				<meeting>the Annual Meeting of Association of Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="415" to="422" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Overview of BioCreAtIvE: critical assessment of information extraction for biology</title>
		<author>
			<persName><forename type="first">L</forename><surname>Hirschman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Yeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Blaschke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Valencia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">BMC Bioinformatics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
	<note>Supplement</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Mining Generalized Associations of Semantic Relations from Textual Web Content</title>
		<author>
			<persName><forename type="first">T</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="164" to="179" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">The Text-To-Onto Environment, chapter 7 in Alexander Maedche: Ontology Learning for the Semantic Web</title>
		<author>
			<persName><forename type="first">A</forename><surname>Maedche</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2002">2002</date>
			<publisher>Kluwer Academic Publishers</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">WordFreak: an open tool for linguistic annotation</title>
		<author>
			<persName><forename type="first">T</forename><surname>Morton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lacivita</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology</title>
				<meeting>the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology<address><addrLine>Edmonton, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">MMAX: A tool for the annotation of multi-modal corpora</title>
		<author>
			<persName><forename type="first">C</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Strube</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems</title>
				<meeting>the 2nd IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems<address><addrLine>Seattle, WA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Ontology learning and its application to automated terminology translation</title>
		<author>
			<persName><forename type="first">R</forename><surname>Navigli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Velardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gangemi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Intelligent Systems</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="22" to="31" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">PALinkA: A highly customisable tool for discourse annotation</title>
		<author>
			<persName><forename type="first">C</forename><surname>Orasan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourth SIGdial Workshop on Discourse and Dialogue</title>
				<meeting>the Fourth SIGdial Workshop on Discourse and Dialogue<address><addrLine>Sapporo, Japan</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">WALU -Eine Annotations-und Lern-Umgebung für semantisches Tagging</title>
		<author>
			<persName><forename type="first">A</forename><surname>Wagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rössler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Data Structures for Linguistic Resources and Applications</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Rehm</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Witt</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Lemnitzer</surname></persName>
		</editor>
		<meeting><address><addrLine>Tübingen</addrLine></address></meeting>
		<imprint>
			<publisher>Gunter Narr Verlag</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="263" to="271" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Data Mining: Practical machine learning tools and techniques</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">H</forename><surname>Witten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Frank</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2005">2005</date>
			<publisher>Morgan Kaufmann</publisher>
			<pubPlace>San Francisco</pubPlace>
		</imprint>
	</monogr>
	<note>2nd edition</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
