<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards Meta-Engineering for Semantic Wikis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jochen Reutelshoefer</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joachim Baumeister</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Frank Puppe</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Computer Science, University of Würzburg</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Building intelligent systems is a complex task. In many knowledge engineering projects the knowledge acquisition activities can benefit significantly from a tool that is tailored to the specific project setting with respect to domain, contributors, and goals. Specifying and building a new tool from scratch is ambitious, tedious, and delaying. In this paper we introduce a wiki-based meta-engineering approach that allows for a smooth beginning of the knowledge acquisition activities, going along with tool specification and tailored implementation. Meta-engineering proposes that in a wiki-based knowledge engineering project not only the content (the knowledge base) but also the tool itself should be developed in an evolutionary manner.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The development of knowledge-based systems still suffers from the knowledge
acquisition bottleneck, yielding high development costs with respect to the
knowledge acquisition efforts. Usually, a knowledge acquisition tool imposes several
constraints on the engineering process. The key features of such tools are predefined
user interfaces, the degree of formalization, and the way the knowledge
is organized. However, it often turns out that for a given project, considering its
contributors, domain, and goal, a more appropriate solution might be
imaginable but does not yet exist. Customizable and extensible (non-wiki-based) tools
for building knowledge-based systems are available today (e.g., Protégé [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]).
Another approach is building a customized knowledge acquisition tool from scratch.
However, both approaches do not allow for evolutionary, incremental specification
and implementation of the tool in parallel with the (beginning) knowledge
acquisition activities, but require specification and implementation in advance.
Specifying a knowledge engineering tool in advance bears another
challenge: at the beginning of the cooperation between knowledge engineers and domain
experts, a sound specification is ambitious and risky, as the knowledge engineers
are initially not familiar enough with the domain and the contributors (experience level,
number, availability) to define the best possible knowledge
formalization method. We propose an incremental and agile approach that allows for a
smooth but immediate startup of knowledge formation and reduces the
constraints and entry barriers to a minimum. The (semantic) wiki-based approach
initially demands from the contributors only the capability of mastering
the basic wiki workflow, which is browsing and modifying wiki pages. Its only
constraint is that the knowledge can be entered or imported as wiki content
and partitioned/organized into wiki pages. We argue that this method retains
a high level of flexibility, being able to support a large number of requirements.
However, the gap between a wiki filled with domain knowledge and an executable
knowledge-based system using a formal knowledge representation is still large.
In this paper, we discuss how this gap (emerging on the conceptual and on the technical
level) can be bridged in an agile and incremental manner, with reduced delay of
the knowledge acquisition phase and at moderate (software) engineering costs.
The knowledge engineering tasks we focus on are the development of
decision-support systems, where solutions are proposed based on (formal) problem
descriptions entered in sessions. We demonstrate the meta-engineering process for this
task by sketching its implementation in several case studies.
      </p>
      <p>The rest of the paper is organized as follows: In Section 2 the meta-engineering
process is explained in more detail, considering the conceptual and the technical
level. Further, we discuss in Section 3 how the semantic wiki KnowWE supports
the meta-engineering idea on the technical level. Demonstrating the applicability
of the proposed approach, we report on experiences made in different projects
in Section 4. We conclude the paper with a summary and directions
for future work.</p>
    </sec>
    <sec id="sec-2">
      <title>The Meta-Engineering Process</title>
      <p>The meta-engineering process proposes to model a knowledge acquisition
process that is free from any knowledge engineering tool or knowledge
representation at the beginning. The initial phase tries to envision the optimal knowledge
formalization environment for the project without regarding any technical
constraints. The result is then developed towards a project-tailored specification of
a knowledge engineering tool. We argue that a wiki poses very low constraints,
only demanding that the knowledge can be defined in some textual form, and
because of this forms a suitable basis for this approach. Thus, the question in the
initial phase is how the knowledge can be entered in a wiki in a formalizable
way, regarding domain, contributors, startup knowledge, and goal. We call the
result of this process the wiki-based formalization architecture, optimizing
criteria such as understandability, maintainability, and acquisition efficiency, yet
disregarding any technical constraints. Figure 1 shows the cooperative phases of
the meta-engineering process.</p>
      <p>At first, in the experimental phase a small toy prototype is implemented
using formalization methods (markup, reasoner) already provided by the tool.
Even though the available markups may not be optimal for this task, this phase
gives the domain specialists an impression of wiki-based knowledge acquisition.
Then, in the design phase, possible markups and partitionings for the
knowledge are developed in an iterative manner, forming the wiki-based formalization
architecture. Small parts of the domain knowledge are entered in the wiki
using the currently specified architecture. Although the tool cannot compile and
process the knowledge at this point, discussing these prototypical knowledge
artifacts can catalyze the exchange of expertise. Knowledge engineers obtain an
impression of the knowledge that needs to be formalized, and domain specialists
experience the general idea of wiki-based knowledge formalization. This allows
for a better estimation of the formalization architecture criteria
understandability, maintainability, and acquisition efficiency. That way, the formalization
architecture can be revised and refined iteratively in joint discussion sessions.
Due to the flexibility of the wiki approach these phases can easily overlap,
resulting in an agile specification and development process of the tool. The process
finally leads to the main phase when design and implementation activities are
finished, featuring a thoroughly tailored knowledge engineering environment.</p>
      <sec id="sec-2-1">
        <title>Conceptual Level: Bridging the Gap of Expertise</title>
        <p>As already stated in the introduction, it is often difficult to completely specify the
most appropriate acquisition method and tool at project startup. Often, either
the knowledge engineer or the domain specialist is not familiar enough with the
respective other discipline at the beginning. The wiki-based meta-engineering
approach tries to overcome this gap of expertise by the two cooperative phases of
experimentation and design. The wiki-based formalization architecture, forming
the result of these phases, covers the following aspects:</p>
      </sec>
      <sec id="sec-2-2">
        <title>1. Identification and representation of important domain concepts:</title>
        <p>This task defines how the domain concepts are represented in the wiki.
Informal "support knowledge" (e.g., textual descriptions, images, links) is added
to the concepts, describing each concept, defining a common grounding of
its meaning, and documenting its role in the formal model of the domain.
Where possible, the support knowledge is streamed into the wiki by reusing
legacy documents of the project context.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2. The distribution of the formal knowledge:</title>
        <p>
          The knowledge formalization architecture defines how the formal knowledge is organized in the wiki.
The derivation knowledge typically connects the input concepts (findings of
the problem description) with the requested output concepts. In general,
the derivation knowledge is distributed according to a domain-dependent
partitioning and attached to the wiki pages of the most related concepts.
However, there is not yet a canonical recipe for selecting or creating the optimal
knowledge distribution for a given project in one step.
3. Definition of the markup: When defining the appropriate markup, general
design principles for domain-specific languages (DSLs) should be adhered to.
Spinellis [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] emphasizes, for example, including the domain experts closely
in the design process. Karsai et al. [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] report on design guidelines such as
the reuse of existing language definitions and type systems or limiting
the number of language elements. In the context of wikis, the intuitive
and seamless integration with the informal support knowledge also has to be
considered. Hence, the documents are not exclusively created using the
DSL, which in this case only forms the markup for fragments of the
overall wiki content. Knowledge markups in wikis can be designed for use
at different (syntactical) granularities: for example, large sections like
table- or list-based markups, or small relation atoms that can be distributed as
single items within the (informal) page content, are possible [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. The first
allows for a more comprehensive view of the formal knowledge fragments
in an aggregated form. The latter generally allows for a better integration
with the informal knowledge, each relation being injected at the most suitable
location in the text. In this case, additional views should be generated from
the spread relations to provide concise overviews, for example by using inline
query mechanisms.
        </p>
        <p>In cooperative sessions the knowledge engineers, together with the domain
specialists, try out different possibilities for the described aspects. For each idea
some demo wiki pages covering a very small part of the domain are created,
disregarding that the knowledge is not (yet) processed by the wiki.</p>
        <p>When this iterative design process has led to a promising result, the
implementation phase of the meta-engineering process begins, aiming at modifying
the tool so that it can parse and compile the knowledge according to the
designed formalization architecture. At this point, the already inserted knowledge
of the demo pages can, on the one hand, be kept, forming the first part of the
knowledge base, and, on the other hand, serves as a specification for the markup.</p>
      </sec>
      <sec id="sec-2-4">
        <title>Technical Level: Bridging the Gap of Tool Support</title>
        <p>The design phase on the conceptual level identifies a specification of an
appropriate (wiki-based) knowledge formalization architecture, for which in most cases no
tool support exists at that point. The gap between a standard wiki, or even a
standard semantic wiki, and the envisioned tool is in most cases still large, for
example if production rules entered via a custom markup should be
supported. However, the general process of parsing and compiling the
knowledge in a wiki-based workflow is always similar. Figure 2 shows an outline of the
wiki-based knowledge formalization process chain from the knowledge source
(domain specialists or legacy documents) on the left to the productive running
system on the right. There are four main steps involved, which are discussed in
the following.
1. A wiki to create textual knowledge: This is the essential part of a
wiki application. The wiki interface is used to create textual documents. In
general, any standard wiki engine can be used to accomplish this task.
2. Parsers to create a pre-parsed representation of the textual
knowledge: To create formalized knowledge from text documents, markup needs
to be defined. For the specific markup the corresponding parser components
are integrated into the system. They create a (concrete) syntax tree of the
wiki pages (also containing large nodes of free text). This parse tree contains the
structure of the formal relations, i.e., references to formal concepts and their
relations.
3. Compilation scripts to create executable knowledge: The compile
scripts transform the pre-parsed representation of the markup into an
executable knowledge format that can be interpreted by the reasoners. The
compile scripts need to be defined with respect to the markup, its syntax
tree representation, and the target data structure defined by the intended
reasoning engine.</p>
      </sec>
      <sec id="sec-2-5">
        <title>4. Reasoners to test the knowledge base:</title>
        <p>Any reasoner that can solve the intended reasoning task of the application can be integrated. For the
evolutionary development, testing of the knowledge base is necessary. Hence,
components for the execution of the reasoner with the knowledge base need
to be provided.</p>
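        <p>The four steps of the formalization chain can be illustrated with a minimal, self-contained sketch. All class, method, and markup names here are illustrative assumptions, not the actual KnowWE API; the "markup" is a toy IF/THEN pattern and the "reasoner" a simple lookup.</p>

```java
import java.util.*;
import java.util.regex.*;

// Minimal sketch of the four-step formalization chain (illustrative names,
// not the actual KnowWE API): wiki text -> parse result -> compiled rules -> reasoner.
public class FormalizationChain {

    // Step 2: a trivial parser that extracts "IF finding THEN solution" fragments
    // from otherwise free wiki text, leaving the surrounding prose untouched.
    static List<String[]> parse(String wikiText) {
        List<String[]> relations = new ArrayList<>();
        Matcher m = Pattern.compile("IF (\\w+) THEN (\\w+)").matcher(wikiText);
        while (m.find()) {
            relations.add(new String[]{m.group(1), m.group(2)});
        }
        return relations;
    }

    // Step 3: compile the pre-parsed relations into an executable form,
    // here a simple finding -> solution lookup table.
    static Map<String, String> compile(List<String[]> relations) {
        Map<String, String> rules = new HashMap<>();
        for (String[] r : relations) rules.put(r[0], r[1]);
        return rules;
    }

    // Step 4: a toy "reasoner" that derives solutions for entered findings.
    static Set<String> derive(Map<String, String> rules, Set<String> findings) {
        Set<String> solutions = new TreeSet<>();
        for (String f : findings)
            if (rules.containsKey(f)) solutions.add(rules.get(f));
        return solutions;
    }

    public static void main(String[] args) {
        String page = "Some informal text. IF fever THEN flu More prose. IF cough THEN cold";
        Map<String, String> rules = compile(parse(page));
        System.out.println(derive(rules, Set.of("fever", "cough"))); // [cold, flu]
    }
}
```

        <p>In a real deployment, each of these stages is an exchangeable component: the parser is driven by a declarative schema and the compile step targets whichever reasoning engine the project selects.</p>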
        <p>In general, the steps of parsing (2) and compilation (3) could be considered
as one step in the process chain. However, separating them into two steps by
the use of some structured text representation has important advantages:
back-links from the formal knowledge artifacts to the corresponding original text
entities become possible. This allows identifying, for each formal relation, the
exact location in the text it was generated from. One can make use of this
for the implementation of important tasks:
- Explanation: Explanation components can be created that present the text slices
responsible for the current reasoning result.
- Validation: For many knowledge representations and reasoners, validation
methods exist that detect deficiencies like redundant or inconsistent
knowledge.</p>
        <p>Without the back-links, the text location of the corresponding knowledge
artifacts cannot be identified to correct or tidy the wiki content, which is the source
of the compilation process. In general, these two techniques, explanation and
validation, are truly necessary to build up large, well-formed knowledge bases
using an agile methodology. Further, algorithms for refactoring of the knowledge
heavily benefit from a pre-parsed text representation, and when exchanging the
target reasoning engine only the compile scripts need to be modified while the
parsing components remain untouched.</p>
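        <p>The back-link idea can be sketched as follows; the class and method names are invented for illustration. Each compiled relation records the page and character span it was generated from, so an explanation or validation component can later recover the exact text slice.</p>

```java
import java.util.*;

// Sketch of back-links from compiled knowledge to source text (hypothetical
// names, not the real KnowWE API): each relation records the character span
// it came from, so explanation/validation components can point back into the page.
public class BackLinks {

    record SourceSpan(String pageName, int start, int end) {}

    static final Map<String, SourceSpan> backLinks = new HashMap<>();

    // During parsing, register where a relation was found.
    static void register(String relationId, String page, int start, int end) {
        backLinks.put(relationId, new SourceSpan(page, start, end));
    }

    // Later, e.g. when a validator flags a redundant relation, recover the
    // original text slice to present (or fix) in the wiki.
    static String explain(String relationId, Map<String, String> pages) {
        SourceSpan s = backLinks.get(relationId);
        return s.pageName() + ": \"" + pages.get(s.pageName()).substring(s.start(), s.end()) + "\"";
    }

    public static void main(String[] args) {
        Map<String, String> pages = Map.of("Fever", "Intro text. IF fever THEN flu. More text.");
        int start = pages.get("Fever").indexOf("IF");
        register("rel-1", "Fever", start, start + "IF fever THEN flu".length());
        System.out.println(explain("rel-1", pages)); // Fever: "IF fever THEN flu"
    }
}
```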
        <p>To help bridge the technical gap between the designed formalization
architecture and some existing tool, we propose the design of a framework. It
needs to allow for the easy integration of missing elements and to provide a
library of reusable components to fill the gaps in the formalization chain.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Meta-Engineering with KnowWE</title>
      <p>
        KnowWE [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] has been designed to support the implementation of tailored
formalization architectures in a process chain as sketched in Section 2. It provides a library
of components that can be reused in the contexts of different projects and allows
for the definition and integration of custom components at low implementation
costs. KnowWE connects several free and open source software components to
provide the knowledge formalization capabilities sketched in Figure 2. As the basic
wiki engine, JSPWiki1 is used. We integrated the reasoning engines OWLIM2 for
RDF reasoning and d3web3 for diagnostic reasoning. To extend the system with
additional components (e.g., parsers, compile scripts, renderers, ...), we provide
a flexible plugin mechanism based on JPF (Java Plugin Framework4). Besides
the interconnection of these components, forming a semantic wiki with
problem-solving capabilities, the major technical contribution of KnowWE is the generic
typed data structure for the pre-parsed representation of the textual content
shown in Figure 2, called KDOM (Knowledge Document Object Model). The
wiki documents are parsed according to the extensible KDOM schema, where all
allowed textual entities are defined in a declarative way. A detailed explanation
of the parsing algorithm creating the parse tree of a document using a specified
KDOM schema can be found in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <sec id="sec-3-1">
        <title>Parsing</title>
        <p>The KnowWE core library contains components to configure a project-specific
KDOM schema. While parser components for commonly useful syntactical
structures such as tables, bullet lists, dash-trees, or XML are provided by the system,
for domain-specific languages own parser components need to be defined. Figure 3
shows different applications of the dash-tree markup as decision tree, concept
hierarchy, and property hierarchy.</p>
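        <p>The reuse of one dash-tree parser for different tasks can be sketched as follows; the names and the toy markup are assumptions, not the real KDOM API. The tree structure is parsed generically, and only the handling of the parse result differs per application, e.g. compiling the same tree into a concept hierarchy.</p>

```java
import java.util.*;

// Illustrative sketch (invented names, not the real KDOM API): one generic
// dash-tree parser, reused by different "compile scripts" for different tasks.
public class DashTree {

    // Parse lines like "- child" / "-- grandchild" into (parent, child) pairs.
    static List<String[]> parse(String text) {
        List<String[]> edges = new ArrayList<>();
        Deque<String> stack = new ArrayDeque<>();
        for (String line : text.split("\n")) {
            int depth = 0;
            while (depth < line.length() && line.charAt(depth) == '-') depth++;
            String label = line.substring(depth).trim();
            while (stack.size() > depth) stack.pop();       // climb back up the tree
            if (!stack.isEmpty()) edges.add(new String[]{stack.peek(), label});
            stack.push(label);
        }
        return edges;
    }

    // One possible compile script over the parse result: the dash-tree
    // becomes a concept (subclass) hierarchy.
    static String asConceptHierarchy(List<String[]> edges) {
        StringBuilder sb = new StringBuilder();
        for (String[] e : edges) sb.append(e[1]).append(" subClassOf ").append(e[0]).append("; ");
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        String tree = "Animal\n- Dog\n- Cat";
        System.out.println(asConceptHierarchy(parse(tree)));
        // Dog subClassOf Animal; Cat subClassOf Animal;
    }
}
```

        <p>Interpreting the same parsed structure as a decision tree or property hierarchy would only require swapping the compile script, which mirrors the "modify only the leaf of the schema" idea described above.</p>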
        <sec id="sec-3-1-1">
          <title>1 http://www.jspwiki.org</title>
        </sec>
        <sec id="sec-3-1-2">
          <title>2 http://www.ontotext.com/owlim/</title>
        </sec>
        <sec id="sec-3-1-3">
          <title>3 http://www.d3web.de</title>
        </sec>
        <sec id="sec-3-1-4">
          <title>4 http://jpf.sourceforge.net/</title>
          <p>Customized reuse of predefined markup is possible by small modifications
in the dash-tree KDOM schema components. Figure 4 shows the basic KDOM
schema of the dash-tree markup. The blue nodes represent KDOM types from the
KnowWE core library, provided with the corresponding parsing component (i.e.,
regular expressions). Only small modifications at the dash-tree leaf of the schema
are necessary to enable specific parsing and compilation tasks using dash-trees.</p>
          <p>Recently, a bridge to the UIMA Framework5, an open source
framework for unstructured/semi-structured information processing, was integrated.
Currently, experiments on extracting formal knowledge from legacy documents,
using several of the large number of available UIMA analysis engines, are run.
The configuration of tailored information extraction components as a
meta-engineering task, as an alternative to the development of markup, aims at
semi-automation of the knowledge acquisition process.</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>Compilation and Reasoning</title>
        <p>Compile scripts can be attached to the KDOM schema, being executed
automatically after the corresponding subtree has been instantiated by the parsing
process of a page. They walk the KDOM parse tree and instantiate the
executable knowledge in the corresponding repository, depending on the targeted
reasoning engine. By the use of the unique IDs of the KDOM nodes,
the back-links described in Section 2 can be created.</p>
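        <p>The attachment of compile scripts to node types can be sketched roughly as follows; all names are hypothetical stand-ins for the mechanism described above, not the actual KnowWE classes. A script is registered per node type and runs over the instantiated parse nodes, filling the target repository keyed by the unique node IDs.</p>

```java
import java.util.*;
import java.util.function.*;

// Sketch (hypothetical names) of compile scripts attached to node types:
// after a page is parsed, the script registered for each node type runs and
// populates the target repository, keyed by the node's unique ID.
public class CompileScripts {

    record Node(String id, String type, String text) {}

    static final Map<String, BiConsumer<Node, Map<String, String>>> scripts = new HashMap<>();

    static void attach(String nodeType, BiConsumer<Node, Map<String, String>> script) {
        scripts.put(nodeType, script);
    }

    // Walk the parsed nodes and run the script registered for each type;
    // nodes without a script (e.g. free text) are simply skipped.
    static Map<String, String> compile(List<Node> parseTree) {
        Map<String, String> repository = new LinkedHashMap<>();
        for (Node n : parseTree) {
            BiConsumer<Node, Map<String, String>> s = scripts.get(n.type());
            if (s != null) s.accept(n, repository);
        }
        return repository;
    }

    public static void main(String[] args) {
        attach("rule", (node, repo) -> repo.put(node.id(), "compiled(" + node.text() + ")"));
        List<Node> tree = List.of(new Node("n1", "text", "free text"),
                                  new Node("n2", "rule", "IF a THEN b"));
        System.out.println(compile(tree)); // {n2=compiled(IF a THEN b)}
    }
}
```

        <p>Because the repository entries keep the node IDs, exchanging the target reasoner means replacing only the registered scripts, as argued in Section 2.</p>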
        <sec id="sec-3-2-1">
          <title>5 http://uima-framework.sourceforge.net/</title>
          <p>
            A general include mechanism, which is independent of markup or target
representation, allows for specific compiling tasks. By using the include mechanism,
any composition of knowledge fragments from various pages can be assembled
on one wiki page for testing, generation of views, creation of variants, or export
of knowledge bases. The KnowWE core currently integrates (Swift-)OWLIM for
RDF reasoning. Further, we integrated the d3web engine for diagnostic
problem-solving together with basic markups [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ] for knowledge formalization in d3web.
The integration of the open source rule engine Drools6 is currently in progress.
For many applications these reasoners should provide sufficient possibilities to
build at least a small prototypical executable knowledge base for demonstration,
supporting the initial cooperative design phase. The potential later integration
of an optimized reasoning engine (e.g., one that scales better on the specific
reasoning task) only demands modifying the compile scripts and providing an endpoint
for the execution of the engine for testing purposes.
          </p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>Refactoring</title>
        <p>Meta-engineering proposes an evolutionary approach with respect to the
(wiki-based) knowledge formalization architecture, implying experimentation with
different partitionings of the knowledge over the wiki pages and different markups.
Therefore, for transferring content from one page to another or transforming one
markup into another, automated support is crucial to prevent repetitions in the
knowledge acquisition activities. To allow for efficient scripting of these
transformations, we integrated a Groovy7 endpoint into the wiki, accessible only to
project admins. It allows creating and executing transformation scripts on the
wiki content.
6 http://labs.jboss.com/drools
7 http://groovy.codehaus.org
The Groovy scripts make use of a predefined API to access, scan,
and modify the KDOM data structure. Figure 5 shows a refactoring script
embedded in the wiki. At the top of the wiki page a GUI widget for the execution
of the refactoring script is shown. Underneath, a Groovy script for renaming an
object is located. As these refactoring script operations are complex and
dangerous, they should be performed in offline mode (i.e., blocking wiki access for
the general users at execution time).
We are currently evaluating the meta-engineering approach within several
industrial and academic projects using the KnowWE system. They address
a wide range of different domains like biology, chemistry, medicine, history, and
technical devices. In some projects, the customizations only make small changes
to the existing features, while in others large components are created. In the
following, we briefly introduce the projects and explain the employed
meta-engineering methods.</p>
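        <p>A rename refactoring in the spirit of the Groovy scripts described here could look as follows. The API is invented for illustration (and shown in Java rather than Groovy); the actual scripts operate on the KDOM data structure instead of raw strings.</p>

```java
import java.util.*;

// Sketch of a rename refactoring over wiki pages, in the spirit of the
// transformation scripts described above (API invented for illustration):
// every occurrence of an object name is replaced across all pages in one pass.
public class RenameRefactoring {

    static Map<String, String> rename(Map<String, String> pages, String oldName, String newName) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : pages.entrySet()) {
            // A real implementation would operate on the parse tree so that only
            // genuine object references are renamed, not arbitrary text matches.
            result.put(e.getKey(), e.getValue().replace(oldName, newName));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> pages = new LinkedHashMap<>();
        pages.put("Fever", "IF fever THEN flu");
        System.out.println(rename(pages, "flu", "influenza").get("Fever"));
        // IF fever THEN influenza
    }
}
```

        <p>Operating on the parse tree rather than plain text is exactly why such scripts benefit from the pre-parsed KDOM representation, and why running them offline is advisable: a partially applied rename would leave the compiled knowledge inconsistent with the wiki text.</p>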
      </sec>
      <sec id="sec-3-4">
        <title>Clinical Process Knowledge with CliWE</title>
        <p>In another recent project, KnowWE is extended by diagnostic workflow
knowledge in the context of the CliWE project8. This project considers the
development of a medical diagnosis system for a closed-loop device. Documents
describing the domain knowledge already exist, and these are imported into the wiki as
textual and tabular information about the domain. The particular wiki articles
focus on special aspects of the diagnosis task, for example the assessment
of the current patient's state. At predefined milestones the knowledge in the wiki
is exported into a single knowledge base in order to evaluate the development
on real-time devices.</p>
        <p>
          In general, the core knowledge formalization methods from the d3web plugin
are used. However, to implement closed-loop systems, the need for additional
knowledge representation and reasoning support was identified at an early stage
of the project. Thus, a flowchart-like visual knowledge representation has been
designed. The flowcharts can be edited in the wiki using the integrated visual
flowchart editor DiaFlux. To allow for modularization, the flowcharts can be
organized hierarchically. A (sub-)flowchart can be included into another flowchart
as one (box) component by defining and connecting the input/output interface.
Due to this hierarchical organization, partitioning the different aspects of the
domain knowledge over the wiki is possible. A first prototype of this extension
is reported in Hatko et al. [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
      </sec>
      <sec id="sec-3-5">
        <title>Fault Diagnosis for Special Purpose Vehicles</title>
        <p>Another project considers the development of a diagnostic system for special
purpose vehicles. The goal is to reduce the repair time and costs of the vehicles
by determining the faulty element by a cost-minimal sequence of (potentially
tedious) examinations. The system is built based on existing construction plans
and heuristic knowledge of experienced mechanics. After an analysis phase the
wiki formalization architecture has been defined. It contains one structural model
of the vehicle, one state model of the current diagnosis session, and, for each
technical subcomponent, fault causes and malfunctions. For each of these knowledge
base components own markup has been defined, allowing logical distribution
of the knowledge base over different wiki pages and seamless integration with
support knowledge, such as technical documents and construction plans. The
knowledge will be compiled into set-covering knowledge also containing the cost
values for any examination at some given state, for a sophisticated interview
strategy calculated by an additional problem-solver. This wiki-based
formalization architecture has been defined after an initial phase in which one Excel-based
approach and one wiki-based approach using existing markup were
evaluated. Including an initial phase with iterative cooperative sessions and finally the
technical implementation of the defined formalization architecture, the project
shows a successful application of the meta-engineering approach.</p>
        <sec id="sec-3-5-1">
          <title>8 CliWE (Clinical Wiki Environment) is funded by Drägerwerk, Germany and runs</title>
          <p>from 2009-2012.
4.3
The WISEC (Wiki for Identi ed Substances of Ecological Concern) project9
investigates the management and detection of substances with respect to its
bio-chemical characteristics. Here, substances of very high concern (SVHC)
under environmental protection considerations are investigated and managed using
the multi-modal approach of a Semantic Wiki: The knowledge about each
substance is represented by an wiki article containing informal descriptions of the
substance and its relations to external sources (via semantic annotations). The
overall knowledge base also integrates already known lists of critical substances
and explicit domain knowledge of a specialists combining the particular
characteristics of the criticality of substances.</p>
          <p>Tailored markups were created to capture the relation of the substances to
already known critical substance lists. Thus, a list of critical substances in the
wiki is still human-readable, but is also automatically compiled as a collection
of ontological properties relating the substance concepts with the list concept.
Furthermore, special properties (such as different toxic contributions) are also
parsed as formal properties of the list concepts. Due to the explicit representation
of the relational knowledge in OWL, different properties of substances can be
queried over the wiki knowledge base.</p>
        </sec>
      </sec>
      <sec id="sec-3-6">
        <title>Medical Decision-Support with CareMate</title>
        <p>The CareMate system is a consultation system for medical rescue missions, used when
the problem definition of a particular rescue service is complex and a second
opinion becomes important. The major goals of the project were the rated derivation
of suitable solutions and the implementation of an efficient interview technique
for busy rescue service staff in the emergency car. Thus, the user can be guided
through an interview focusing on relevant questions of the current problem.
With more questions answered, the current ranking of possible solutions improves
in relevance, and the interview strategy targets the presentation of reasonable
follow-up questions.</p>
        <p>
          For the CareMate project, the core entities of the formalization architecture
are the cardinal symptoms, i.e., coarse findings vaguely describing the problem
of the currently examined patient. The organization according to the cardinal
symptoms is motivated by the observation that, in practice, the emergency staff
also tries to divide the problem by first identifying the cardinal symptom.
Subsequently, the applicable domain knowledge can easily be partitioned with respect
to the cardinal symptoms. The domain specialist provided the domain knowledge
(interview strategy and solution derivation/rating) for each cardinal symptom in
the form of MS Visio diagrams. Each cardinal symptom is represented by a distinct
wiki article, and the corresponding derivation knowledge is defined using the
knowledge formalization pattern heuristic decision tree [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. In Figure 6 the wiki
article of the cardinal symptom stomach pain ("Bauchschmerzen") is shown.
        </p>
        <sec id="sec-3-6-1">
          <title>9 in cooperation with the Federal Environment Agency (UBA), Germany</title>
          <p>Here, the wiki text describes that the decision tree logic was divided into two
decision trees handling the diagnosis of stomach pain for women and for men
separately. For both decision trees an image is shown (it can be enlarged on click)
that gives an overview of the general structure of the questionnaire and the
inference. The lower part of the browser window also shows an excerpt of the
formalized knowledge base, where first the sex ("Geschlecht") of the patient is
asked.</p>
          <p>The CareMate system is commercially sold by the company Digitalys10 as
part of an equipment kit for medical rescue trucks.</p>
        </sec>
      </sec>
      <sec id="sec-3-7">
        <title>Biodiversity with BIOLOG</title>
        <p>The BIOLOG Europe project11 aims at integrating socio-economic and
landscape ecological research to study the effects of environmental change on
managed ecosystems. To make the results of the research accessible to domain
specialists as well as to diverse people with a different background, they decided
to build a knowledge system (decision-support system). BIOLOG Wissen
(BIOLOG Knowledge) is based on KnowWE and serves as a web-based application
for the collaborative construction and use of the decision-support system in the
domain of landscape diversity. It aims to integrate knowledge on causal
dependencies of stakeholders, relevant statistical data, and multimedia content. In
addition to the core formalization methods of the d3web-plugin, we introduced
some domain-specific features: one major challenge for the researchers in this
domain is to find related work about similar studies. For this reason the research
community (of ecology) has defined an extensive XML schema for the
description of meta-data about ecological work, called EML (Ecological Meta-Data
Language12). We defined a sub-language of EML which is suited to support capturing
the relevant meta-data in this project. The research results and examinations of
the different BIOLOG sub-projects are provided in EML and are entered into
the wiki. Then, the EML data sets are visualized and can be accessed through
a (semantic) search interface. Figure 7 shows the BIOLOG wiki depicting (part
of) the visualization of an EML data set that describes work about perception
and appreciation of biodiversity ("Wahrnehmung und Wertschätzung
Biodiversität").
10 http://www.digitalys.de
11 www.biolog-europe.org</p>
        <p>As the domain specialists are used to modeling concept hierarchies in the
mind-mapping tool FreeMap, we integrated support for the FreeMap XML format in
the wiki. Thus, a hierarchy can be created or modified externally with FreeMap
and then pasted into the wiki to be translated into OWL concepts.
12 http://knb.ecoinformatics.org/software/eml/eml-2.0.1/index.html</p>
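        <p>The translation from a mind map to OWL concepts can be sketched as follows. The snippet assumes a FreeMind-style XML format with nested node elements carrying TEXT attributes, and emits subclass triples in Turtle-like notation; the actual KnowWE plugin and its OWL vocabulary are not reproduced here.</p>

```python
# Sketch: translate a FreeMap/FreeMind-style mind map into OWL subclass
# statements. Nested <node TEXT="..."> elements are an assumed format.
import xml.etree.ElementTree as ET

def mindmap_to_owl(map_xml: str) -> list:
    """Emit one rdfs:subClassOf triple per parent/child pair in the map."""
    root = ET.fromstring(map_xml)
    triples = []

    def walk(node, parent_name):
        name = node.get("TEXT", "").replace(" ", "_")
        if parent_name:
            triples.append(f":{name} rdfs:subClassOf :{parent_name} .")
        for child in node.findall("node"):
            walk(child, name)

    for top in root.findall("node"):
        walk(top, None)
    return triples

example_map = """
<map>
  <node TEXT="Ecosystem">
    <node TEXT="Grassland"/>
    <node TEXT="Forest"><node TEXT="Beech Forest"/></node>
  </node>
</map>
"""

for triple in mindmap_to_owl(example_map):
    print(triple)
```

        <p>Pasting such a map into the wiki would thus yield, e.g., :Beech_Forest rdfs:subClassOf :Forest, making the externally edited hierarchy queryable in the triple store.</p>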
        <p>
          To manage the publications of the project efficiently, we integrated support
for BibTeX data. The wiki serves as a bibliography database for the publications
that have been created within the project scope.
        </p>
      </sec>
      <sec id="sec-3-8">
        <title>Ancient Greek History with the HermesWiki</title>
        <p>
          The HermesWiki [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] is a semantic wiki in the domain of Ancient Greek History.
It is developed in cooperation with the Department of Ancient Greek History of the
University of Würzburg, Germany. Even though the HermesWiki does not
develop a decision-support system, the meta-engineering approach has been applied
successfully on both levels, conceptual and technical.
        </p>
        <list list-type="simple">
          <list-item>
            <p><bold>Conceptual Level</bold> As the project used a regular
(non-semantic) wiki at the beginning, the knowledge acquisition process was at first free
from any constraints imposed by knowledge formalization. After it became clear how
the domain experts structured the content in the wiki in a natural way, we
began to integrate formalization mechanisms tailored to the content already
given in the wiki and to the workflow of the contributors. For example, we
discovered that multiple (related) time events were often described on one
page, and defined a markup that allows formalizing a text paragraph as a time
event by adding a title, a time stamp, and references to (historical) sources.</p>
          </list-item>
          <list-item>
            <p><bold>Technical Level</bold> The HermesWiki has been implemented as a plugin for
KnowWE, reusing as many of the provided core components as possible.
While some standard markups (e.g., annotating pages as instances of
some class) could be reused, others had to be added (e.g., for time events or
locations). Further, the dash-tree markup could be reused in different ways
to define hierarchical structures. While the dash-tree parser from the core
is used to parse the tree structure, the plugin only needs to specify how
to process the (dash-tree) nodes during the compile process with respect to
their parent or child nodes.</p>
          </list-item>
        </list>
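        <p>The division of labor described above, where a core parser recovers the tree and a plugin only interprets the parent/child pairs, can be sketched as follows. The markup details are simplified assumptions, not the actual KnowWE dash-tree syntax.</p>

```python
# Sketch: a core dash-tree parser that yields (node, parent) pairs from
# dash-indented lines; a plugin then decides what each pair means (e.g.,
# subclass links, part-of relations, or time-event grouping).
def parse_dash_tree(text: str):
    """Return (node, parent) pairs; the dash count encodes the depth."""
    stack = []   # stack[d] holds the last node seen at depth d
    pairs = []
    for line in text.strip().splitlines():
        depth = len(line) - len(line.lstrip("-"))
        name = line.lstrip("- ").strip()
        stack = stack[:depth] + [name]
        parent = stack[depth - 1] if depth > 0 else None
        pairs.append((name, parent))
    return pairs

markup = """\
Locations
- Greece
-- Athens
-- Sparta
- Persia
"""

for node, parent in parse_dash_tree(markup):
    print(node, "<-", parent)
```

        <p>With this split, a plugin never re-parses the markup; it only maps the delivered pairs onto its own semantics during the compile process.</p>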
        <p>
          Further details about the formalization methods of the HermesWiki can be
found in Reutelshoefer et al. [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. That work also describes the use cases for the formalized
knowledge in the context of e-learning.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>
        In this paper we introduced the idea of meta-engineering as an alternative
to using 'out of the box' systems and to building new ones from scratch.
We motivated how the meta-engineering approach can help to bridge the two
worlds of knowledge engineers and domain specialists and catalyzes the creation
of a project-tailored knowledge acquisition tool. Semantic Wikis are an
appropriate technical platform to implement that approach, as a wiki provides a flexible
basis and is customizable to a wide range of knowledge acquisition scenarios.
They further allow for initial design phases where specification and knowledge
acquisition can run in parallel. We discussed how the Semantic Wiki KnowWE
supports the meta-engineering idea and reported on several projects from
different domains where the method proved to be helpful for the overall
knowledge acquisition efforts. In principle, the introduced meta-engineering approach
can also be applied with other Semantic Wiki systems that are designed with
component-based extension mechanisms, such as
Semantic MediaWiki [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and KiWi [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. However, to our knowledge no other system provides
components for explicitly building intelligent decision-support systems. We still
need to gather more experience on how to determine the most appropriate
wiki-based formalization architecture for a given project. Further, we will
improve the technical infrastructure of KnowWE to allow meta-engineering to be
applied with even lower implementation effort.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name><surname>Noy</surname>, <given-names>N.F.</given-names></string-name>,
          <string-name><surname>Sintek</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>Decker</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Crubézy</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>Fergerson</surname>, <given-names>R.W.</given-names></string-name>,
          <string-name><surname>Musen</surname>, <given-names>M.A.</given-names></string-name>:
          <article-title>Creating Semantic Web contents with Protégé-2000</article-title>.
          <source>IEEE Intelligent Systems</source>
          <volume>16</volume>(<issue>2</issue>) (<year>2001</year>)
          <fpage>60</fpage>-<lpage>71</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name><surname>Spinellis</surname>, <given-names>D.</given-names></string-name>:
          <article-title>Notable Design Patterns for Domain Specific Languages</article-title>.
          <source>Journal of Systems and Software</source>
          <volume>56</volume>(<issue>1</issue>) (<year>February 2001</year>)
          <fpage>91</fpage>-<lpage>99</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name><surname>Karsai</surname>, <given-names>G.</given-names></string-name>,
          <string-name><surname>Krahn</surname>, <given-names>H.</given-names></string-name>,
          <string-name><surname>Pinkernell</surname>, <given-names>C.</given-names></string-name>,
          <string-name><surname>Rumpe</surname>, <given-names>B.</given-names></string-name>,
          <string-name><surname>Schneider</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>Völkel</surname>, <given-names>S.</given-names></string-name>:
          <article-title>Design guidelines for domain specific languages</article-title>.
          In
          <string-name><surname>Rossi</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>Sprinkle</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Gray</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Tolvanen</surname>, <given-names>J.P.</given-names></string-name>, eds.:
          <source>Proceedings of the 9th OOPSLA Workshop on Domain-Specific Modeling (DSM'09)</source>
          (<year>2009</year>)
          <fpage>7</fpage>-<lpage>13</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name><surname>Baumeister</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Reutelshoefer</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Puppe</surname>, <given-names>F.</given-names></string-name>:
          <article-title>Markups for Knowledge Wikis</article-title>.
          <source>In: SAAKM'07: Proceedings of the Semantic Authoring, Annotation and Knowledge Markup Workshop</source>,
          Whistler, Canada (<year>2007</year>)
          <fpage>7</fpage>-<lpage>14</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name><surname>Baumeister</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Reutelshoefer</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Puppe</surname>, <given-names>F.</given-names></string-name>:
          <article-title>KnowWE: A Semantic Wiki for Knowledge Engineering</article-title>.
          <source>Applied Intelligence</source> (<year>2010</year>)
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name><surname>Reutelshoefer</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Baumeister</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Puppe</surname>, <given-names>F.</given-names></string-name>:
          <article-title>A Data Structure for the Refactoring of Multimodal Knowledge</article-title>.
          <source>In: 5th Workshop on Knowledge Engineering and Software Engineering (KESE)</source>.
          <source>CEUR workshop proceedings</source>, Paderborn, CEUR-WS.org (<year>September 2009</year>)
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name><surname>Puppe</surname>, <given-names>F.</given-names></string-name>:
          <article-title>Knowledge Formalization Patterns</article-title>.
          <source>In: Proceedings of PKAW 2000</source>, Sydney, Australia (<year>2000</year>)
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name><surname>Reutelshoefer</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Lemmerich</surname>, <given-names>F.</given-names></string-name>,
          <string-name><surname>Baumeister</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Wintjes</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Haas</surname>, <given-names>L.</given-names></string-name>:
          <article-title>Taking OWL to Athens - Semantic Web technology takes Ancient Greek history to students</article-title>.
          <source>In: ESWC'10: Proceedings of the 7th Extended Semantic Web Conference</source>,
          Springer (<year>2010</year>)
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name><surname>Hatko</surname>, <given-names>R.</given-names></string-name>,
          <string-name><surname>Belli</surname>, <given-names>V.</given-names></string-name>,
          <string-name><surname>Baumeister</surname>, <given-names>J.</given-names></string-name>:
          <article-title>Modelling Diagnostic Flows in Wikis</article-title>.
          <source>In: LWA-2009 (Special Track on Knowledge Management)</source>,
          Darmstadt (<year>2009</year>)
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name><surname>Krötzsch</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>Vrandečić</surname>, <given-names>D.</given-names></string-name>,
          <string-name><surname>Völkel</surname>, <given-names>M.</given-names></string-name>:
          <article-title>Semantic MediaWiki</article-title>.
          <source>In: The Semantic Web - ISWC 2006</source>. Volume
          <volume>4273</volume> of Lecture Notes in Computer Science,
          Heidelberg, DE, Springer (<year>2006</year>)
          <fpage>935</fpage>-<lpage>942</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name><surname>Schaffert</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Eder</surname>, <given-names>J.</given-names></string-name>,
          <string-name><surname>Grünwald</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Kurz</surname>, <given-names>T.</given-names></string-name>,
          <string-name><surname>Radulescu</surname>, <given-names>M.</given-names></string-name>:
          <article-title>KiWi - A Platform for Semantic Social Software (Demonstration)</article-title>.
          <source>In: ESWC'09: The Semantic Web: Research and Applications, Proceedings of the 6th European Semantic Web Conference</source>,
          Heraklion, Greece (<year>June 2009</year>)
          <fpage>888</fpage>-<lpage>892</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>