<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Applying In-Memory Technology for Automatic Template Filling in the Clinical Domain</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Konrad Herbst</string-name>
          <email>k.herbst@stud.uni-heidelberg.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cindy Fahnrich</string-name>
          <email>cindy.faehnrich@hpi.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mariana Neves</string-name>
          <email>mariana.neves@hpi.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matthieu-P. Schapranow</string-name>
          <email>schapranow@hpi.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Hasso Plattner Institute, Enterprise Platform and Integration Concepts Chair</institution>
          ,
          <addr-line>August-Bebel-Str. 88, 14482 Potsdam</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Heidelberg, Institute of Pharmacy and Molecular Biotechnology</institution>
          ,
          <addr-line>Im Neuenheimer Feld 364, 69120 Heidelberg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>91</fpage>
      <lpage>102</lpage>
      <abstract>
        <p>We present a research prototype for systematic template filling based on in-memory database technology. Entity extraction and normalization are based on domain-specific dictionaries and a customized rule set building on top of related work in the medical field. The prototype of team HPI proves the feasibility of in-memory technology for enhancing workflows in the field of efficient text processing and analysis. With our approach, the iterative process of dictionary and rule refinement for enhancing text analysis results shifts from a time-consuming task with long waiting hours to a continuous workflow. In the context of the challenge's task, our prototype achieves an overall average accuracy of 0.769 and an overall F1 measure of up to 0.323.</p>
      </abstract>
      <kwd-group>
        <kwd>Medical Reports</kwd>
        <kwd>Template Filling</kwd>
        <kwd>In-Memory Technology</kwd>
        <kwd>Entity Recognition</kwd>
        <kwd>Text Extraction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Professional health care requires a constant documentation of all patient-related
data, such as the history of clinical events. This clinical data is stored in a
human-readable format, such as text files, since it supports the daily work of the clinical
personnel. As a result, the data is only available in an unstructured format, which makes
its automatic processing a complex task. However, for the sake of fault
prevention, comparison, performance optimization, and subsequent clinical research,
the important information must be efficiently extracted from the unstructured
data for further processing. This task requires methods from Information
Extraction (IE), which is a specific subdomain of Natural Language Processing
(NLP).</p>
      <p>
        The second task of the 2014 CLEF eHealth challenge requires the extraction
of information from unstructured clinical data to fill specific templates, i.e. fixed
sets of different semantic classes depending on the IE purpose [
        <xref ref-type="bibr" rid="ref2 ref3 ref6">2, 3, 6</xref>
        ]. The
following classes are required to be identified: Negation Indicator (NI), Subject
Class (SC), Uncertainty Indicator (UI), Course Class (CC), Severity Class (SV),
Conditional Class (CO), Generic Class (GC), Body Location (BL), Doctime
Class (DT), and Temporal Expression (TE). Within these classes, values can be
stored either as a recognized text span, i.e. where the entity was determined within
the input text, or as an inferred concept normalization. A lexical cue value
describing the found occurrence of the entity within the input text can be determined
for all class types except for DT.
      </p>
      <p>
        We, team HPI, participated in this challenge in the context of a student internship.
We designed a research system incorporating latest In-Memory
Database (IMDB) technology to enable the systematic filling of templates of the
required classes using unstructured data from Electronic Medical Records (EMR).
IMDB technology has proven to have major advantages for analyzing big
enterprise and medical data, e.g. to support medical doctors in identifying better
treatments for cancer patients and other fields of life sciences [
        <xref ref-type="bibr" rid="ref13 ref14 ref8">13, 8, 14</xref>
        ]. Thus,
IMDB supports a) the interactive processing of EMR data, which b) enables
the fast, iterative design of productive systems for text extraction and its analysis. We rely on
a columnar IMDB and make use of the built-in Text Analysis (TA)
functionality for our research prototype. Additionally, we complement the data provided for
training with additional external data sources and extract relevant entities as
described in Sect. 2.1.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Methods</title>
      <p>In the following, we describe the data used in our system and its architectural details,
and highlight the advantages of using IMDB technology.</p>
      <sec id="sec-2-1">
        <title>Data</title>
        <p>
          For the training phase, we used a data set of 300 documents taken from
version 2.5 of the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC
II) database [
          <xref ref-type="bibr" rid="ref5 ref9">5, 9</xref>
          ]. This data comprises a corpus and annotations of
de-identified clinical reports from intensive care patients from the United States of
America (USA). These reports are classified into four types: discharge summary,
echo report, electrocardiogram report, and radiology report. All documents are
unstructured text documents, i.e. they are written in natural language without
specific formatting.
        </p>
        <p>
          In addition to the training data, we integrated the SNOMED Clinical Terms
(SNOMED CT) data from the Unified Medical Language System (UMLS)
version 2013AB to improve our entity recognition capabilities for mentions of
diseases and body locations [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. From this database, we used all concepts with a
semantic type that is related to a disease, disorder, or body location. Tab. 2.1
provides a detailed overview of which concepts and semantic types we have
incorporated. These concepts sum up to a data set of &gt;183k concepts, i.e. entities
that can be used for entity recognition in the training data, summing up to more
entries than the complete SNOMED CT data set.
        </p>
      </sec>
      <sec id="sec-2-1a">
        <title>In-Memory Database Technology</title>
        <p>
          Our incorporated IMDB platform is capable of handling both structured
and unstructured data within a single system as it has several building blocks
as presented by Plattner [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. In the following paragraphs, we introduce selected
building blocks and how we benefit from them for accomplishing our task.
        </p>
        <p>
          Relevant Data Kept in Main Memory IMDB technology enables fast access to
required data directly from main memory. This contrasts with most traditional
approaches, which process data from files that reside on disk and must be loaded
into main memory. When thinking of the ever-increasing amounts of data, this
strategy will not be feasible anymore in the long run. Therefore, IMDB
technology offers us an alternative processing strategy that addresses the performance
requirements of our application.
        </p>
        <p>
          Lightweight Compression These techniques refer to a data storage
representation that consumes less space than its original counterpart. A columnar database
storage layout supports such lightweight compression techniques, e.g. dictionary
encoding, which maps all unique values to a uniform format [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. For example,
suppose we have a list of people as data set where one column contains the gender.
For this column, there exist only two unique values, i.e. "male" and "female".
With dictionary encoding, these two values are mapped to integer
representations, e.g. "male"=1 and "female"=2, which are stored in the column instead of the
original values. This requires less storage space and also reduces the amount of
data that has to be transferred from and to main memory.
        </p>
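        <p>As a minimal illustration of this concept (the IMDB applies dictionary encoding transparently; the following Python sketch is not part of our prototype), a gender column can be encoded into integer value IDs plus a value dictionary as follows:</p>
        <preformat>
# Minimal sketch of dictionary encoding for a single column.
# The IMDB performs this transparently; shown here only to illustrate the idea.
def dictionary_encode(column):
    """Map each distinct value to an integer ID and encode the column."""
    value_dict = {}   # distinct value -> integer ID
    encoded = []      # column stored as integer IDs
    for value in column:
        if value not in value_dict:
            value_dict[value] = len(value_dict) + 1
        encoded.append(value_dict[value])
    return value_dict, encoded

gender_column = ["male", "female", "female", "male", "female"]
dictionary, encoded_column = dictionary_encode(gender_column)
print(dictionary)       # e.g. {'male': 1, 'female': 2}
print(encoded_column)   # [1, 2, 2, 1, 2]
        </preformat>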
        <p>Multi-Core and Parallelization Modern system architectures are designed to
provide multiple CPUs with each of them having separate cores. This capacity
should be fully exploited by parallelizing application execution to achieve
maximum processing speed. The incorporated IMDB platform supports this and
provides built-in parallelization. With that, we do not need to apply parallelization
strategies on our own but still have maximum runtime performance in processing
our input data.</p>
        <p>Entity and Feature Extraction Any kind of text, such as the medical reports that
have to be processed in this challenge, is considered unstructured data. Thus,
it cannot be processed automatically unless a machine-readable data model
exists for automatic interpretation, e.g. a semantic ontology. Our incorporated
IMDB platform offers a range of features for text processing, of which the
relevant ones for us are those for entity and feature extraction. Entity and feature
extraction refers to the identification of relevant keywords and names of entities
in documents. It can be customized by dictionaries and individual extraction rules.
Dictionaries list one or more entity types, each of which contains any number
of entities that in turn consist of a standard form name and any number of
synonyms. Extraction rules use a formal syntax to define entities of a specific type.
This allows formulating patterns that match tokens by using a literal string, a
regular expression, a word stem, or a word's part of speech.</p>
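        <p>As a simplified illustration of this data model (the actual extraction runs inside the IMDB as described in the next section; the entity type, standard form, and variants in the sketch are illustrative), a plain Python matcher over dictionary variants could look as follows:</p>
        <preformat>
# Simplified sketch of dictionary-based entity extraction; the actual matching
# is performed by the IMDB text analysis engine. All entries are illustrative.
import re

# entity type -> standard form -> variant terms (synonyms)
dictionary = {
    "NegationIndicator": {
        "negation": ["no", "denies", "without"],
    },
}

def extract_entities(text, dictionary):
    """Return (entity_type, standard_form, matched_text, start, end) tuples."""
    hits = []
    for entity_type, entities in dictionary.items():
        for standard_form, variants in entities.items():
            for variant in variants:
                pattern = r"\b%s\b" % re.escape(variant)
                for match in re.finditer(pattern, text, re.IGNORECASE):
                    hits.append((entity_type, standard_form,
                                 match.group(0), match.start(), match.end()))
    return sorted(hits, key=lambda hit: hit[3])

print(extract_entities("Patient denies chest pain, no fever.", dictionary))
        </preformat>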
      </sec>
      <sec id="sec-2-2">
        <title>System Design</title>
        <p>
          Fig. 1 presents our system architecture in Fundamental Modeling Concepts (FMC)
notation [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Medical reports as test data and a dictionary that has been
generated from the training data in advance serve as input for our system. This data
is imported once into our IMDB. The input template documents must then be
automatically filled with concrete values for cue and normalization attributes.
The system itself is divided into two components: our IMDB platform, which
performs, among others, linguistic pre-processing tasks, e.g. entity extraction via
dictionaries, and a Python module for template filling.
        </p>
        <p>In-Memory Database Relevant data is imported into our IMDB. The data
comprises the medical reports whose templates must be filled, the SNOMED
CT subset, and a list for each slot type with entities that have been extracted
from the training data beforehand. From this data, we create scientific medical
dictionaries and add individual extraction rules to facilitate entity recognition and
extraction of the different slot types.</p>
        <p>
          Dictionaries We build customized dictionaries to identify the slot types NI, SC,
UI, CC, SV, CO, GC, BL, and TE in the given medical reports. For extraction
and normalization of the DD and BL slot types, we compile a dictionary based
on the imported SNOMED CT data set. Entities of the remaining slot types are
extracted and normalized by a dictionary derived from the training data. Fig. 2a
depicts such a dictionary in XML format. Entities can easily be organized into
categories, normalized by a standard form, and enriched by additional variant
definitions. The given example lists the semantic type Body Part, Organ, or
Organ Component in blue letters. Afterwards, the concept definition with its
normalization, i.e., the standard form, is defined in black letters. Finally, possible
entities owning the defined normalization are listed in yellow letters. As a result,
the phrase skeletal muscle structure of abdomen has the normalization C0000739
and will be assigned to the BL slot type when detected in any text document.
        </p>
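        <p>A minimal Python sketch of how such standard forms and variants can be grouped from the imported SNOMED CT rows is given below; the assumed row layout and the second synonym are illustrative, and the resulting entries are then serialized to the dictionary XML format expected by the IMDB [<xref ref-type="bibr" rid="ref12">12</xref>]:</p>
        <preformat>
# Sketch of compiling dictionary entries from the imported SNOMED CT subset.
# The (cui, semantic_type, term) row layout is an assumption for illustration.
from collections import defaultdict

snomed_rows = [
    ("C0000739", "Body Part, Organ, or Organ Component",
     "skeletal muscle structure of abdomen"),
    ("C0000739", "Body Part, Organ, or Organ Component",
     "abdominal muscle"),  # illustrative synonym
]

def compile_dictionary(rows):
    """Group variants by concept: semantic type -> standard form (CUI) -> variant terms."""
    dictionary = defaultdict(lambda: defaultdict(list))
    for cui, semantic_type, term in rows:
        variants = dictionary[semantic_type][cui]
        if term.lower() not in (v.lower() for v in variants):
            variants.append(term)
    return dictionary

compiled = compile_dictionary(snomed_rows)
print(dict(compiled["Body Part, Organ, or Organ Component"]))
# {'C0000739': ['skeletal muscle structure of abdomen', 'abdominal muscle']}
        </preformat>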
        <p>
          CGUL Rules We define extraction rules in Custom Grouper User Language
(CGUL) to identify DT slot types [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. CGUL is a sentence-based language
that allows pattern matching by using character- or token-based regular
expressions combined with linguistic attributes to define custom entity types. Fig. 2b
shows two example CGUL rules for extracting entities that have before and
before overlap as normalization. By using Part-of-Speech (POS) tags in the rules,
we can access and exploit the grammatical tense of a sentence. In the given
examples highlighted in purple color, we want to identify structures that first
contain a noun (Nn) after which comes a verb in either past (V-Past) or past
participle (V-PaPart) tense. Means to identify nouns, verbs, and tenses are
provided by default by our IMDB platform.
        </p>
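        <p>CGUL itself is specific to our IMDB platform; as a rough Python analogue of the pattern idea only (using NLTK POS tags, which is an assumption for illustration and not part of our prototype), one could search for a noun directly followed by a verb in past or past participle tense:</p>
        <preformat>
# Rough Python analogue of the CGUL pattern idea (noun followed by a verb in
# past or past participle tense); NLTK is assumed only for illustration and
# requires the 'punkt' and 'averaged_perceptron_tagger' data packages.
import nltk

def doctime_before(sentence):
    """Return True if a noun is directly followed by a past(-participle) verb."""
    tokens = nltk.word_tokenize(sentence)
    tags = [tag for _, tag in nltk.pos_tag(tokens)]
    for first, second in zip(tags, tags[1:]):
        if first.startswith("NN") and second in ("VBD", "VBN"):
            return True
    return False

print(doctime_before("The rash resolved before discharge."))  # True
        </preformat>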
        <p>
          Entity Recognition and Extraction With the created dictionaries and CGUL rules
at hand, we can trigger the actual process of entity extraction within our IMDB.
For that, we create a full text index on the medical reports for which we have to
fill out the templates [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. The full text index is automatically managed by our
IMDB, which performs linguistic processing, i.e., language and encoding
identification, segmentation, case normalization, stemming, and tagging, as well as entity
and fact extraction based on the provided dictionaries and CGUL rules [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. The
result of this process is a dedicated database table that contains the extracted
entities that have been found in the medical reports, their normalization, slot
type, and location within the document. These details can be directly used for
template filling.
        </p>
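        <p>Conceptually, these details can then be read with a plain SQL query; in the following Python sketch, the database cursor setup and the table and column names of the result table are assumptions for illustration (cf. [<xref ref-type="bibr" rid="ref10 ref12">10, 12</xref>]):</p>
        <preformat>
# Sketch of reading extraction results from the entity extraction result table.
# The cursor setup and the table/column names are assumptions for illustration.
def fetch_extracted_entities(cursor, result_table="REPORT_ENTITIES"):
    """Return extracted entities with cue, normalization, slot type, and offset."""
    cursor.execute(
        "SELECT document_id, token, normalized, entity_type, offset "
        "FROM %s ORDER BY document_id, offset" % result_table
    )
    return [
        {"document_id": row[0], "cue": row[1], "normalization": row[2],
         "slot_type": row[3], "offset": row[4]}
        for row in cursor.fetchall()
    ]
        </preformat>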
        <p>Template Filling Our template filling engine is based on Python v. 2.7.7 and
takes extracted entities together with their normalization, slot type, medical
report they occurred in, and location within the medical report, i.e. text spans,
as input. These details are associated with the corresponding templates, which
requires matching the text spans identified by our approach. The DD mentions
provided by default in the test data are used as "anchor" to determine entities of
the same template. Once this has been accomplished, the templates are filled with
the corresponding cues and normalizations.</p>
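        <p>A minimal sketch of this association step, assuming one DD anchor span per template and a nearest-anchor heuristic that simplifies our actual span matching, could look as follows:</p>
        <preformat>
# Simplified sketch of associating extracted entities with their templates.
# Each template is identified by the text span of its DD ("anchor") mention;
# assigning entities to the nearest anchor simplifies our actual span matching.
def fill_templates(anchors, entities):
    """anchors: {template_id: (start, end)}; entities: dicts with slot_type, normalization, span."""
    templates = dict((template_id, {}) for template_id in anchors)
    for entity in entities:
        nearest = min(anchors,
                      key=lambda template_id: abs(anchors[template_id][0] - entity["span"][0]))
        templates[nearest][entity["slot_type"]] = {
            "cue": entity["span"],
            "normalization": entity["normalization"],
        }
    return templates

anchors = {"template-1": (120, 135)}
entities = [{"slot_type": "BL", "normalization": "C0000739", "span": (150, 186)}]
print(fill_templates(anchors, entities))
# {'template-1': {'BL': {'cue': (150, 186), 'normalization': 'C0000739'}}}
        </preformat>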
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Conducted Experiments</title>
      <p>In the following, we present the conducted experiments in terms of the data and
evaluation metrics used. We provide experiment results according to the presented
metrics and discuss relevant findings.</p>
      <sec id="sec-3-1">
        <title>Data and Metrics Used</title>
        <p>For evaluating the performance of our system, we used a test data set provided by
the challenge. Analogously to the initial training data, this data set comprises
a set of 133 medical reports with template documents assigned. In contrast
to the training data, the templates' attributes, i.e. cue and normalization values
for each slot type, are empty. In our experiments, we aim at filling both cue
and normalization values and thereby participate in tasks 2a and 2b of the
challenge.</p>
        <p>
          We use accuracy and F1 measure, common measures in pattern
recognition and information retrieval, for evaluating the derived normalization and
cue values, respectively [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. We determine performance for the overall result set
and per slot type. Eq. 1 defines the computation of accuracy for a given set
of normalization values N as the fraction of the number of slot values for which a
correct normalization has been derived over the overall number of slot values for
which a normalization has been derived.
        </p>
        <p>Accuracy(N) = |N_correct| / |N|   (1)</p>
        <p>Eq. 2, Eq. 3, and Eq. 4 depict the computation of the F1 measure to assess the quality of
the detected cue values, which is the harmonic mean of precision and recall. With
regard to examining performance for a concrete slot type or their overall set,
C is the set of all cue values detected by our approach, whereas C_true contains
all true cue values. C_correct is the set of all cue values that have been correctly
identified by our approach and is also expressed as C_correct = C_true ∩ C. The
definition of the term "correct" varies for strict and relaxed evaluation. The
former checks whether a derived cue value equals the correct one, whereas the latter
still considers a cue value as correct if it overlaps with the true value. Precision
depicts the fraction of retrieved instances that are relevant, i.e., in this context,
how many of the cue values identified by our approach are contained in the set of
true cue values. Recall depicts the fraction of relevant instances that are retrieved,
i.e. how many of the true cue values have been identified by our approach.</p>
        <p>F1(C) = 2 · Recall(C) · Precision(C) / (Recall(C) + Precision(C))   (2)</p>
        <p>Recall(C) = |C_correct| / (|C_correct| + |C_true \ C|)   (3)</p>
        <p>Precision(C) = |C_correct| / (|C_correct| + |C \ C_true|)   (4)</p>
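        <p>For reference, the following Python sketch implements Eqs. 1-4 over sets of derived and true values; how equality or overlap of spans is decided for the strict and relaxed settings is omitted:</p>
        <preformat>
# Straightforward Python sketch of Eqs. 1-4 over sets of derived and true values.
def accuracy(derived, correct):
    """Eq. 1: fraction of derived normalization values that are correct."""
    return len(correct) / float(len(derived)) if derived else 0.0

def recall(detected, true):
    """Eq. 3: |C_correct| / (|C_correct| + |C_true \\ C|)."""
    correct = true.intersection(detected)
    return len(correct) / float(len(correct) + len(true - detected)) if true else 0.0

def precision(detected, true):
    """Eq. 4: |C_correct| / (|C_correct| + |C \\ C_true|)."""
    correct = true.intersection(detected)
    return len(correct) / float(len(correct) + len(detected - true)) if detected else 0.0

def f1(detected, true):
    """Eq. 2: harmonic mean of precision and recall."""
    p, r = precision(detected, true), recall(detected, true)
    return 2 * p * r / (p + r) if p + r else 0.0

detected_cues = {("report-1", 10, 17), ("report-1", 42, 49)}
true_cues = {("report-1", 10, 17), ("report-1", 60, 65)}
print(precision(detected_cues, true_cues), recall(detected_cues, true_cues),
      f1(detected_cues, true_cues))  # 0.5 0.5 0.5
        </preformat>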
      </sec>
      <sec id="sec-3-2">
        <title>Results and Discussion</title>
        <p>The results achieved by our prototype are summarized in Tab. 3.2 and
Tab. 3.2 for tasks 2a and 2b, respectively. For many of the slots, our results
for task 2b, i.e. the cue values derived, were considerably lower than the ones obtained for
task 2a, i.e. the normalized values derived. For instance, we achieved 90-100
percent accuracy for the slot types CC and GC, but only 21 and 14 percent
F1 measure, respectively, in the relaxed evaluation of task 2b. Although this
is expected, as exact (or relaxed) mention spans are harder to extract correctly
than the corresponding normalized values, we still investigate possible
mistakes in the offsets in our submissions, and future error analysis will shed
some light on the discrepancies between the results for both tasks.</p>
        <p>Our strategy for the BL slot, which relied on the dictionaries derived
from the SNOMED CT terminology, achieved 50 percent accuracy. A future
error analysis will also show whether false negatives were due to concepts that are
not present in the SNOMED CT terminology, to missing synonyms for existing
concepts, or to the matching approach that was used. Nevertheless, the relaxed
evaluation of task 2b shows that our dictionary matching approach provides good
precision, i.e. 60 percent, given the complexity of the anatomical nomenclature.</p>
        <p>Extraction of values for slot type DT was a hard task and results were quite
low for all teams. This is because it requires a more careful analysis of the
language, such as analyzing verb tenses and time expressions. However, we believe
that our approach of using CGUL rules is appropriate for extracting this
information, but more rules should be created for this purpose, as well as a revision
of the existing ones.</p>
        <p>Therefore, the presented results are induced by the dictionaries and rules used
rather than by the underlying IMDB system. If those dictionaries are refined, e.g. by
including other data sources than SNOMED CT or adapting extraction rules,
we are convinced that the overall performance of our system will improve.</p>
        <p>However, the focus of this work is rather on showing the general
applicability and feasibility of in-memory technology for processes that involve the processing
and analysis of unstructured text. One iteration to improve text analysis results,
starting with refining dictionaries and ending with receiving the final results, i.e.
the filled templates from the test data, takes minutes with our system instead
of hours or days with traditional approaches. This shows that in-memory
technology provides advantages also for the field of information extraction and can
contribute to establishing efficient and alternative processing strategies in that
area.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>The ShARe/CLEF eHealth challenge 2014 aims to facilitate research on
information extraction within the biomedical domain. As a follow-up to the 2013
challenge, participants were asked to identify mentions semantically related to
disorder mentions and to fill out templates with normalization and cue values for
the detected entities.</p>
      <p>In the context of a student internship, we designed a research prototype for
entity extraction based on IMDB technology that proves its feasibility for efficient
text processing. Evaluation results show that the rules and dictionaries currently
applied require optimization by refining dictionaries or extraction rules. However,
our prototype allows us to extend existing extraction rules and dictionaries
constantly and to verify them instantly. Thus, the task of iterative
improvement of text analysis results becomes a continuous process.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Christen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goiser</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Quality and Complexity Measures for Data Linkage and Deduplication</article-title>
          .
          <source>In: Quality Measures in Data Mining</source>
          , pp.
          <volume>127</volume>
          –
          <fpage>151</fpage>
          . Springer (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Elhadad</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , et al.:
          <article-title>The ShARe Schema for the Syntactic and Semantic Annotation of Clinical Texts</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Kelly</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , et al.:
          <source>ShARe/CLEF eHealth Evaluation Lab</source>
          <year>2014</year>
          (
          <year>2014</year>
          ), Springer-Verlag
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4. Knopfel,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Grone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            ,
            <surname>Tabeling</surname>
          </string-name>
          ,
          <string-name>
            <surname>P.</surname>
          </string-name>
          :
          <article-title>Fundamental Modeling Concepts: Effective Communication of IT Systems</article-title>
          . John Wiley &amp; Sons (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Mowery</surname>
            ,
            <given-names>D.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Velupillai</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>Task 2 Data Set of the ShARe/CLEF eHealth Challenge 2014</article-title>
          . http://clefehealth2014.dcu.ie/task-2/2014-dataset [retrieved: Jun,
          <year>2014</year>
          ] (
          <year>Jun 2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Mowery</surname>
            ,
            <given-names>D.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Velupillai</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          : Task 2: ShARe/CLEF eHealth Evaluation Lab
          <year>2014</year>
          . http://clefehealth2014.dcu.ie/task-2
          <source>[retrieved: Jun</source>
          ,
          <year>2014</year>
          ] (
          <year>Jun 2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Plattner</surname>
          </string-name>
          , H.:
          <article-title>A Course in In-Memory Data Management: The Inner Mechanics of In-Memory Databases</article-title>
          . Springer, 1st edn. (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Plattner</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schapranow</surname>
            ,
            <given-names>M.P</given-names>
          </string-name>
          . (eds.):
          <article-title>High-Performance In-Memory Genome Data Analysis: How In-Memory Database Technology Accelerates Personalized Medicine</article-title>
          . Springer-Verlag (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Saeed</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , et al.:
          <article-title>Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II): A Public-Access ICU Database</article-title>
          .
          <source>Critical Care Medicine</source>
          <volume>39</volume>
          ,
          <volume>952</volume>
          –
          <fpage>960</fpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>SAP</surname>
            <given-names>AG</given-names>
          </string-name>
          :
          <article-title>SAP HANA SQL and System Views Reference</article-title>
          . http://help.sap.de/hana/SAP_HANA_SQL_and_System_Views_Reference_en.pdf [retrieved: Jun
          ,
          <year>2014</year>
          ] (
          <year>Jun 2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>SAP</surname>
            <given-names>AG</given-names>
          </string-name>
          :
          <article-title>Text Data Processing Language Reference Guide</article-title>
          . https://help.sap.com/businessobject/product_guides/boexir4/en/sbo401_ds_tdp_lang_ref_en.pdf [retrieved: Jun
          ,
          <year>2014</year>
          ] (
          <year>Jun 2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>SAP</surname>
            <given-names>AG</given-names>
          </string-name>
          :
          <article-title>SAP HANA Text Analysis Language Reference Guide v. 1.0</article-title>
          . http://help.sap.com/hana/SAP_HANA_Text_Analysis_Language_Reference_Guide_en.pdf [retrieved: Jun
          ,
          <year>2014</year>
          ] (May
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Schapranow</surname>
            ,
            <given-names>M.P.</given-names>
          </string-name>
          , et al.:
          <article-title>Mobile Real-time Analysis of Patient Data for Advanced Decision Support in Personalized Medicine</article-title>
          .
          <source>In: Proceedings of the 5th Int'l Conf on eHealth, Telemed, and Social Medicine</source>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Schapranow</surname>
            ,
            <given-names>M.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hager</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fähnrich</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ziegler</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plattner</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>In-Memory Computing Enabling Real-time Genome Data Analysis</article-title>
          .
          <source>Advances in Life Sciences</source>
          <volume>6</volume>
          (
          <issue>1</issue>
          –2) (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15. U.S. National Library of Medicine:
          <article-title>Unified Medical Language System (UMLS)</article-title>
          . http://www.nlm.nih.gov/research/umls/ [retrieved: Jun,
          <year>2014</year>
          ] (
          <year>Jul 2013</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>