<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Unified Approach for Short Question Entity Discovery and Linking</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Qin Wei</string-name>
          <email>weiqin@putao.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jiong Zhang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Huimin Zhang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Shanghai Putao Technology Co., Ltd.</institution>
          ,
          <addr-line>Shanghai 200233</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Shanghai for Science and Technology</institution>
          ,
          <addr-line>Shanghai 200093</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The problem of entity discovery and linking (EDL) in short questions aims to find the entities a question focuses on and to disambiguate them, usually against linked-data sources. For Chinese questions, the problem involves three tasks: segmenting the words and phrases in the question, disambiguating the segments, and mapping mentions to semantic entities. In this paper, we propose an integer linear programming (ILP) based method that solves the three tasks jointly. Our solution harnesses the rich feature types provided by the question context and the linked data source (CN-DBpedia in our experiments) to constrain a semantic-coherence objective function, and a genetic algorithm (GA) is used to tune the parameters. In the evaluation of CCKS2017 shared task one, our approach achieves an F1 score of 0.804 in mention discovery and 0.56 in entity linking, ranking 1st among all 17 teams by the F1 score of entity linking.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Entity Discovery and Linking (EDL) in Natural Language Processing (NLP) is the
task of matching entity mentions in text to unique identifiers in linked-data sources,
such as CN-DBpedia, and it is becoming a hot topic as linked data grows. Conventional
Named Entity Recognition (NER) focuses on identifying the occurrence of an entity
and its type, but not the specific entity that a mention refers to; EDL takes a further
step in understanding text, and thus plays a critical role in downstream applications
such as Information Retrieval (IR) and Knowledge-Based Question Answering (KBQA)
systems.</p>
      <p>EDL is commonly divided into two subtasks: Mention Detection (MD) and Entity
Linking (EL). MD identifies potential mentions of entities in the text, and EL maps
mentions to semantic entities. EDL is complex and challenging due to the ambiguity
not only of word and phrase senses but also of entity mentions, which are affected by
the context of words and phrases, the similarity between mentions and entities, the
priors of entities, and the coherence among entities.</p>
      <p>
        Approaches have been proposed in the literature to solve EDL. The majority treat
the two EDL tasks separately and fall into two main types: rule-based approaches
and machine learning models. Rule-based approaches invest heavily in linguistic analysis [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and can be built into practical systems in limited
time, and thus performed well in the early related shared tasks. Machine
learning models, such as Maximum Entropy (ME) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], generative models [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and ranking
methods [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], benefit from the data explosion of the last decade, are good at
balancing precision and recall, and are becoming increasingly dominant. However,
these separate approaches suffer from cascading errors, leaving a gap to the
theoretically best performance. Joint approaches have also been reported [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which are good at
bringing the factors affecting EDL into a single model but lack methods for tuning
model parameters.
      </p>
      <p>This paper presents our approach for the CCKS2017 shared task, Question Entity
Discovery and Linking (QEDL) in Chinese. Compared to EDL, one more subtask arises,
the segmentation of words and phrases in questions, because Chinese words lack
explicit boundaries. The three subtasks are solved jointly by an Integer Linear
Programming (ILP) based model whose parameters are tuned by a Genetic Algorithm (GA).
F1 scores of 0.804 in mention discovery and 0.56 in entity linking were achieved
in the evaluation.</p>
      <p>The paper is structured as follows. After describing the four steps of the online
prediction framework in section 2, we discuss the joint disambiguation step in detail
in section 3. Section 4 presents the offline parameter tuning of the online model. The
evaluation results are outlined in section 5. Finally, we review the main conclusions
and preview future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Framework</title>
      <p>As shown in figure 1, given a Chinese short question, our online approach takes four
steps for QEDL: word detection, mention discovery, entity mapping and joint
disambiguation. A question sentence is processed as a sequence of characters, qNL =
(t0, t1, … tn), while a word is a contiguous subsequence of the character sequence, wij =
(ti, ti+1, … tj), 0 &lt;= i &lt;= j &lt;= n. The input question is handled by the pipeline in figure 1.
Words are detected by "jieba", a commonly used Chinese text segmentation tool. In
jieba's cut-for-search mode, all word candidates are generated and added to a word
set. For the sample question "李娜是在哪一年拿的澳网冠军", the word set contains the
following candidates: "李娜", "是", "在", "哪一年", "一年", "拿", "的", "澳网" and "冠军".</p>
      <sec id="sec-2-1">
        <title>Mention Discovery</title>
        <p>Mentions are discovered using CN-DBpedia by querying the contiguous
subsequences of the question. A subsequence becomes a mention candidate if the
query returns results, and the candidates are added to the word set containing all
the word candidates. For the sample question, mention candidates such as "李",
"李娜", "一" and "一年" are added to the word set, with duplicates removed.</p>
      </sec>
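      <p>A minimal sketch of this candidate-generation step, where the hypothetical function kb_has_entry stands in for the actual CN-DBpedia query:</p>

```python
# Sketch of mention-candidate generation. kb_has_entry is a hypothetical
# stand-in for a CN-DBpedia lookup; a toy set of entity names is used here.
def contiguous_subsequences(text, max_len=6):
    """All substrings t_i..t_j of the question, up to max_len characters."""
    subs = []
    for i in range(len(text)):
        for j in range(i + 1, min(len(text), i + max_len) + 1):
            subs.append(text[i:j])
    return subs

def discover_mentions(question, kb_has_entry):
    """Keep a subsequence as a mention candidate iff the KB returns a result."""
    candidates = []
    for sub in contiguous_subsequences(question):
        if kb_has_entry(sub) and sub not in candidates:  # duplicates removed
            candidates.append(sub)
    return candidates

# Toy stand-in for the KB: a set of known entity names.
toy_kb = {"李", "李娜", "一", "一年", "澳网", "冠军"}
print(discover_mentions("李娜是在哪一年拿的澳网冠军", lambda s: s in toy_kb))
```

For the sample question this yields the candidates listed above, in left-to-right order of their start position.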
      <sec id="sec-2-2">
        <title>Entity Mapping</title>
        <p>A mention candidate can be mapped to multiple semantic entities. During this step, a
semantic entity mapping space for the mentions is constructed. In the sample
question, "李娜" is mapped to "李娜（中国女子网球名将）", "李娜（南京师范大学讲师）",
"李娜（2016年陈可辛导演电影）", etc., in the mapping space.</p>
      </sec>
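      <p>The mapping space can be sketched as a dictionary from each mention candidate to its list of candidate entities; the toy entity index below is illustrative, standing in for CN-DBpedia disambiguation lookups:</p>

```python
# Sketch of the entity-mapping step: each mention candidate is mapped to all
# semantic entities sharing its name. toy_entity_index is hypothetical data
# standing in for CN-DBpedia.
toy_entity_index = {
    "李娜": ["李娜（中国女子网球名将）", "李娜（南京师范大学讲师）"],
    "澳网": ["澳大利亚网球公开赛"],
}

def build_mapping_space(mentions, entity_index):
    """Mention -> list of candidate semantic entities (empty if unmapped)."""
    return {m: entity_index.get(m, []) for m in mentions}

space = build_mapping_space(["李娜", "澳网", "冠军"], toy_entity_index)
print(space["李娜"])  # two ambiguous senses for 李娜
```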
      <sec id="sec-2-3">
        <title>Joint Disambiguation</title>
        <p>During this step, word boundary determination, mention disambiguation and
semantic entity disambiguation are solved jointly by computing over a disambiguation
graph. For the sample question, decoding the resulting graph yields the entity
mentions "李娜" and "澳网", linked to the entities "李娜（中国女子网球名将）"
and "澳大利亚网球公开赛". The details of this step are described in section 3.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Joint Disambiguation</title>
      <p>As the disambiguation of one word, mention or entity can influence the others, a
disambiguation graph encoding all possible mappings is constructed. To simplify the
problem, we model it as an ILP rather than using graph models.</p>
      <sec id="sec-3-1">
        <title>Overlap-Word-Entity Graph</title>
        <p>A weighted, undirected graph DG = (V, E) is defined with word nodes Vw, entity
nodes Ve and word-overlap constraint nodes Vo. The graph for the sample question is
shown in figure 2. Its edges, among which the word-entity edges tie closely to the
final prediction, are drawn as solid or dashed lines corresponding to the values 1
and 0 assigned to the ILP variables.</p>
        <p>Features and constraints are expressed by the weighted edges of the graph, and by
optimizing the final objective function the best word and entity nodes are selected.
The features and constraints harnessed in the model are presented below.</p>
        <p>Mention Prior (Prior(mention)): the mention prior is related to the mention character
length, the Part Of Speech (POS) of the duplicated word, whether the mention is covered
by other words, and linguistic features such as whether numbers or special characters
are contained.</p>
        <p>Entity Prior (Prior(entity)): entities have different priors; the number of
attributes, special attributes and special attribute values are chosen to signify the
prior of an entity in CN-DBpedia.</p>
        <p>Entity Context (Sim(entity-context)): some context words indicate higher priorities
of entities, which is captured by the entity-context similarity.</p>
        <p>Mention to Entity Similarity (Sim(M-E)): this feature captures the degree to which
the mention matches the entity name.</p>
        <p>Entity Coherence (Sim(coh)): some entities are highly correlated, usually the
entities related to the question topic, by sharing the same category, quoting each
other, relating to a common third entity, etc.; this is expressed by the
entity-coherence similarity.</p>
        <p>Overlap Constraints (Con(overlap)): a single character cannot exist in two
selected words.</p>
        <p>Mention Entity Constraints (Con(mention-entity)): one mention corresponds to
at most one entity.</p>
        <p>Mention Number Constraints (Con(mention-number)): a short question usually
concerns one topic and has a limited number of related mentions, so we add a
mention-number constraint that penalizes more as the number of mentions increases.</p>
      </sec>
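      <p>The overlap constraint can be sketched as a check over character intervals, treating each selected word as a half-open (start, end) span; the intervals below are illustrative:</p>

```python
def spans_overlap(span_a, span_b):
    """True if two half-open (start, end) character intervals share a position."""
    return span_a[0] < span_b[1] and span_b[0] < span_a[1]

def satisfies_overlap_constraint(selected_spans):
    """Con(overlap): no character may be covered by two selected words."""
    for i in range(len(selected_spans)):
        for j in range(i + 1, len(selected_spans)):
            if spans_overlap(selected_spans[i], selected_spans[j]):
                return False
    return True

# "李娜" = (0, 2) and "娜是" = (1, 3) share character position 1, so they conflict.
print(satisfies_overlap_constraint([(0, 2), (1, 3)]))   # False
print(satisfies_overlap_constraint([(0, 2), (9, 11)]))  # True
```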
      <sec id="sec-3-5">
        <title>Over-all Objective Function</title>
        <p>Our framework combines mention prior, entity prior, entity-context prior, similarity
between entity and mention, and coherences between entities into a combined
objective function.</p>
        <p>The combined objective is</p>
        <disp-formula><tex-math><![CDATA[
\max \; \sum_{i=1}^{k_1}\sum_{t=1}^{p_1} \lambda^{(1)}_t\,\mathrm{prior}_t(m_i)
 + \sum_{i=1}^{k_2}\sum_{j=1}^{k_i}\sum_{t=1}^{p_2} \lambda^{(2)}_t\,\mathrm{prior}_t(e_{ji})
 + \sum_{i=1}^{k_2}\sum_{j=1}^{k_i}\sum_{t=1}^{p_3} \lambda^{(3)}_t\,\mathrm{sim}_t\big(e_{ji}, \mathrm{cxt}(m_i)\big)
 + \sum_{i=1}^{k_2}\sum_{j=1}^{k_i}\sum_{t=1}^{p_4} \lambda^{(4)}_t\,\mathrm{sim}_t(m_i, e_{ji})
 + \sum_{i=1}^{k_2}\sum_{j=1}^{k_i}\sum_{n \neq i}\sum_{m=1}^{k_n}\sum_{t=1}^{p_5} \lambda^{(5)}_t\,\mathrm{sim}_t(e_{mn}, e_{ji})
 - \sum_{t=2}^{4} c_t \max\big(0,\, \mathrm{sum}(m) - t + 1\big)
]]></tex-math></disp-formula>
        <p>subject to</p>
        <disp-formula><tex-math><![CDATA[
\sum_{i=1}^{k_1} I(m_i, j) \le 1 \ \text{for each character position } j, \qquad
I(m_i, j) = \begin{cases} 1, & \text{if } j \in \mathrm{scope}(m_i) \\ 0, & \text{otherwise} \end{cases} \qquad (1)
]]></tex-math></disp-formula>
        <disp-formula><tex-math><![CDATA[
I(e_i), I(m_i) \in \{0, 1\} \ \text{if } e_i \text{ or } m_i \text{ exists}, \qquad
0 \le I(e_i) \le I(m_i) \le 1 \qquad (2)
]]></tex-math></disp-formula>
        <p>where the coefficients are normalized by</p>
        <disp-formula><tex-math><![CDATA[
\sum_{k=1}^{5}\sum_{t=1}^{p_k} \big(\lambda^{(k)}_t\big)^2 + \sum_{t=2}^{4} c_t^2 = 1 .
]]></tex-math></disp-formula>
        <p>Here cxt(m_i) denotes the context of mention m_i, and sum(m) is the number of
selected mentions, so the last term of the objective penalizes solutions whenever the
number of mentions exceeds 2, 3 or 4. Each feature corresponds to one coefficient,
which is changed by GA tuning. scope(m_i) is the interval between the start and end
positions of m_i in the question sentence, and I(m_i) and I(e_i) are the binary
indicator variables for selecting mention m_i and entity e_i, defined only when the
corresponding mention or entity exists. Constraint (1) enforces Con(overlap), and
constraint (2) ties each entity to its mention. The feature and constraint components
are described above.</p>
      </sec>
      <sec id="sec-3-6">
        <title>ILP Processing</title>
        <p>We use binary variables to indicate whether nodes and edges are selected, and we
integrate the features and constraints into the ILP objective function and constraints
as shown above, linearizing the objective by introducing auxiliary variables and the
corresponding spinoff constraints. The resulting ILP looks sophisticated, but since
the questions are short it is well within the regime of modern ILP solvers. In our
experiments we use PuLP and achieve run times usually below one second.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Parameter Tuning</title>
      <p>
        The parameters in the ILP objective function, about 30 in quantity, are optimized
by GA, a random search and optimization method based on natural selection and the
genetic mechanisms of living beings [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which requires no gradient computation. As the
target parameters are floats, a real-number coding scheme is selected.
      </p>
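      <p>A minimal real-coded GA in the spirit of this offline tuning can be sketched as follows; the population size, mutation scale and toy fitness function are illustrative assumptions, not the paper's actual settings:</p>

```python
import random

# Minimal real-coded GA sketch: individuals are float parameter vectors,
# evolved by truncation selection, one-point crossover and Gaussian mutation.
# The fitness below is a toy function, not the shared-task score.
def ga_maximize(fitness, dim, pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(dim)              # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(dim)] += rng.gauss(0.0, 0.1)  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness peaking at (0.5, 0.5, 0.5).
best = ga_maximize(lambda x: -sum((v - 0.5) ** 2 for v in x), dim=3)
print([round(v, 2) for v in best])
```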
      <p>The CCKS2017 shared task one uses F1 scores for mention discovery and entity
linking. We use the score in equation (5) as our fitness function in the GA; it grows
as the F1 scores increase.</p>
      <disp-formula><tex-math><![CDATA[
\mathrm{score}_e = \frac{|A \cap B|}{\alpha \cdot |A \cup B| + |B|} \cdot \beta \qquad (3)
]]></tex-math></disp-formula>
      <disp-formula><tex-math><![CDATA[
\mathrm{score}_l = \frac{|A \cap B|}{\alpha \cdot \mathrm{entityDiff}(A, B) + |B|} \cdot \beta \qquad (4)
]]></tex-math></disp-formula>
      <disp-formula><tex-math><![CDATA[
\mathrm{score} = \gamma \cdot \mathrm{score}_e + (1 - \gamma) \cdot \mathrm{score}_l \qquad (5)
]]></tex-math></disp-formula>
      <p>where alpha is the recognition-rate adjustment coefficient, beta is the bonus
coefficient, gamma is the fitting coefficient, and entityDiff(A, B) is the count of
elements that differ between sets A and B.</p>
    </sec>
    <sec id="sec-5">
      <title>Evaluation</title>
      <p>As shown in Table 1, our model makes reasonable predictions in QEDL. Further
experiments show that F1 scores of 0.804 in mention discovery and 0.56 in entity
linking are achieved on the test data, exceeding the second-ranked team in the shared
task by more than 0.10 on both scores.</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion and Future Work</title>
      <p>In this paper, an ILP-based approach has been proposed for CCKS2017 shared task
one. The approach harnesses the rich feature types provided by the question context
and the linked data source to constrain a semantic-coherence objective function,
using a GA to tune the parameters, and achieves the best performance in the task.
Future work includes mining additional features, improving the computational
efficiency of the online model, and assessing its generality on other corpora.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Soraluze</surname>
          </string-name>
          ,
          <string-name>
            <surname>Ander</surname>
          </string-name>
          , et al.
          <article-title>"Mention detection: First steps in the development of a Basque coreference resolution system</article-title>
          .
          <source>" KONVENS</source>
          . (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name><surname>Zhekova</surname>, <given-names>Desislava</given-names></string-name>, and
          <string-name><given-names>Sandra</given-names> <surname>Kübler</surname></string-name>
          .
          <article-title>"UBIU: A language-independent system for coreference resolution</article-title>
          .
          <source>" Proceedings of the 5th International Workshop on Semantic Evaluation. Association for Computational Linguistics</source>
          .(
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <surname>Heeyoung</surname>
          </string-name>
          , et al.
          <article-title>"Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task." Proceedings of the fifteenth conference on computational natural language learning: Shared task</article-title>
          .
          <source>Association for Computational Linguistics</source>
          .(
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Sil</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Florian</surname>
            <given-names>R</given-names>
          </string-name>
          .
          <article-title>The IBM systems for English entity discovery and linking and Spanish entity linking</article-title>
          at TAC 2014[C]//Text Analysis Conference (TAC), Gaithersburg, Maryland, USA. (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Han</surname>
            <given-names>X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun L</surname>
          </string-name>
          .
          <article-title>A generative entity-mention model for linking entities with knowledge base[C]//Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1</article-title>
          . Association for Computational Linguistics,
          <year>2011</year>
          :
          <fpage>945</fpage>
          -
          <lpage>954</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Shen</surname>
            <given-names>W</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Han</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>Entity linking with a knowledge base: Issues, techniques, and solutions</article-title>
          [J].
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          ,
          <volume>27</volume>
          (
          <issue>2</issue>
          ):
          <fpage>443</fpage>
          -
          <lpage>460</lpage>
          . (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Yahya</surname>
          </string-name>
          ,
          <string-name>
            <surname>Mohamed</surname>
          </string-name>
          , et al.
          <article-title>"Natural language questions for the web of data."</article-title>
          <source>Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics</source>
          , (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Hoffart</surname>
          </string-name>
          ,
          <string-name>
            <surname>Johannes</surname>
          </string-name>
          , et al.
          <article-title>"Robust disambiguation of named entities in text."</article-title>
          <source>Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics</source>
          , (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>Lingen</given-names>
            <surname>Ji</surname>
          </string-name>
          . <article-title>Survey on Genetic Algorithm</article-title>.
          <source>Computer Applications and Software</source>
          ,
          <volume>21</volume>
          (
          <issue>2</issue>
          ),
          <fpage>69</fpage>
          -
          <lpage>73</lpage>
          .(
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Jike</surname>
            <given-names>Ge</given-names>
          </string-name>
          , <string-name><given-names>Yuhui</given-names> <surname>Qiu</surname></string-name>,
          <string-name><given-names>Chunming</given-names> <surname>Wu</surname></string-name>
          , and <string-name><given-names>Guolin</given-names> <surname>Pu</surname></string-name>.
          <article-title>Survey on Genetic Algorithm</article-title>.
          <source>Computer Application Research</source>
          ,
          <volume>25</volume>
          (
          <issue>10</issue>
          ),
          <fpage>2911</fpage>
          -
          <lpage>2916</lpage>
          . (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>