<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Information Extraction from Microblog for Disaster Related Event</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Chintak Mandalia, LDRP-ITR</institution>
          ,
          <addr-line>Gandhinagar, Gujarat</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Prasenjit Majumder, Dhirubhai Ambani Institute of Information and Communication Technology</institution>
          ,
          <addr-line>Gandhinagar, Gujarat</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Rishab Singla, Dhirubhai Ambani Institute of Information and Communication Technology</institution>
          ,
          <addr-line>Gandhinagar, Gujarat</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Sandip Modha, Dhirubhai Ambani Institute of Information and Communication Technology</institution>
          ,
          <addr-line>Gandhinagar, Gujarat</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <abstract>
<p>This paper presents the participation of the Information Retrieval Lab (IRLAB) at DAIICT, Gandhinagar, India in the Data Challenge track of SMERP 2017. This year, the SMERP Data Challenge track offered a text extraction task on a tweet dataset from the Italy earthquake, with the objective of retrieving relevant tweets with high recall and high precision. We submitted three runs for this task and describe the different approaches adopted. First, we performed query expansion on the topics using WordNet. In the first run, we ranked tweets by cosine similarity against the topics. In the second run, the relevance score between tweets and a topic was calculated using the Okapi BM25 ranking function, and in the third run the relevance score was calculated using a language model with Jelinek-Mercer smoothing.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>Microblogs like Twitter can play a very important role in any disaster-related event. Twitter has a massive registered user base: as of 2016, it had more than 319 million monthly active users (https://en.wikipedia.org/wiki/Twitter). On the day of the 2016 U.S. presidential election, Twitter proved to be the largest source of breaking news, with 40 million tweets sent by 10 p.m. (Eastern Time) that day. Twitter enables humans to act as social sensors of the real world. It allows its registered users to post short texts, called tweets, of up to 140 characters.</p>
<p>
        Many incidents in the past have shown that social media is often the first medium through which news of a disaster such as an earthquake reaches people. Recently, many earthquakes have been reported first on Twitter and only later by other media [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Twitter can be used effectively by NGOs and government agencies to assess the ground reality in a disaster area and support their rescue operations.
      </p>
<p>
        The motivation of the Data Challenge track is to promote the development of IR methodologies that can extract important information from social media during emergency events, and to arrange a comparative evaluation of those methodologies [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The track offered a text retrieval task in two levels: in the first level, the organizers provided the tweet IDs of the first day of the Italy earthquake, and in the second level, the tweet IDs of tweets posted during the second day [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The organizers also provided topics in TREC style for which we had to extract and summarize relevant tweets.
      </p>
<p>The aim of the text retrieval subtask is to retrieve the top relevant tweets for each of the specified topics with high precision and high recall. The paper is organized as follows: Section 2 discusses related work; Section 3 describes the tweet dataset; Section 4 states the problem; Section 5 presents our methodology; Section 6 reports results and analysis; and Section 7 draws conclusions and discusses future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
<p>
        We started our work by studying the TREC 2015 Microblog track papers. TREC has run the Microblog track since 2011 with the objective of exploring new IR methodology for short text. CLIP [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] trained a Word2vec model on a four-year tweet corpus and used the Okapi BM25 relevance model to calculate scores. To refine the scores of the relevant tweets, tweets were rescored with the SVMrank package using the relevance scores from the previous stage.
      </p>
<p>
        The University of Waterloo [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] implemented the filtering tasks by building a term vector for each user profile and assigning different weights to different types of terms. To discover the most significant tokens in each profile, they calculated the pointwise KL divergence for each token and ranked the tokens by that score.
      </p>
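<p>The pointwise KL divergence step can be sketched as follows. This is a minimal illustration under our own assumptions (unigram counts, add-one smoothing of the background distribution), not the Waterloo team's actual code.</p>
      <preformat>
```python
import math
from collections import Counter

def pointwise_kl_scores(profile_tokens, background_tokens):
    """Rank profile tokens by pointwise KL divergence:
    p(t|profile) * log(p(t|profile) / p(t|background))."""
    prof = Counter(profile_tokens)
    back = Counter(background_tokens)
    n_prof = sum(prof.values())
    n_back = sum(back.values())
    scores = {}
    for tok, cnt in prof.items():
        p = cnt / n_prof
        # add-one smoothing so tokens unseen in the background get a nonzero probability
        q = (back[tok] + 1) / (n_back + len(back))
        scores[tok] = p * math.log(p / q)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```
      </preformat>
      <p>Tokens that are frequent in the profile but rare in the background collection receive the highest scores.</p>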
    </sec>
    <sec id="sec-3">
      <title>Tweet Dataset</title>
<p>
        The SMERP 2017 track organizers provided a dataset of IDs of tweets posted on Twitter during the earthquake in Italy in August 2016, along with a Python script that downloads the tweets through the Twitter API [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The text retrieval track is offered in two levels: tweets posted on the first day of the earthquake form the level-1 dataset, and tweets from days two and three form the level-2 dataset. The organizers provided 52,469 tweet IDs in level 1 and 19,751 tweet IDs in level 2, along with 4 topics in the TREC format.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Problem Statement</title>
<p>Given topics Q = {SMERP-T1, SMERP-T2, SMERP-T3, SMERP-T4} and a tweet dataset T = {T1, T2, ..., Tn}, we have to design a ranking function R: (Q, T) → {R1, ..., Rn} that ranks tweets against a given topic by relevance score, where Ri is the set of tweets relevant to the ith topic.</p>
    </sec>
    <sec id="sec-5">
      <title>Our Methodology</title>
<p>The track organizers provided 4 topics in TREC format, each consisting of a title, a description, and a narrative. These topics are essentially our queries, and the two terms are used interchangeably throughout the paper. In this section, we describe our approach.</p>
      <sec id="sec-5-1">
        <title>Topic Preprocessing</title>
<p>Each topic consists of a title, which states the general information need, and a description and a narrative, which are sentence- and paragraph-length texts that describe the overall picture. An example topic is shown below.</p>
        <p>&lt;top&gt;
&lt;num&gt; Number: SMERP-T1
&lt;title&gt; WHAT RESOURCES ARE AVAILABLE
&lt;desc&gt; Description:
Identify the messages which describe the availability of some resources.
&lt;narr&gt; Narrative:</p>
<p>A relevant message must mention the availability of some resource like food, drinking water, shelter, clothes, blankets, blood, human resources like volunteers, or resources to build or support infrastructure, like tents, water filters, and power supply. Messages informing about the availability of transport vehicles for assisting the resource distribution process would also be relevant, as would messages indicating services like free wifi, SMS, or calling facilities. In addition, any message or announcement about donation of money will also be relevant. However, generalized statements without reference to any resource would not be relevant.</p>
<p>To convert a topic into a query, we first removed stopwords and then ran the Stanford POS tagger (http://nlp.stanford.edu:8080/parser/) on the topics. All keywords with noun and verb labels were extracted and added to the query. We believe the topics are extremely vague, so human intervention is required to build the query.</p>
      </sec>
      <sec id="sec-5-2">
        <title>Query Expansion</title>
<p>We used the lexical database WordNet (https://wordnet.princeton.edu/) for query/topic expansion. WordNet groups English words into sets of synonyms called synsets. For each term in a query, we extracted the top 2 synonyms from WordNet and added them to the query, giving the original and the expanded terms equal weight.</p>
      </sec>
      <sec id="sec-5-3">
        <title>Tweet Preprocessing</title>
<p>After downloading the tweets, non-English tweets were filtered out. Tweets contain smileys, hashtags, and many special characters. We did not consider retweets or tweets consisting only of hashtags, emoticons, or special characters. We also ignored tweets with fewer than 5 words, and removed all stopwords and non-ASCII characters from the remaining tweets.</p>
      </sec>
      <sec id="sec-5-4">
        <title>Relevance Score</title>
<p>We submitted two runs in the first level and three runs in the second level of the text retrieval track, using a different retrieval technique for each. We discuss each technique below.</p>
      </sec>
      <sec id="sec-5-5">
<title>Relevance Score Using Cosine Similarity</title>
<p>In the first run, we used the cosine similarity between tweets and the expanded topic to calculate the relevance score.</p>
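<p>The cosine scoring can be sketched as below, using raw term-frequency vectors; the exact term weighting in our run may differ.</p>
        <preformat>
```python
import math
from collections import Counter

def cosine(query_tokens, tweet_tokens):
    """Cosine similarity between term-frequency vectors of query and tweet."""
    q, d = Counter(query_tokens), Counter(tweet_tokens)
    dot = sum(q[t] * d[t] for t in q)
    nq = math.sqrt(sum(v * v for v in q.values()))
    nd = math.sqrt(sum(v * v for v in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0
```
        </preformat>
        <p>Tweets are then ranked per topic by this score in descending order.</p>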
      </sec>
      <sec id="sec-5-6">
<title>Tweet Relevance Score Using the Okapi BM25 Model</title>
<p>In the second run, we used the Okapi BM25 ranking function to calculate the relevance score between tweets and the expanded query.</p>
        <p>We set the BM25 model parameters to b = 0.75 and k1 = 0.2.</p>
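<p>One common formulation of the Okapi BM25 score, with the parameter values we used, can be sketched as follows; the idf variant and other details of the implementation we ran may differ.</p>
        <preformat>
```python
import math
from collections import Counter

def bm25_score(query, doc_tokens, doc_freq, n_docs, avgdl, k1=0.2, b=0.75):
    """Okapi BM25 score of one document (tweet) for a bag-of-words query.
    doc_freq maps a term to the number of tweets containing it."""
    tf = Counter(doc_tokens)
    dl = len(doc_tokens)
    score = 0.0
    for term in query:
        df = doc_freq.get(term, 0)
        if df == 0:
            continue  # term absent from the collection
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * dl / avgdl))
        score += idf * norm
    return score
```
        </preformat>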
      </sec>
      <sec id="sec-5-7">
<title>Tweet Relevance Score Using a Language Model</title>
        <p>In the third run, we indexed all the tweets in Lucene. A language model with Jelinek-Mercer smoothing, which interpolates the document model with the collection model as (1 - λ)P(t|d) + λP(t|C), was used to retrieve tweets relevant to the query. We set a relevance threshold of 24 for deciding whether a tweet is relevant to a particular topic, and the smoothing parameter λ was set to 0.1.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Results and Analysis</title>
      <p>The SMERP track organizers used standard TREC metrics, namely Bpref, Precision@20, Recall@1000, and MAP, to evaluate the runs submitted by all teams, with Bpref as the primary metric for ranking teams. Table 1 and Table 2 show our results in the two levels. In level 1, we achieved a higher Recall@1000 than the top team, dcu_ADAPT_run2; however, our Bpref was substantially lower. In the second run, we achieved better Precision@20, Recall@1000, and MAP than dcu_ADAPT_run2, but again reported a substantially lower Bpref, which we will investigate in the future.</p>
    </sec>
    <sec id="sec-7">
      <title>Conclusion and Future Work</title>
      <p>In this paper, we applied three different retrieval techniques, namely Okapi BM25, cosine similarity, and a language model with Jelinek-Mercer smoothing, to tweet extraction. Our results show that the BM25 model outperforms the other methods in terms of Bpref, Precision@20, Recall@1000, and mean average precision (MAP). Our system nevertheless reported a poor Bpref score in both levels, which will be investigated further. We also note that the topics read more like questions, so text features such as named entities, verb phrases, and relations should be considered in the ranking score in addition to the raw tweet text. Further, a ranking system based on deep neural networks and logistic regression could be explored for better results.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>SMERP</surname>
          </string-name>
          <article-title>ECIR 2017 guidelines</article-title>
          , http://www.computing.dcu.ie/~dganguly/smerp2017/
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bagdouri</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oard</surname>
            ,
            <given-names>D.W.</given-names>
          </string-name>
          : CLIP at TREC 2015:
          <article-title>Microblog and LiveQA</article-title>
          . In :TREC (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roegiest</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Clarke</surname>
            ,
            <given-names>C.L.</given-names>
          </string-name>
          : University of Waterloo at
          <article-title>TREC 2015 Microblog Track</article-title>
          . In : TREC (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roegiest</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clarke</surname>
            ,
            <given-names>C.L.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Simple dynamic emission strategies for microblog filtering</article-title>
          .
          <source>In : Proc. 39th International ACM SIGIR conference on Research and Development in Information Retrieval</source>
          , pp.
          <fpage>1009</fpage>
          -
          <lpage>1012</lpage>
          . ACM (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Sakaki</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Okazaki</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Matsuo</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Earthquake shakes Twitter users: real-time event detection by social sensors</article-title>
          .
          <source>In: Proc. 19th international conference on World wide web</source>
          , pp.
          <fpage>851</fpage>
          -
          <lpage>860</lpage>
          . ACM (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>