<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Summarizing Disaster Related Event from Microblog</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Chintak Soni, LDRP-ITR</institution>
          ,
          <addr-line>Gandhinagar, Gujarat</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Prasenjit Majumder, Dhirubhai Ambani Institute of Information and Communication Technology</institution>
          ,
          <addr-line>Gandhinagar, Gujarat</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Rishab Singla, Dhirubhai Ambani Institute of Information and Communication Technology</institution>
          ,
          <addr-line>Gandhinagar, Gujarat</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Sandip Modha, Dhirubhai Ambani Institute of Information and Communication Technology</institution>
          ,
          <addr-line>Gandhinagar, Gujarat</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <abstract>
        <p>The Information Retrieval Lab at DA-IICT, India participated in the text summarization task of the SMERP 2017 Data Challenge track. The track organizers provided the Italy earthquake tweet dataset along with a set of topics describing important information required during a disaster-related incident. The goal of the task is to assess how well a participant's system summarizes, in 300 words, the tweets relevant to a given topic. We have formulated text summarization as a clustering problem, and our approach is based on extractive summarization. We submitted runs in both levels with different methodologies. We performed query expansion on the topics using WordNet. In the first level, we calculated the cosine similarity score between tweets and the expanded query; in the second level, we used a language model with Jelinek-Mercer smoothing to calculate the relevance score between tweets and the expanded query. We selected tweets above a relevance threshold as the initial candidate tweets for the summary of each query. To ensure novelty, Jaccard similarity is used to create a cluster for each topic. We report results in terms of ROUGE-1, ROUGE-2, ROUGE-L and ROUGE-SU4.</p>
      </abstract>
      <kwd-group>
        <kwd>Microblog</kwd>
        <kwd>Information Retrieval</kwd>
        <kwd>Disaster</kwd>
        <kwd>WordNet</kwd>
        <kwd>BM25</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Microblogs like Twitter provide a unique crowdsourcing platform where people across the world can post their opinions or observations about real-world events. Twitter is a real-time data source with massive user-generated content. Since tweets are posted by many users with diverse views, many tweets have redundant content. Owing to the enormous volume of tweets, tweet visualization is a major challenge. We can address this challenge by creating a summary from the tweets relevant to a given topic.</p>
      <p>The aim of the text summarization Data Challenge track is to evaluate and benchmark different summarization systems on a standard social media dataset. The track is offered in two levels. In the first level, tweets posted on the first day of the earthquake in Italy were provided; tweets posted on the second and third days of the earthquake were provided in the second level.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>Summarization methods can be divided into two types: (i) extractive summarization and (ii) abstractive summarization. We have focused on extractive summarization. Extractive summarization methods are further divided into three types: (i) graph based, (ii) cluster based, and (iii) centroid based.</p>
      <p>
        TREC (http://trec.nist.gov/) has run a Microblog track since 2011, starting with an ad hoc retrieval task and converging to real-time summarization in 2016. CLIP [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] used a word-embedding technique to expand the query. They used the BM25 model to calculate the relevance score between tweets and the query; for summarization, they used Jaccard similarity across relevant tweets. Luchen Tan et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] used a simple keyword-matching technique which assigns more weight to original terms than to expanded terms. For summarization, they used simple word overlap.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Problem Statement</title>
      <p>Given topics Q = &lt;SMERP-T1, SMERP-T2, SMERP-T3, SMERP-T4&gt; and a tweet dataset T = &lt;T1, T2, ..., Tn&gt;, we need to compute the relevance score between tweets and topics in order to create a topic-wise summary S = &lt;SQ1, ..., SQn&gt;, where SQi is the set of topic-wise relevant and novel tweets. We can model the topic-specific summary as below.</p>
      <sec id="sec-3-1">
        <title>Summary Model</title>
        <p>SQi = &lt;T1, T2, ..., Tn&gt; where Ti, Tj ∈ T</p>
        <p>For a given topic, the relevance score between a tweet and the topic must be greater than a specified threshold Trel. In addition, the selected tweets should be novel, i.e., the similarity between any two tweets in the summary should be less than the novelty threshold Tnov. If any tweet Ti is included in the summary for a particular topic, it should satisfy the following constraints:</p>
        <p>Length of the summary for a profile (SQi) &lt;= 300 words</p>
        <p>Relevance score(ti, Qi) &gt; Trel</p>
        <p>Sim(ti, tj) &lt; Tnov for all tj ∈ SQi</p>
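        <p>Membership of a tweet in SQi can be sketched as a single predicate over these three constraints. The function and parameter names below are illustrative; the relevance and similarity scores are assumed to be computed elsewhere:</p>

```python
def satisfies_constraints(rel_score, sims_to_summary, summary_len, tweet_len,
                          t_rel, t_nov, max_words=300):
    """Check whether tweet ti may be added to summary SQi.

    rel_score       -- Relevance score(ti, Qi)
    sims_to_summary -- Sim(ti, tj) for every tj already in SQi
    summary_len     -- current word length of SQi
    tweet_len       -- word length of ti
    """
    return (rel_score > t_rel                            # Relevance(ti, Qi) > Trel
            and all(s < t_nov for s in sims_to_summary)  # Sim(ti, tj) < Tnov
            and summary_len + tweet_len <= max_words)    # |SQi| <= 300 words
```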
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Our Methodology</title>
      <p>Four topics were provided in TREC format by the track organizers. Each topic consists of a title, a description, and a narrative; topics may be referred to as queries in this paper. Below, we elaborate our approach.</p>
      <sec id="sec-4-1">
        <title>Topic Preprocessing</title>
        <p>Each topic consists of a title, which states the general information need; a description, which is a sentence long; and a narrative, which is a paragraph long and gives an elaborate picture of the topic. A sample topic is shown below.</p>
        <p>&lt;top&gt;&lt;num&gt; Number: SMERP-T4
&lt;title&gt;WHAT ARE THE RESCUE ACTIVITIES OF VARIOUS NGOs /
GOVERNMENT ORGANIZATIONS
&lt;desc&gt; Description:</p>
        <p>Identify the messages which describe on-ground rescue activities of different
NGOs and Government organizations.</p>
        <p>&lt;narr&gt; Narrative:</p>
        <p>A relevant message must contain information about relief-related activities of
different NGOs and Government organizations engaged in rescue and relief operation.
Messages that contain information about the volunteers visiting different geographical
locations would also be relevant. Messages indicating that organizations are
accumulating money and other resources will also be relevant. However, messages that do not
contain the name of any NGO / Government organization would not be relevant.</p>
        <p>Topic-to-query conversion starts with stopword removal. We then run the Stanford POS tagger (http://nlp.stanford.edu:8080/parser/), and the keywords labeled as nouns and verbs are extracted and added to the query. Because the topics are vague, the final query is built with human intervention.</p>
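        <p>The automatic part of this step can be sketched as below. A tiny hand-written POS lexicon stands in for the Stanford POS tagger, which in real runs tags arbitrary text; the stopword list is likewise a toy:</p>

```python
# Toy POS lexicon standing in for the Stanford POS tagger used in the paper;
# a real run would tag arbitrary text instead of looking words up.
TOY_POS = {
    "rescue": "NN", "activities": "NNS", "identify": "VB",
    "messages": "NNS", "describe": "VBP", "organizations": "NNS",
    "different": "JJ",
}
STOPWORDS = {"the", "which", "of", "and", "are", "what"}

def topic_to_query(topic_text):
    """Stopword removal, then keep only noun- and verb-tagged keywords."""
    tokens = [t for t in topic_text.lower().split() if t not in STOPWORDS]
    return [t for t in tokens if TOY_POS.get(t, "").startswith(("NN", "VB"))]
```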
      </sec>
      <sec id="sec-4-2">
        <title>Topic Expansion</title>
        <p>We have used the lexical database WordNet (https://wordnet.princeton.edu/) for topic expansion; WordNet groups English words into sets of synonyms called synsets. For each query term, the top two synonyms are extracted from WordNet and added to the query.</p>
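        <p>A minimal sketch of this expansion step, with a small hand-written synonym table in place of the real WordNet synsets (the table contents and helper name are illustrative):</p>

```python
# Hand-written synonym lists standing in for WordNet synsets.
TOY_SYNONYMS = {
    "rescue": ["deliverance", "saving", "recovery"],
    "quake": ["earthquake", "temblor", "tremor"],
}

def expand_query(terms, synonyms=TOY_SYNONYMS, top_k=2):
    """Append the top-k synonyms of each query term, keeping the originals."""
    expanded = list(terms)
    for term in terms:
        expanded.extend(synonyms.get(term, [])[:top_k])
    return expanded
```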
      </sec>
      <sec id="sec-4-3">
        <title>Tweet Filtering</title>
        <p>After downloading the tweets, only English tweets were retained. Retweets and tweets containing only hashtags, emoticons, or special characters were discarded, and tweets with fewer than 5 words were ignored. We removed all stopwords and non-ASCII characters from the tweets.</p>
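        <p>These filtering rules can be sketched as below. This is a simplified version: language detection is omitted, and a retweet is recognized only by an "RT" prefix:</p>

```python
import re

def clean_tweet(text):
    """Strip non-ASCII characters and collapse whitespace."""
    ascii_only = text.encode("ascii", errors="ignore").decode()
    return re.sub(r"\s+", " ", ascii_only).strip()

def keep_tweet(text, min_words=5):
    """Drop retweets, symbol-only tweets, and tweets shorter than min_words."""
    if text.startswith("RT "):                           # drop retweets
        return False
    cleaned = clean_tweet(text)
    words = [w for w in cleaned.split() if re.search(r"[A-Za-z0-9]", w)]
    if len(words) < min_words:                           # too short
        return False
    if all(w.startswith("#") for w in cleaned.split()):  # hashtags only
        return False
    return True
```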
      </sec>
      <sec id="sec-4-4">
        <title>Relevance Score</title>
        <p>We have used cosine similarity to calculate the relevance score between the tweet and the expanded query in the first level. In the second level, we retrieved relevant tweets using a language model with Jelinek-Mercer smoothing with parameter λ = 0.1.</p>
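        <p>The two scoring functions can be sketched as below. These are minimal term-frequency implementations rather than the exact retrieval systems used in our runs; in this formulation λ weights the collection model:</p>

```python
import math
from collections import Counter

def cosine(query_tokens, doc_tokens):
    """First level: cosine similarity between term-frequency vectors."""
    q, d = Counter(query_tokens), Counter(doc_tokens)
    dot = sum(q[w] * d[w] for w in q)
    nq = math.sqrt(sum(v * v for v in q.values()))
    nd = math.sqrt(sum(v * v for v in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0

def jm_score(query_tokens, doc_tokens, collection_tokens, lam=0.1):
    """Second level: query log-likelihood under a unigram language model
    with Jelinek-Mercer smoothing,
        P(w|d) = (1 - lam) * P_ml(w|d) + lam * P(w|C).
    """
    doc_tf, col_tf = Counter(doc_tokens), Counter(collection_tokens)
    doc_len, col_len = len(doc_tokens), len(collection_tokens)
    score = 0.0
    for w in query_tokens:
        p_doc = doc_tf[w] / doc_len if doc_len else 0.0
        p_col = col_tf[w] / col_len if col_len else 0.0
        p = (1 - lam) * p_doc + lam * p_col
        if p == 0.0:
            return float("-inf")  # query term absent from the collection
        score += math.log(p)
    return score
```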
      </sec>
      <sec id="sec-4-5">
        <title>Novelty Detection</title>
        <p>Tweets are posted by many users, at different times, from different parts of the world, which makes creating a text summary from them a challenging task. Ideally, the summary should include all relevant tweets under the constraint that it contains no redundant information. Tweet summarization is a multiple-document summarization problem in which each tweet can be considered a single document.</p>
        <sec id="sec-4-5-1">
          <title>Summary Construction</title>
          <p>To create the summary, we select for each topic the top tweets whose relevance score is greater than a specified relevance threshold Trel, whose value we set empirically. For each subsequent eligible tweet, we calculate its similarity with the tweets already added to the summary to ensure novelty between them. A Jaccard threshold Tnov = 0.6 was again decided empirically, and tweets below it were added to the summary: the lower the similarity score, the greater the dissimilarity, ensuring more novelty.</p>
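          <p>A minimal sketch of this selection loop, assuming tweets arrive already ranked by relevance score and using word-set Jaccard similarity for the novelty check (Tnov = 0.6, summary capped at 300 words):</p>

```python
def jaccard(a, b):
    """Jaccard similarity between the word sets of two tweets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def build_summary(ranked_tweets, t_nov=0.6, max_words=300):
    """Greedily add relevance-ranked tweets that pass the novelty check."""
    summary, words = [], 0
    for tweet in ranked_tweets:
        n = len(tweet.split())
        if words + n > max_words:
            continue  # keep the summary within the 300-word budget
        if all(jaccard(tweet, s) < t_nov for s in summary):
            summary.append(tweet)
            words += n
    return summary
```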
          <p>In both levels, our summarization method remains the same; only the tweet retrieval technique differs.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Results</title>
      <p>The SMERP 2017 track organizers considered ROUGE-L as the primary metric for evaluating the performance of all runs. The following tables show our results in comparison with the top run.</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>In this paper, we have implemented a method based on extractive summarization. Table 1 and Table 2 show that our results are lower than those of IIEST. In future work, we will investigate this underperformance and carry out a post-hoc error analysis, and we would like to design a summarization system based on deep neural networks and logistic regression.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>SMERP</surname>
          </string-name>
          <article-title>ECIR 2017 guidelines</article-title>
          , http://www.computing.dcu.ie/~dganguly/smerp2017/
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bagdouri</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oard</surname>
            ,
            <given-names>D.W.</given-names>
          </string-name>
          : CLIP at TREC 2015:
          <article-title>Microblog and LiveQA</article-title>
          . In :TREC (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roegiest</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Clarke</surname>
            ,
            <given-names>C.L.</given-names>
          </string-name>
          : University of Waterloo at
          <article-title>TREC 2015 Microblog Track</article-title>
          . In : TREC (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roegiest</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clarke</surname>
            ,
            <given-names>C.L.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Simple dynamic emission strategies for microblog filtering</article-title>
          .
          <source>In : Proc. 39th International ACM SIGIR conference on Research and Development in Information Retrieval</source>
          ,pp.
          <fpage>1009</fpage>
          -
          <lpage>1012</lpage>
          . ACM (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Sakaki</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Okazaki</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Matsuo</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Earthquake shakes Twitter users: real-time event detection by social sensors</article-title>
          .
          <source>In: Proc. 19th international conference on World wide web</source>
          , pp.
          <fpage>851</fpage>
          -
          <lpage>860</lpage>
          . ACM (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>