<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Extractive Text Summarization using Meta-heuristic Approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Doppalapudi Venkata Pavan Kumar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Srigadha Shreyas Raj</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pradeepika Verma</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sukomal Pal</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Indian Institute of Technology (BHU)</institution>
          ,
          <addr-line>Varanasi</addr-line>
          ,
          <country country="IN">INDIA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>TIH, Indian Institute of Technology</institution>
          ,
          <addr-line>Patna</addr-line>
          ,
          <country country="IN">INDIA</country>
        </aff>
      </contrib-group>
      <abstract>
<p>This paper describes a system for Indian language extractive summarization developed by the SUMIL22 team for the ILSUM shared task of the FIRE 2022 conference[1][2]. The system picks sentences directly from the text using a ranking function to form the summary. In our approach, we first calculate, for each sentence, various text features such as sentence position, sentence length, sentence similarity, frequent words, and the presence of numerical data. These text features, together with their optimized weights, are then used to rank the sentences, and the summary is generated by selecting the top-ranked sentences. For optimizing the weights of the text features, we use a population-based meta-heuristic, the Genetic Algorithm (GA). We submitted three runs and, in the best run, obtained F-scores of 0.3843 for ROUGE-1, 0.2584 for ROUGE-2, 0.1997 for ROUGE-3, and 0.2190 for ROUGE-4.</p>
      </abstract>
      <kwd-group>
<kwd>Extractive Text Summarization</kwd>
        <kwd>Genetic Algorithm</kwd>
        <kwd>Text Features</kwd>
        <kwd>ROUGE</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>Information has become the most important part of our lives. People consult a large
number of sources, from news articles to social media posts, to stay informed.
Automatically generating summaries of larger texts is useful for compressing large amounts of
information into smaller texts, and it also saves a lot of time. Our project focuses on automatic
text summarization of news data. Text summarization is the creation of a shorter, more precise, and
more relevant summary of a larger text. Automatic summarization methods can
handle the ever-increasing amount of internet text data, allowing users to find and consume
important information more quickly.</p>
<p>Abstractive and extractive summarization are the two broad types of automatic text
summarization. Abstractive summarization methods attempt to create a summary by interpreting
the text with advanced natural language methods, producing a unique and more concise text
whose parts may not appear in the original article but that captures its most critical information.
This requires paraphrasing sentences and combining knowledge from the whole article,
similar to what a human-written abstract does. An
adequate abstractive summary is linguistically fluent and encapsulates the core information of the
input. On the other hand, extractive summarization methods create the summary by picking
sentences directly from the text, based on a ranking function, to form a relevant summary.
This method identifies the essential sections of the text, cropping out and joining together parts of
the content to produce an abridged version. We have used extractive text summarization for
the English language in our approach.</p>
<p>The rest of the article is organized as follows. We review the state-of-the-art approaches
to extractive and abstractive document summarization in Section 2. We then present
our proposed approach in Section 3. Section 4 discusses the results and analysis of the
work, followed by the conclusion and future scope in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature Survey</title>
      <p>
        A brief literature survey[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] of extractive and abstractive summarization is given below.
      </p>
      <p>
        Text Rank[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] is an unsupervised, graph-based method for summarizing text documents.
The authors considered the sentences of a document as the nodes of a graph, and the similarity between
the sentences is represented by the edges of the graph. The sentences of the documents are
transformed into vectors using different techniques such as bag-of-words, Tf-Idf and word2vec.
Sentences are then ranked using cosine similarity, and the top sentences are extracted to
form the summary. Next, Erkan and Radev[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] proposed a graph-based summarization system
known as LexRank. This system identifies the most central sentences in a cluster of sentences,
which gives sufficient information regarding the main topic of the cluster. The sentences of
the articles are converted into vectors by the Tf-Idf technique, and the sentence vectors are then
clustered in order to extract the summary of the article.
      </p>
      <p>
        Further, a heuristic approach for Telugu text summarization with improved sentence ranking
was proposed by Mamidala et al.[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In this work, extractive text summarization for the
Telugu language was done using optimized sentence ranking. The sentence scoring
mechanism was based on event and named-entity scores: sentences were scored by
applying statistical measures to event and named-entity features. Next, Verma and Om[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
proposed a multi-document summarization system based on maximum coverage and relevancy
with minimal redundancy. Here, summarization is done by generating a single document from all
the documents and then scoring each sentence based on different text features and their optimal
weights, assigned using Shark Smell Optimization (SSO). Moreover, Verma et al.[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ][
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] also
proposed some other effective methods for document summarization.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Text Summarization using the Proposed Approach</title>
      <p>In the proposed work, the task of summarization is carried out in four stages:
preprocessing of the input text, text feature analysis, optimal weight assignment for the text features,
and summary generation. These steps are described as follows.</p>
      <sec id="sec-2-1">
        <title>3.1. Preprocessing</title>
        <p>
          Preprocessing is the first step in our method. It covers removing stop words,
punctuation and unwanted data from the articles, and performing sentence tokenization and word
tokenization. Stop words are words that occur frequently but carry no specific
significance. Sentence tokenization divides the data into sentences, and word
tokenization divides the data into words. Stop-word removal removes all the stop words from the
data after word tokenization. We collected the stop words for the English language from the Natural
Language Toolkit (NLTK) library. We used the NLTK library for word tokenization; for
sentence tokenization, we used the RegexpTokenizer from NLTK with our own regular
expression. A minimal sketch of this pipeline is given below.
        </p>
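        <p>The following sketch illustrates this preprocessing stage, assuming NLTK and its stop-word corpus are installed; the regular expression below is illustrative only, as the exact pattern used in our runs is not reproduced here.</p>
        <preformat>
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer, word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

STOP_WORDS = set(stopwords.words("english"))
# Illustrative pattern: a sentence is a run of characters ending in ., ! or ?
sentence_tokenizer = RegexpTokenizer(r"[^.!?]+[.!?]")

def preprocess(article):
    """Return the raw sentences and their cleaned word lists."""
    sentences = sentence_tokenizer.tokenize(article)
    cleaned = [[w.lower() for w in word_tokenize(s)
                if w.isalnum() and w.lower() not in STOP_WORDS]
               for s in sentences]
    return sentences, cleaned
        </preformat>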
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Text Features Analysis</title>
        <p>
          We have used various text features for scoring the sentences of each news article. Let $s_{i,j}$
represent the $j$-th sentence of the $i$-th news article, and let $f_k$ represent the $k$-th text feature. Then $f_k(s_{i,j})$
represents the score of the $k$-th text feature for the sentence $s_{i,j}$. The text features used in our approach
are as follows[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
      <sec id="sec-2-2">
        <title>3.2.1. Sentence Position(1):</title>
        <p>
          In a article, sentences present at the start and end are more relevant than those present in the
middle of the article[
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Since, the primary cause and the crucial details are present at the start
and conclusions are present at the end of the articles. So, we assign the score to a sentence
based on the equation:
3.2.2. Sentence Length(2):
Sentences with longer lengths are more informative than sentences with fewer words. So, we
assign less weight to sentences with shorter lengths. We calculate the score of each sentence by
taking the ratio of the number of words in the sentence to the number of words in the longest
length sentence.
        </p>
        <p>2(, ) = 1 −
|() − | , ||</p>
        <p>(|, |)
1(, ) =
⃒⃒ 2 − ⃒⃒

2
(1)
(2)
(3)
(4)</p>
      </sec>
      <sec id="sec-2-3">
        <title>3.2.3. Sentence Similarity ($f_4$)</title>
        <p>
          Sentences that are similar to other sentences and share more common information are considered
more important. We calculate this feature score as the ratio of the size of the word intersection of
two sentences to the length of the compared sentence[
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
        <p>$$f_4(s_{i,j}) = \frac{\sum_{j'=1,\ j' \neq j}^{|S_i|} sim(s_{i,j}, s_{i,j'})}{|S_i|} \qquad (3)$$</p>
        <p>where $|S_i|$ is the number of sentences in the $i$-th article and $sim(s_{i,j}, s_{i,j'})$ is the word-overlap similarity between the two sentences.</p>
      </sec>
      <sec id="sec-2-4">
        <title>3.2.4. Sentences with Frequent Words ($f_5$)</title>
        <p>
          Words that are more frequent in the text are more important. Hence, the sentences containing
the frequent words are also given more weight than other sentences[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. For our approach, we have
taken the top 20% of words by frequency as the frequent words.
        </p>
        <p>$$f_5(s_{i,j}) = \frac{\sum_{w \in s_{i,j}} freq(w)}{|s_{i,j}|} \qquad (4)$$</p>
        <p>where $freq(w)$ is the frequency of the word $w$ if $w$ is a frequent word, and $0$ otherwise.</p>
      </sec>
      <sec id="sec-2-5">
        <title>3.2.5. Sentences with Numerical Data ($f_6$)</title>
        <p>Sentences containing numerical data are regarded as more informative, as numbers convey
more analytical information.</p>
        <p>$$f_6(s_{i,j}) = \frac{\sum_{w \in s_{i,j}} num(w)}{|s_{i,j}|} \qquad (5)$$</p>
        <p>where $num(w)$ is $1$ if the word $w$ is numerical data and $0$ otherwise.</p>
      </sec>
      </sec>
      <sec id="sec-2-6">
        <title>3.3. Optimal weights assignment to text features</title>
        <p>After computing the score of each sentence for every text feature, a population-based
meta-heuristic, the Genetic Algorithm (GA), is used to assign the weights to these text features.</p>
        <sec id="sec-2-6-1">
          <title>3.3.1. Overview of Genetic Algorithm</title>
          <p>
            The Genetic Algorithm is a model or abstraction of evolution through natural selection, based
on Charles Darwin’s theory of natural selection. It is a form of evolution that occurs on a
computer[
            <xref ref-type="bibr" rid="ref13">13</xref>
            ]. It was proposed by John Holland and his collaborators in the 1960s and 1970s.
Genetic Algorithms are adaptive and can be used to solve both constrained and unconstrained
search and optimization problems[
            <xref ref-type="bibr" rid="ref14">14</xref>
            ]. The Genetic Algorithm is based on the genetic
structure and behaviour of the population’s chromosomes. It is helpful when working with large and
complex datasets. Genetic Algorithms are built on the following principles.
          </p>
          <p>• Each chromosome represents a potential solution. As a result, the population is made up
of a collection of chromosomes.
• Each chromosome in the population is assigned a fitness value; the higher the fitness,
the better the solution.
• The best chromosomes among those in the population are used for
reproducing the offspring of the following generation.
• Because of crossover, the offspring created inherit characteristics from both parents.
• A minor change in the structure of a chromosome is called a mutation.</p>
          <p>The main advantages of GA over general optimization algorithms are that it supports
multi-objective optimization, works well in noisy environments, and can deal with complex
problems and with parallelism. The basic steps of GA are initialization, fitness evaluation,
selection, crossover, mutation, and termination. The fitness function varies with
the problem. The steps to obtain the optimized weights of the text features are as follows (we
treat our method as a maximization problem); a skeleton of the full loop is sketched below.</p>
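          <p>The following skeleton sketches the whole optimization loop, assuming the operator functions (select, crossover, mutate) defined in the following subsections and a fitness function as in Section 3.3.3; the names and default values are illustrative.</p>
          <preformat>
import numpy as np

def genetic_algorithm(fitness_fn, n=20, N=5, generations=200, rng=None):
    """Evolve n weight vectors of dimension N; return the fittest one."""
    rng = rng or np.random.default_rng()
    population = rng.random((n, N))                  # initialization in [0, 1)
    for _ in range(generations):
        fitness = np.array([fitness_fn(q) for q in population])
        parents = select(population, fitness, rng)   # elitist roulette wheel
        children = [mutate(crossover(*rng.choice(parents, 2), rng), rng)
                    for _ in range(n - len(parents))]
        population = np.vstack([parents] + children) # next generation
    fitness = np.array([fitness_fn(q) for q in population])
    return population[np.argmax(fitness)]            # termination: best weights
          </preformat>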
        </sec>
        <sec id="sec-2-6-2">
          <title>3.3.2. Initialization of population</title>
          <p>
            First, we generate random weights in (0,1) as the initial weights for the text features. These
are generated in the form of solutions $Q = \{q_1, q_2, q_3, ..., q_n\}$[
            <xref ref-type="bibr" rid="ref7">7</xref>
            ], where $n$ is the number
of solutions in each generation, and each solution can be further represented as $q_p =
\{q_{p,1}, q_{p,2}, ..., q_{p,N}\}$[
            <xref ref-type="bibr" rid="ref7">7</xref>
            ], where $N$ is the number of dimensions of each solution. Each element of
a solution can be represented as $q_{p,k}$, where $p \in \{1, 2, ..., n\}$ and $k \in \{1, 2, ..., N\}$[
            <xref ref-type="bibr" rid="ref7">7</xref>
            ].
We used the Python library ‘numpy.random’ to generate this initial population.
          </p>
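          <p>A minimal sketch of this step, with $n = 20$ solutions and $N = 5$ text features as in our runs:</p>
          <preformat>
import numpy as np

n, N = 20, 5                       # solutions per generation, text features
rng = np.random.default_rng()
population = rng.random((n, N))    # q[p, k] drawn uniformly from [0, 1)
          </preformat>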
        </sec>
        <sec id="sec-2-6-3">
          <title>3.3.3. Fitness evaluation</title>
          <p>
            Now, we have to evaluate the fitness of each solution. To calculate the fitness of a
solution, we rank the sentences in each article, taking each element of the solution as the
weight of the corresponding text feature[
            <xref ref-type="bibr" rid="ref7">7</xref>
            ]. After ranking the sentences, we select the top twenty
percent of the ranked sentences in each article as the generated summary. Next, we compare the generated
summary with the reference summary for each article using a fitness function, and the sum of the results
obtained is the fitness value of the given solution. We have also optimized the fitness evaluation
by passing the fitness values of the previous generation to the fitness function; this saves time
by not recalculating the fitness of solutions carried over from the previous generation.
Sentence Ranking: For scoring the sentences of the $i$-th article with a solution $q_p$, we use the following equation.
          </p>
          <p>$$score(s_{i,j}, q_p) = \sum_{k=1}^{N} q_{p,k} \times f_k(s_{i,j}) \qquad (6)$$</p>
          <p>where $q_{p,k}$ represents the $k$-th dimension of the $p$-th solution and $f_k(s_{i,j})$ represents the $k$-th
feature score for the $j$-th sentence in the $i$-th article.</p>
          <p>After ranking the sentences, the top twenty percent of the ranked sentences in each article are selected as
the generated summary. We extract these sentences from each article for every solution. These
extracted sentences are then compared with the reference summary using the fitness function.</p>
          <p>Fitness Function: Here, we use the ROUGE metrics ROUGE-1 and ROUGE-2 for
calculating the similarity between the generated summary and the reference summary. The
fitness of a solution is the sum, over all articles, of the similarity between the generated
summary and the reference summary:</p>
          <p>$$fitness(q_p) = \sum_{i=1}^{e} \frac{ROUGE_1[F_i] + ROUGE_2[F_i]}{2} \qquad (7)$$</p>
          <p>where $e$ is the total number of articles and $ROUGE_1[F_i]$ is the F1-score of
ROUGE-1 for the $i$-th article.</p>
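          <p>A minimal sketch of this fitness function, assuming the third-party ‘rouge-score’ package for the ROUGE computation and the feature_scores helper sketched earlier; both are illustrative stand-ins, not the exact code of our submission.</p>
          <preformat>
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)

def fitness(weights, articles, references, ratio=0.2):
    """Sum over articles of the mean ROUGE-1/ROUGE-2 F1-score (Eq. 7).
    articles: lists of raw sentences; references: reference summaries."""
    total = 0.0
    for sentences, reference in zip(articles, references):
        scores = feature_scores([s.split() for s in sentences])  # crude tokenization
        ranked = sorted(range(len(sentences)), reverse=True,
                        key=lambda j: sum(w * f for w, f in zip(weights, scores[j])))
        k = max(1, int(ratio * len(sentences)))
        summary = " ".join(sentences[j] for j in sorted(ranked[:k]))
        r = scorer.score(reference, summary)
        total += (r["rouge1"].fmeasure + r["rouge2"].fmeasure) / 2
    return total
          </preformat>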
        </sec>
        <sec id="sec-2-6-4">
          <title>3.3.4. Selection</title>
          <p>After calculating the fitness value of each solution, we select the parents from
the set of solutions to produce the offspring. We select half of the solutions for the
mating pool. For selecting the parents, we use elitist roulette-wheel selection:
first, it selects the top 25 percent of the solutions as parents based on their fitness values; then
it selects another 25 percent of the solutions from the remaining 75 percent with the
use of roulette-wheel selection. A sketch is given below.</p>
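          <p>A minimal sketch of this elitist roulette-wheel selection, assuming non-negative fitness values:</p>
          <preformat>
import numpy as np

def select(population, fitness, rng):
    """Keep the top 25% by fitness, then roulette-wheel 25% of the rest."""
    n = len(population)
    order = np.argsort(fitness)[::-1]                  # best solutions first
    elite, rest = order[: n // 4], order[n // 4 :]
    probs = fitness[rest] / fitness[rest].sum()        # fitness-proportional
    wheel = rng.choice(rest, size=n // 4, replace=False, p=probs)
    return population[np.concatenate([elite, wheel])]  # mating pool: n / 2
          </preformat>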
        </sec>
        <sec id="sec-2-6-5">
          <title>3.3.5. Crossover</title>
          <p>
            Crossover, also known as recombination, is used to combine the information of two
solutions to produce a new offspring. There are many different types of crossover available,
but we have used simple single-point crossover: we select two parents from the mating pool
and select a crossover point, and a new offspring is produced by taking the data before the crossover point
from the first parent and combining it with the data after the crossover point from the second parent[
            <xref ref-type="bibr" rid="ref15">15</xref>
            ].
          </p>
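            <p>A minimal sketch of the single-point crossover over weight vectors:</p>
            <preformat>
import numpy as np

def crossover(parent_a, parent_b, rng):
    """Head of parent_a joined to tail of parent_b at a random point."""
    point = rng.integers(1, len(parent_a))   # crossover point, never at an end
    return np.concatenate([parent_a[:point], parent_b[point:]])
            </preformat>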
        </sec>
        <sec id="sec-2-6-6">
          <title>3.3.6. Mutation</title>
          <p>Mutation is performed on the newly created offspring to bring some changes into them. It is used
to bring diversity from one generation of solutions to the next. We have
applied mutation at two different points for each solution in the offspring. For each offspring,
we select a point in the first half of the offspring, draw a random value in (-0.25, 0.25), and add it to
the value at the selected point. Then, again for each offspring, we select a point in the second half of
the offspring, draw a random value in (-0.25, 0.25), and add it to the value at the selected point.
After performing mutation, all the new offspring and the parents in the mating pool are
combined to form the solutions for the next generation.</p>
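          <p>A minimal sketch of this two-point mutation:</p>
          <preformat>
import numpy as np

def mutate(child, rng):
    """Perturb one point in each half of the solution by a value in (-0.25, 0.25)."""
    child = child.copy()
    half = len(child) // 2
    for lo, hi in ((0, half), (half, len(child))):
        point = rng.integers(lo, hi)             # one point per half
        child[point] += rng.uniform(-0.25, 0.25)
    return child
          </preformat>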
        </sec>
        <sec id="sec-2-6-7">
          <title>3.3.7. Termination</title>
          <p>Once the predefined number of iterations is completed, the best solution, i.e., the one with the highest
fitness value, is selected as the final weights of the text features.</p>
        </sec>
      </sec>
      <sec id="sec-2-7">
        <title>3.4. Summary generation</title>
        <p>Once the optimal weights for the text features are obtained with the use of the Genetic Algorithm,
we generate the final summary. For producing the summary, we first score the sentences in
each article with the optimized weights of the features. If the GA gives the optimal weights for the
text features at solution $q_O$, then the score of $s_{i,j}$ is</p>
        <p>$$score(s_{i,j}, q_O) = \sum_{k=1}^{N} q_{O,k} \times f_k(s_{i,j}) \qquad (8)$$</p>
        <p>After ranking the sentences, the top twenty percent of the ranked sentences in each article are extracted
as our summary.</p>
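        <p>A minimal sketch of this final step, reusing the illustrative helpers introduced above:</p>
        <preformat>
def generate_summary(sentences, best_weights, ratio=0.2):
    """Score sentences with the GA-optimized weights (Eq. 8), keep top 20%."""
    scores = feature_scores([s.split() for s in sentences])
    ranked = sorted(range(len(sentences)), reverse=True,
                    key=lambda j: sum(w * f for w, f in zip(best_weights, scores[j])))
    k = max(1, int(ratio * len(sentences)))
    return " ".join(sentences[j] for j in sorted(ranked[:k]))  # document order
        </preformat>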
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Results and Analysis</title>
      <p>For the English dataset, for each sentence in each of the articles, we calculate the values of all
the text features. Then, we initialize 20 random individuals as the initial population of weights for the
text features. The GA is then run for 200 generations. The optimized weights obtained using
the GA (Section 3.3) for the above-described text features (Section 3.2) are 0.86276778, 0.514368, 0.3003737,
0.49962974 and 0.01303291, respectively, which are shown in Figure 5.</p>
      <p>The Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scoring algorithm calculates
the similarity between a text and a collection of reference texts, and thereby determines the quality of
the generated text. The ROUGE score is the official metric for the ILSUM track.</p>
      <p>Using these weights, we calculated the weighted average of the text feature scores of each
sentence. We took the top twenty percent of the ranked sentences in each article as the generated
summary. The F-measure of the ROUGE-1, ROUGE-2 and ROUGE-4 values of the generated summary,
along with our rank on the English test dataset, is shown in Table 1. The detailed evaluation
results for the English test dataset, containing the F1-score, precision and recall for ROUGE-1, ROUGE-2,
ROUGE-3 and ROUGE-4, are shown in Table 2. From the obtained evaluation results, it can be
seen that our technique does not reach the expected results of at least a 50% F1-score.
This is mainly because we produce 20% of the sentences as our summary whereas the actual summary
can contain any number of sentences, and also because the regular expression we used for sentence
tokenization produced shorter sentences than expected.</p>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <p>This work reports our participation in the shared task on Automatic Text Summarization for English Language
Text at FIRE 2022. We experimented with several types of algorithms for the shared task,
including abstractive text summarization models such as BERT and BART. For extractive text summarization,
we performed sentence ranking using text features. The text features are weighted using a
genetic algorithm, which gave us the better results. However, the scores can be improved further
by slightly tweaking the genetic algorithm, by training for many more generations, and by
increasing the number of text features. In the future, we can use this approach to summarize documents
written in other Indian languages such as Hindi, Bengali, Tamil and Telugu.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgement</title>
      <p>This work is supported by TIH, Indian Institute of Technology, Patna, and is in line with its
theme.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <article-title>Findings of the first shared task on indian language summarization (ilsum): Approaches, challenges and the path ahead</article-title>
          ,
          <source>in: Working Notes of FIRE 2022 - Forum for Information Retrieval Evaluation</source>
          , Kolkata, India, December 9-13,
          <year>2022</year>
          , CEUR Workshop Proceedings, CEUR-WS.org, 2022.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <article-title>Fire 2022 ilsum track: Indian language summarization</article-title>
          ,
          <source>in: Proceedings of the 14th Forum for Information Retrieval Evaluation</source>
          , ACM,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Om</surname>
          </string-name>
          ,
          <article-title>A comparative analysis on hindi and english extractive text summarization</article-title>
          ,
          <source>ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP) 18</source>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>39</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Mihalcea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Tarau</surname>
          </string-name>
          ,
          <article-title>TextRank: Bringing order into text</article-title>
          ,
          <source>in: Proceedings of the 2004 conference on empirical methods in natural language processing</source>
          ,
          <year>2004</year>
          , pp.
          <fpage>404</fpage>
          -
          <lpage>411</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Erkan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Radev</surname>
          </string-name>
          ,
          <article-title>LexRank: Graph-based lexical centrality as salience in text summarization</article-title>
          ,
          <source>Journal of Artificial Intelligence Research</source>
          <volume>22</volume>
          (
          <year>2004</year>
          )
          <fpage>457</fpage>
          -
          <lpage>479</lpage>
          . URL: https://doi.org/10.1613/jair.1523. doi:10.1613/jair.1523.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K. K.</given-names>
            <surname>Mamidala</surname>
          </string-name>
          , et al.,
          <article-title>A heuristic approach for telugu text summarization with improved sentence ranking</article-title>
          ,
          <source>Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12</source>
          (
          <year>2021</year>
          )
          <fpage>4238</fpage>
          -
          <lpage>4243</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Om</surname>
          </string-name>
          ,
          <article-title>Mcrmr: Maximum coverage and relevancy with minimal redundancy based multi-document summarization</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>120</volume>
          (
          <year>2019</year>
          )
          <fpage>43</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <article-title>An approach for extractive text summarization using fuzzy evolutionary and clustering algorithms</article-title>
          ,
          <source>Applied Soft Computing</source>
          <volume>120</volume>
          (
          <year>2022</year>
          )
          <fpage>108670</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <article-title>A fusion of variants of sentence scoring methods and collaborative word rankings for document summarization</article-title>
          ,
          <source>Expert Systems</source>
          (
          <year>2022</year>
          )
          e12960.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Om</surname>
          </string-name>
          ,
          <article-title>A novel approach for text summarization using optimal combination of sentence scoring methods</article-title>
          ,
          <source>Sādhanā 44</source>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Fattah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <article-title>Automatic text summarization</article-title>
          ,
          <source>World Academy of Science, Engineering and Technology</source>
          <volume>37</volume>
          (
          <year>2008</year>
          )
          <fpage>192</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.</given-names>
            <surname>Achananuparp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <article-title>The evaluation of sentence similarity measures</article-title>
          ,
          <source>in: International Conference on data warehousing and knowledge discovery</source>
          , Springer,
          <year>2008</year>
          , pp.
          <fpage>305</fpage>
          -
          <lpage>316</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Forrest</surname>
          </string-name>
          ,
          <article-title>Genetic algorithms: principles of natural selection applied to computation</article-title>
          ,
          <source>Science</source>
          <volume>261</volume>
          (
          <year>1993</year>
          )
          <fpage>872</fpage>
          -
          <lpage>878</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Beasley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Bull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <article-title>An overview of genetic algorithms: Part 1, fundamentals</article-title>
          ,
          <source>University Computing 15</source>
          (
          <year>1993</year>
          )
          <fpage>56</fpage>
          -
          <lpage>69</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Umbarkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. D.</given-names>
            <surname>Sheth</surname>
          </string-name>
          ,
          <article-title>Crossover operators in genetic algorithms: a review</article-title>
          ,
          <source>ICTACT Journal on Soft Computing 6</source>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>