<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Topic-aware video summarization technique for product reviews exploiting the BERTopic and BART models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yu-Jin Ha</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gun-Woo Kim</string-name>
          <email>gunwoo.kim@gnu.ac.kr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of AI Convergence Engineering, Gyeongsang National University</institution>
          ,
          <addr-line>Jinju</addr-line>
          ,
          <country country="KR">Korea</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Computer Science/Department of AI Convergence Engineering, Gyeongsang National University</institution>
          ,
          <addr-line>Jinju</addr-line>
          ,
          <country country="KR">Korea</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Recently, there has been a growing trend of consumers seeking product information from video platforms such as YouTube. However, when viewing multiple review videos about the same product, viewers often encounter redundant information, resulting in wasted time. To address these issues, we use BERTopic to eliminate repetitive video content and address the problem of missing subtopics, which has been a limitation of traditional video summarization methods. Subsequently, the topic-aware video contents are summarized using the BART model. The ROUGE metric was used to evaluate the model proposed in this paper, and the experimental results showed improved results compared to previous research.</p>
      </abstract>
      <kwd-group>
        <kwd>Multi-video summarization</kwd>
        <kwd>Product review summarization</kwd>
        <kwd>BERTopic</kwd>
        <kwd>BART</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The YouTube platform is increasingly used as a main source of information, and this holds for product reviews as well. Product reviews are a way for buyers to learn about the pros and cons of a product and its features before making a purchase. If a product review is long, it takes a lot of time to read through the entire review. For this reason, there is a need for product review summaries that minimize the length of product reviews while preserving their content [12], [13]. With the growth of product review videos on the YouTube platform, the number of review videos for a single product has also increased. Multi-video product review summaries provide product reviews from different perspectives for buyers who want to purchase a product, and help sellers or companies improve the quality of their products and services [14].</p>
      <p>In this paper, we aim to identify the semantic meanings and relationships of product review
videos through subtitle summaries and provide product reviews from different perspectives
through multi-video product review summaries.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
      <p>Extractive summarization produces a summary composed of sentences, phrases, and words extracted from the source document. Abstractive (generative) summarization produces a summary using words, sentences, and phrases that are not necessarily present in the original document [15]. In this paper, we use extractive summarization to preserve the review video creator's intent: the summary is built from sentences taken from the original document, which increases the reliability of the review.</p>
      <p>Topic modeling is a statistical technique used in machine learning and natural language processing to identify the topics of a set of documents. It analyzes large amounts of textual data to help uncover the embedded semantic structure of the text and summarize its key content [16]. In this paper, we use topic modeling to identify the various embedded topics and subtopics.</p>
      <p>Ansamma et al. (2017) [17] performed multi-document extractive summarization using Latent Semantic Analysis (LSA) and Non-negative Matrix Factorization (NMF), representing sentences as word vectors and using objective functions that maximize relevance and minimize redundancy. Alrumiah et al. (2022) [18] summarized the subtitles of single lecture videos using LDA: by generating a keyword list with LDA and extracting sentences from the original document that contain at least one word from the keyword list, they improved summarization performance in terms of length and quality compared to previous studies. Miller et al. (2019) [19] summarized single lecture videos using BERT. This was the first study to apply a large language model (LLM) to video summarization: BERT is used to embed the input sentences, and the n sentences closest to the k-means centroids are then selected to summarize the lecture video subtitles. Performance was compared against TextRank, but because no gold-standard summary was available, it was not evaluated with standard evaluation metrics.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Method</title>
      <sec id="sec-3-1">
        <title>3.1. Preprocessing</title>
        <p>In the data preprocessing stage, we remove emoticons, onomatopoeic words, and mimetic words, which are unnecessary for this summary. For onomatopoeic and mimetic expressions consisting of repeated consonants and vowels, we kept only a single occurrence. Entries with missing values or with subtitles in a language other than Korean were deleted. In addition, we removed stop words, which add no meaningful information to a sentence, using the list of Korean stop words provided by NLTK.</p>
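As an illustration, the preprocessing steps above could be sketched as follows; the emoticon pattern, the repeated-jamo pattern, and the tiny stop-word list are placeholders (the paper uses NLTK's Korean stop-word list), not the authors' actual code:

```python
import re

# Illustrative stop words; the paper uses the Korean stop-word list from NLTK.
KO_STOPWORDS = {"이", "그", "저", "것", "수", "등"}

# Runs of a single repeated jamo (e.g. "ㅋㅋㅋ", "ㅠㅠ") are collapsed to one
# occurrence, and simple emoticons are stripped entirely.
JAMO_RUN = re.compile(r"([ㄱ-ㅎㅏ-ㅣ])\1+")
EMOTICON = re.compile(r"[:;]-?[)(DP]")

def preprocess(subtitle: str) -> str:
    text = EMOTICON.sub("", subtitle)
    text = JAMO_RUN.sub(r"\1", text)  # keep only one occurrence
    tokens = [t for t in text.split() if t not in KO_STOPWORDS]
    return " ".join(tokens)
```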
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Topic Modeling</title>
        <p>BERTopic is a popular BERT-based algorithm for topic modeling. It combines transformer embeddings with c-TF-IDF to generate dense clusters while retaining the important words of each topic. BERTopic proceeds in three steps: document embedding, document clustering, and topic representation. Each step is designed to be independent and can be customized according to the purpose [20].</p>
        <p>When comparing topic modeling algorithms for this paper, we found that BERTopic performs well on both Topic Coherence and Topic Diversity, meaning that it generates diverse topics while maintaining the coherence of each topic. In addition, BERTopic has the advantage of requiring relatively little time even as the vocabulary grows. For these reasons, we chose BERTopic.</p>
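The c-TF-IDF weighting that BERTopic uses in its topic representation step can be sketched in a few lines. This follows the formula from Grootendorst (2022), W(t, c) = tf(t, c) · log(1 + A / f(t)), where A is the average number of words per class and f(t) is the term's frequency over all classes; the function and its inputs are illustrative, not the paper's code:

```python
import math
from collections import Counter

def ctfidf(class_docs):
    """Class-based TF-IDF: each topic's documents are concatenated into one
    class document, then W(t, c) = tf(t, c) * log(1 + A / f(t))."""
    counts = {c: Counter(doc.split()) for c, doc in class_docs.items()}
    total = Counter()                       # f(t): term frequency over all classes
    for cnt in counts.values():
        total.update(cnt)
    # A: average number of words per class
    avg_words = sum(sum(c.values()) for c in counts.values()) / len(counts)
    return {
        c: {t: tf * math.log(1 + avg_words / total[t]) for t, tf in cnt.items()}
        for c, cnt in counts.items()
    }
```

Words that are frequent within one topic but rare across topics get the highest weights, which is what keeps the important words in each topic's representation.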
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Summarization</title>
        <p>BART is a denoising autoencoder based on the Transformer architecture [21]. Corrupted text is fed to the encoder, and the representation learned by the encoder is passed to the decoder, which reconstructs the original, uncorrupted text. BART thus combines natural language understanding and generation in one model and performs well on summarization tasks.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Video Making</title>
        <p>This is the process of converting the summarized text into a video. For each summarized sentence, we extract the index of the video in which the sentence appears and the timeline of the corresponding segment. The video is then cut so that it contains the summarized content. This is repeated for every summarized sentence, and the resulting clips are joined together into a single video. Figure 3 shows frames captured from a summarized video created through this process.</p>
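The cutting step can be sketched as follows; `summary_entries` is a hypothetical list of (video index, start, end) tuples taken from the timeline column of the dataset, and the actual cutting and joining would then be done with any video-editing library:

```python
def build_clip_list(summary_entries):
    """Group consecutive sentences from the same video and merge
    overlapping time spans, so each segment is cut only once."""
    clips = []
    for video_id, start, end in summary_entries:
        if clips and clips[-1][0] == video_id and start <= clips[-1][2]:
            # Same video and overlapping/adjacent span: extend the last clip.
            prev = clips[-1]
            clips[-1] = (video_id, prev[1], max(prev[2], end))
        else:
            clips.append((video_id, start, end))
    return clips
```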
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <sec id="sec-4-1">
        <title>4.1. Dataset</title>
        <p>No existing Korean product review dataset for multi-video summarization was available, so we created one for this paper. The dataset consists of 271 Korean product review videos covering 22 different products, such as cell phones, game consoles, and robot vacuum cleaners. Figure 4 below shows the shape of the dataset. Each record contains the product name, video title, video link, video subtitles, the video timestamps matching the subtitles, and the answer data. The answer data was generated randomly.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Evaluation metric</title>
        <p>For evaluation metrics, we used Topic Coherence and Topic Diversity to evaluate topic modeling, and ROUGE for summary evaluation.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.2.1. Topic Coherence</title>
        <p>Topic Coherence is evaluated with Normalized Pointwise Mutual Information (NPMI) [22], following the method proposed by Röder et al. (2015) [23]. The resulting value lies between -1 and 1; the closer it is to 1, the more coherent the topic.</p>
        <p>$\vec{v}(W') = \Big\{ \sum_{w_j \in W'} \mathrm{NPMI}(w_i, w_j) \Big\}_{i=1,\dots,|W|}$ (1)</p>
        <p>Let $W'$ be a subset of the words in the corpus. $\vec{v}(W')$ computes, for each word $w_i$, its similarity to the words in $W'$ and represents the result as a vector.</p>
        <p>$\mathrm{NPMI}(w_i, w_j) = \left( \dfrac{\log \dfrac{P(w_i, w_j) + \epsilon}{P(w_i) \cdot P(w_j)}}{-\log\left(P(w_i, w_j) + \epsilon\right)} \right)^{\gamma}$ (2)</p>
        <p>NPMI is a metric that measures the relevance of two words $w_i$, $w_j$. $P(w_i, w_j)$ is the probability that the two words co-occur, $P(w_i)$ is the probability that $w_i$ occurs, and $P(w_j)$ is the probability that $w_j$ occurs; $\epsilon$ is a small constant that avoids taking the logarithm of zero. $\gamma$ is an exponent that weights the NPMI values exponentially.</p>
        <p>$\Phi(\vec{u}, \vec{w}) = \dfrac{\sum_{i=1}^{|W|} u_i \cdot w_i}{\|\vec{u}\|_2 \cdot \|\vec{w}\|_2}$ (3)</p>
        <p>$|W|$ is the dimension of the vectors, and $\|\vec{u}\|_2$ and $\|\vec{w}\|_2$ are the L2 norms of $\vec{u}$ and $\vec{w}$. $\Phi(\vec{u}, \vec{w})$ computes the similarity between the two vectors by dividing their dot product by the product of their L2 norms.</p>
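Given the probability estimates, the NPMI of Eq. (2) with $\gamma = 1$ reduces to a one-line computation. This sketch assumes the word probabilities have already been counted from a reference corpus:

```python
import math

def npmi(p_ij: float, p_i: float, p_j: float, eps: float = 1e-12) -> float:
    """Normalized pointwise mutual information of two words, in [-1, 1].
    p_ij is the co-occurrence probability; p_i and p_j are the
    occurrence probabilities; eps avoids log(0)."""
    pmi = math.log((p_ij + eps) / (p_i * p_j))
    return pmi / (-math.log(p_ij + eps))
```

Independent words (p_ij = p_i · p_j) score near 0, and words that only ever occur together score near 1.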
      </sec>
      <sec id="sec-4-4">
        <title>4.2.2. Topic Diversity / 4.2.3. ROUGE</title>
        <p>Dieng et al. (2020) [24] proposed a metric that measures the diversity of the words appearing in the topics of a topic model. The metric is the proportion of unique words among the top words of all topics and ranges from 0 to 1: values near 0 indicate redundant topics and values near 1 indicate diverse topics.</p>
        <p>$\mathrm{TD} = \dfrac{\big|\bigcup_{k} W_k^{\mathrm{top}}\big|}{\mathrm{topk} \cdot |\mathrm{topics}|}$ (4)</p>
        <p>$\big|\bigcup_{k} W_k^{\mathrm{top}}\big|$ is the size of the union of the unique top words over all topics, $\mathrm{topk}$ is the number of top words selected per topic, and $|\mathrm{topics}|$ is the total number of topics.</p>
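The topic diversity of Eq. (4) can be computed directly from the topic-word lists; the default of 25 top words per topic here is only an illustrative choice:

```python
def topic_diversity(topics, topk: int = 25) -> float:
    """Proportion of unique words among the top-k words of all topics
    (Dieng et al., 2020); 1.0 means fully diverse topics, 0 redundant ones."""
    top_words = [w for topic in topics for w in topic[:topk]]
    return len(set(top_words)) / len(top_words)
```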
        <p>$\mathrm{ROUGE\text{-}N} = \dfrac{\sum_{S \in \mathrm{References}} \sum_{gram_n \in S} \mathrm{Count_{match}}(gram_n)}{\sum_{S \in \mathrm{References}} \sum_{gram_n \in S} \mathrm{Count}(gram_n)}$ (5)</p>
        <p>ROUGE (Recall-Oriented Understudy for Gisting Evaluation) (Lin, 2004) is a co-occurrence statistical measure over N-grams and is defined as in Eq. (5). $\mathrm{Count_{match}}(gram_n)$ is the number of N-grams that are included in both the summarized result and the reference summary.</p>
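For a single reference, ROUGE-N might be sketched as follows; this computes the recall of Eq. (5) together with the precision and F-measure variants that the evaluation reports:

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1):
    """ROUGE-N recall, precision, and F1 from N-gram overlap counts."""
    def ngrams(text):
        toks = text.split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    match = sum((cand & ref).values())           # Count_match(gram_n)
    recall = match / max(sum(ref.values()), 1)   # denominator: Count(gram_n)
    precision = match / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return recall, precision, f1
```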
        <p>$\mathrm{Count}(gram_n)$ is the number of N-grams included in the reference summary. These ROUGE metrics can produce three scores: Recall, Precision, and F-measure [25].</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.3. BERTopic</title>
        <p>Because BERTopic's performance depends on the embedding model used, we compared embedding models using TC (Topic Coherence) and TD (Topic Diversity). As shown in Table 1, the multilingual SBERT model distiluse-base-multilingual-cased-v1 recorded the best score, so we used it as the embedding model in the topic model comparison.</p>
        <p>We calculated the TC and TD of three models: the BERTopic model with distiluse-base-multilingual-cased-v1 as its embedding model, the Combined Topic Models (CTM) model, and the LDA model. BERTopic showed the best performance, as shown in Table 2, indicating that it generates topics that are diverse across topics yet coherent within each topic.</p>
      </sec>
      <sec id="sec-4-6">
        <title>4.4. BART</title>
        <p>The summarization step uses BART to generate a summary for each topic produced by BERTopic. After sorting the sentences by cosine similarity, we extracted the top n sentences to summarize each topic; in this paper, we set n to 3 arbitrarily. We compared performance against TextRank [26] and Bert-extractive-summarizer [19]; the results are shown in Table 3. TextRank and Bert-extractive-summarizer achieved higher precision, but the proposed model achieved the best recall and F1-measure, indicating the most balanced performance overall.</p>
      </sec>
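The top-n sentence selection in the summarization step might look like the following sketch. The sentence embeddings and the topic vector against which similarity is computed are assumptions here, since the text does not specify the target of the cosine similarity:

```python
import math

def top_n_sentences(sent_vecs, topic_vec, n=3):
    """Rank sentence embeddings by cosine similarity to a topic vector
    and return the indices of the top n sentences (n = 3 in the paper)."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)  # assumes non-zero vectors
    ranked = sorted(range(len(sent_vecs)),
                    key=lambda i: cos(sent_vecs[i], topic_vec),
                    reverse=True)
    return ranked[:n]
```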
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this paper, we reduce data duplication by grouping identical or similar information into one topic through topic modeling. We compared models such as CTM and LDA and selected BERTopic, which achieved the best topic diversity and topic coherence, to generate coherent and diverse topics. This process reduces the time wasted searching for information, provides users with multiple perspectives on a product, and helps avoid the information loss of long videos. For summarization we used BART. Compared to previous approaches such as TextRank and Bert-extractive-summarizer, the proposed method achieved the best ROUGE performance in recall and F1-measure.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (NRF2021R1G1A1006381).</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[17] John, Ansamma, P. S. Premjith, and M. Wilscy. "Extractive multi-document summarization using population-based multicriteria optimization." Expert Systems with Applications 86 (2017): 385-397.</p>
      <p>[18] S. S. Alrumiah and A. A. Al-Shargabi. "Educational videos subtitles' summarization using latent dirichlet allocation and length enhancement." Computers, Materials &amp; Continua 70.3 (2022): 6205-6221.</p>
      <p>[19] Miller, Derek. "Leveraging BERT for extractive text summarization on lectures." arXiv preprint arXiv:1906.04165 (2019).</p>
      <p>[20] Grootendorst, Maarten. "BERTopic: Neural topic modeling with a class-based TF-IDF procedure." arXiv preprint arXiv:2203.05794 (2022).</p>
      <p>[21] Lewis, Mike, et al. "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension." arXiv preprint arXiv:1910.13461 (2019).</p>
      <p>[22] Bouma, Gerlof. "Normalized (pointwise) mutual information in collocation extraction." Proceedings of GSCL 30 (2009): 31-40.</p>
      <p>[23] Röder, Michael, Andreas Both, and Alexander Hinneburg. "Exploring the space of topic coherence measures." Proceedings of the Eighth ACM International Conference on Web Search and Data Mining. 2015.</p>
      <p>[24] Dieng, Adji B., Francisco J. R. Ruiz, and David M. Blei. "Topic modeling in embedding spaces." Transactions of the Association for Computational Linguistics 8 (2020): 439-453.</p>
      <p>[25] Lin, Chin-Yew. "ROUGE: A package for automatic evaluation of summaries." Text Summarization Branches Out. 2004.</p>
      <p>[26] Mihalcea, Rada, and Paul Tarau. "TextRank: Bringing order into text." Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. 2004.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Parabhoi</surname>
          </string-name>
          ,
          <string-name>
            <surname>Lambodara</surname>
          </string-name>
          , et al.
          <article-title>"YouTube as a source of information during the Covid-19 pandemic: a content analysis of YouTube videos published during January to March</article-title>
          <year>2020</year>
          .
          <article-title>" BMC Medical Informatics and Decision Making 21.1 (</article-title>
          <year>2021</year>
          ):
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Khatri</surname>
          </string-name>
          ,
          <string-name>
            <surname>Priyanka</surname>
          </string-name>
          , et al.
          <article-title>"YouTube as source of information on 2019 novel coronavirus outbreak: a cross sectional study of English and Mandarin content." Travel medicine</article-title>
          and
          <source>infectious disease 35</source>
          (
          <year>2020</year>
          ):
          <fpage>101636</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Vazarkar</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Manjusha</surname>
          </string-name>
          , “
          <article-title>Video to text summarization system using multimodal LDA,”</article-title>
          <source>Journal of Seybold</source>
          , vol.
          <volume>15</volume>
          , no.
          <issue>9</issue>
          , pp.
          <fpage>3517</fpage>
          -
          <lpage>3523</lpage>
          ,
          <year>2020</year>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. Z.</given-names>
            <surname>Li</surname>
          </string-name>
          , “
          <article-title>Online content-aware video condensation,”</article-title>
          <source>in IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y. J.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Grauman</surname>
          </string-name>
          , “
          <article-title>Discovering important people and objects for egocentric video summarization,”</article-title>
          <source>in IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Matsushita</surname>
          </string-name>
          , and
          <string-name>
            <given-names>X.</given-names>
            <surname>Tang</surname>
          </string-name>
          , “
          <article-title>Space-time video montage,”</article-title>
          <source>in IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Pritch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rav-Acha</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Peleg</surname>
          </string-name>
          , “
          <article-title>Nonchronological video synopsis and indexing</article-title>
          ,
          <source>” IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          , pp.
          <fpage>1971</fpage>
          -
          <lpage>1984</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lu</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Grauman</surname>
          </string-name>
          , “
          <article-title>Story-driven summarization for egocentric video</article-title>
          ,”
          <source>in IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V. B.</given-names>
            <surname>Aswin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Javed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Parihar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Aswanth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. R.</given-names>
            <surname>Druval</surname>
          </string-name>
          et al.,
          <article-title>“NLP-driven ensemble-based automatic subtitle generation and semantic video summarization technique,”</article-title>
          <source>in Advances in Intelligent Systems &amp; Computing</source>
          , vol.
          <volume>1133</volume>
          , Singapore: Springer, pp.
          <fpage>3</fpage>
          -
          <lpage>13</lpage>
          ,
          <year>2021</year>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <surname>Bo</surname>
          </string-name>
          , et al.
          <article-title>"Video caption detection and extraction using temporal information</article-title>
          .
          <source>" Proceedings 2003 International Conference on Image Processing (Cat. No. 03CH37429)</source>
          . Vol.
          <volume>1</volume>
          . IEEE,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Narwal</surname>
            , Pulkit,
            <given-names>Neelam</given-names>
          </string-name>
          <string-name>
            <surname>Duhan</surname>
          </string-name>
          , and Komal Kumar Bhatia.
          <article-title>"A comprehensive survey and mathematical insights towards video summarization</article-title>
          .
          <source>" Journal of Visual Communication and Image Representation</source>
          <volume>89</volume>
          (
          <year>2022</year>
          ):
          <fpage>103670</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Pawar</surname>
          </string-name>
          ,
          <string-name>
            <surname>Priya</surname>
          </string-name>
          , et al.
          <article-title>"Online product review summarization." 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS)</article-title>
          . IEEE,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Boorugu</surname>
            , Ravali, and
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Ramesh</surname>
          </string-name>
          .
          <article-title>"A survey on NLP based text summarization for summarizing product reviews." 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA)</article-title>
          . IEEE,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Zhao</surname>
            , Qingjuan,
            <given-names>Jianwei</given-names>
          </string-name>
          <string-name>
            <surname>Niu</surname>
          </string-name>
          , and Xuefeng Liu.
          <article-title>"ALS-MRS: Incorporating aspect-level sentiment for abstractive multi-review summarization." Knowledge-Based Systems (</article-title>
          <year>2022</year>
          ):
          <fpage>109942</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Widyassari</surname>
            ,
            <given-names>Adhika</given-names>
          </string-name>
          <string-name>
            <surname>Pramita</surname>
          </string-name>
          , et al.
          <article-title>"Review of automatic text summarization techniques &amp; methods."</article-title>
          <source>Journal of King Saud University-Computer and Information Sciences 34.4</source>
          (
          <year>2022</year>
          ):
          <fpage>1029</fpage>
          -
          <lpage>1046</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Vayansky</surname>
          </string-name>
          , Ike, and Sathish AP Kumar.
          <article-title>"A review of topic modeling methods</article-title>
          .
          <source>" Information Systems</source>
          <volume>94</volume>
          (
          <year>2020</year>
          ):
          <fpage>101582</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>