<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Online Video Summarization with the Kohonen SOM in Real Time</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Simon Kuznets Kharkiv National University of Economics</institution>
          ,
          <addr-line>9a Nauka Ave., Kharkiv 61166</addr-line>
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <abstract>
        <p>In this paper, we propose an algorithm for the automatic creation of a video summary. It is based on Kohonen's Self-Organizing Map as a method for training and clustering of frame features in online mode. The decision about whether a frame should be included in the summary depends on the stability of the last sequential clustering results. A three-stage matching of images between the automatic summary and the corresponding user summary is proposed and tested. The Open Video and SumMe datasets were used for accuracy and performance comparison. It is shown that the proposed approach can achieve real-time summarization combined with its online properties, without the requirement to see the whole video. The accuracy (measured by F1 scores) of the proposed approach can compete with batch processing methods. We also compared the performance to existing state-of-the-art methods of online real-time processing.</p>
      </abstract>
      <kwd-group>
        <kwd>video summarization</kwd>
        <kwd>keyframe</kwd>
        <kwd>self-organizing map</kwd>
        <kwd>clustering</kwd>
        <kwd>image features</kwd>
        <kwd>matching</kwd>
        <kwd>summary</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction and the related work</title>
      <p>In recent years the tremendous development of information, computer and
communication technologies has made it impossible for humans to process all available and
newly appearing data themselves. This is especially true for video content: every minute 300-500
hours of video (according to different sources) are uploaded to YouTube, and users
watch over a billion hours every day.</p>
      <p>
        Video summarization is a process of selecting a specific subset of keyframes
(still images) or keyshots (short sequences of frames) from a video stream which
preserves the main idea of the video [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. A summary should keep the important frames of the
initial video, which creates the core summarization challenge: the same frames may be
important for some users and unimportant for others, making the summary of a video a
rather subjective notion.
      </p>
      <p>
        Many studies [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2–5</xref>
        ] divide all summarization methods into two classes:
unsupervised [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6–8</xref>
        ] and supervised [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref9">9–12</xref>
        ]. A brief description of some known
methods is below.
      </p>
      <p>Copyright © 2020 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>
        Some researchers [
        <xref ref-type="bibr" rid="ref13 ref14">13–15</xref>
        ] used self-organizing maps or closely related approaches.
Hierarchical Growing Cell Structures (GCS) methods are proposed in [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ] as an
extension of Kohonen's Self-Organizing Maps (SOM) that allows building a flexible
structure without knowing the number of classes a priori. A graphical user interface to
investigate the construction of two-dimensional SOMs is proposed in [15].
Unfortunately, these studies do not report significant modeling results on a dataset of at
least medium size.
      </p>
      <p>
        One of the most popular summarization approaches is VSUMM [
        <xref ref-type="bibr" rid="ref7">7, 16</xref>
        ]. The authors
proposed a method based on the extraction of color features from video frames and
unsupervised classification. Additionally, a new measure to compare automatic and
user-defined summaries (Comparison of User Summaries, CUS) was presented.
      </p>
      <p>
        A variety of other video summarization methods have been proposed, based on generative adversarial
networks [17], long short-term memory networks [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ], attention mechanisms [18, 19],
deep learning approaches [20], the use of text annotations together with visual features
[21], and fuzzy-based incremental clustering [22].
      </p>
      <p>
        Paper [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] describes the idea of generating video summaries in online mode,
immediately and without seeing the entire video, in quasi real time. This method includes
building a dictionary via group sparse coding from some initial video frames,
followed by an attempt to reconstruct unseen frames. If the reconstruction error is
large enough, the dictionary is updated and the current frame is added to the summary.
      </p>
      <p>Ideas proposed in [23] extend dictionary learning with the prediction of
interestingness using global camera motion analysis and colorfulness. This approach appears
to be designed mainly for real-time rather than online processing.</p>
      <p>
        The implementations of the abovementioned approaches [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and [23] do not appear to
be publicly available, nor do the test videos for [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>The contributions of the paper include:</p>
      <p>- a video summarization method based on self-organizing maps that can work in
online mode (without seeing the whole video) in real time;</p>
      <p>- a new three-stage matching of two sets of frames that includes both keypoint and
raw image pixel comparison;</p>
      <p>- selection of the keyframes based on Kohonen's SOM clustering stability;</p>
      <p>- a quality assessment of the proposed online summary generation
method using a dataset that was also created by volunteers in an online fashion.</p>
    </sec>
    <sec id="sec-2">
      <title>Problem statement</title>
      <p>Formally, the problem is to create the summary of a video as a set of still
keyframes. We want to generate summary frames on the fly, in online mode, without
seeing the whole video. The summarization method should be fast enough to satisfy the real-time
processing requirement. We want to generate a summary that is close to the
corresponding one created by a human under the same online conditions.</p>
    </sec>
    <sec id="sec-3">
      <title>Self-organizing maps</title>
      <p>Kohonen's self-organizing map (SOM) [24] is one of the most popular neural network
approaches to unsupervised clustering. This type of network preserves the topology
between input and output values and allows mapping a multidimensional input into
low-dimensional (typically 1D or 2D) outputs.</p>
      <p>An important property of the Kohonen network is that it is capable of online data
processing, when input samples come one by one, followed by an immediate clustering
(classification) decision.</p>
      <p>Training of the Kohonen SOM implies updating all weights after
processing each training sample one by one (online training) and may be done
according to the stages below. Let n be the quantity of outputs (known a priori) and let m
denote the quantity of features.</p>
      <p>1. Initialize the weights of each neuron with small random values.
2. Select an input feature vector x.</p>
      <p>3. Train on the feature vector while its error is bigger than the training epsilon
(0.0001):</p>
      <p>3.1 Find the closest neuron (Best Matching Unit, BMU) δ in terms of some
distance, e.g. Euclidean.</p>
      <p>3.2 Update the weights w_ij of all n neurons according to (1):</p>
      <p>w_ij ← w_ij + α(t)·h(x_i, δ, t)·(x_i − w_ij),   (1)
where α(t) = 0.1·e^(−0.001·t) is the learning rate, t is the training step,
h(x_i, δ, t) = e^(−d²/σ(t)) is the value of the neighborhood function between the best matching unit δ
and the current neuron i at training step t, d = ‖r_δ − r_xi‖ is the distance
between the coordinates of the best matching unit r_δ and the current neuron r_xi,
σ(t) = n·2^(−0.002·t) is the radius of the Gauss function, and x is the current input vector.</p>
      <p>4. Repeat from step 2 for all feature vectors, increasing t each time.</p>
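      <p>Below is a minimal Python sketch of one online training step implementing update rule (1); the numeric constants follow the values given above, while the function name, the NumPy array layout and the in-place update are illustrative assumptions rather than the paper's implementation.</p>
      <preformat>
import numpy as np

def som_train_step(weights, x, t, n_clusters):
    """One online training step of a 1-D Kohonen SOM, following update rule (1).

    weights: (n_clusters, m) float array of neuron weights,
    x: (m,) feature vector of the current frame,
    t: global training step counter.
    """
    alpha = 0.1 * np.exp(-0.001 * t)               # learning rate alpha(t)
    sigma = n_clusters * 2.0 ** (-0.002 * t)       # neighborhood radius sigma(t)

    # Best Matching Unit: the neuron closest to x in Euclidean distance
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))

    # on a 1-D map the coordinate distance is just the index difference
    d = np.abs(np.arange(n_clusters) - bmu)
    h = np.exp(-(d ** 2) / sigma)                  # Gaussian neighborhood h(x, delta, t)

    weights += alpha * h[:, None] * (x - weights)  # update all n neurons at once
    return bmu
      </preformat>
      <p>In an outer loop this step would be repeated for the current feature vector until its error to the BMU drops below 0.0001, with t increased for every processed vector, as in steps 3 and 4 above.</p>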
      <p>One of the most important parameters to be set up for the usage of the SOM is the
size of the one-dimensional or two-dimensional map. In this work, we used a
one-dimensional map with 20 possible clusters to reach the required real-time
performance. Our experiments showed that a bigger quantity of clusters requires more
training time, while a smaller quantity does not capture the difference between frames well enough.
A successful choice of this parameter allows balancing the quality and the
performance of the proposed method.</p>
      <p>The training stage of the SOM continues until all clusters are trained and while the
quantity of already processed feature vectors is less than 1000.</p>
    </sec>
    <sec id="sec-4">
      <title>Features and keyframes selection</title>
      <sec id="sec-4-1">
        <title>Image features</title>
        <p>The selection of features is an important step to represent the specific properties of
each frame. From our point of view, the best choice is features which can be
calculated quickly and/or in parallel to preserve real-time processing.</p>
        <p>We selected common color features obtained from non-overlapping
windows of an image. The averaged R, G and B color components in the range [0, 1] are
used as the features from a single window.</p>
        <p>We used square windows of size w = 16 or w = 32 in the experiments and rescaled the
images accordingly to preserve the same length of the feature vector. Building
the feature vector for the entire image was performed in two parallel
independent threads whose results were merged at the end.</p>
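        <p>A possible sketch of this feature extraction in Python/NumPy is shown below; it assumes frames are already decoded as H×W×3 RGB arrays and omits the two-thread split for brevity, so the function name and the exact cropping are illustrative assumptions only.</p>
        <preformat>
import numpy as np

def frame_features(frame_rgb, w=32):
    """Averaged R, G, B in [0, 1] over non-overlapping w x w windows (sketch)."""
    h_img, w_img = frame_rgb.shape[0], frame_rgb.shape[1]
    # crop so that both sides are divisible by the window size
    h_img -= h_img % w
    w_img -= w_img % w
    img = frame_rgb[:h_img, :w_img].astype(np.float32) / 255.0

    # group pixels into (rows, w, cols, w, 3) blocks and average every block
    blocks = img.reshape(h_img // w, w, w_img // w, w, 3)
    feats = blocks.mean(axis=(1, 3))               # shape (rows, cols, 3)
    return feats.reshape(-1)                       # flattened feature vector
        </preformat>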
      </sec>
      <sec id="sec-4-2">
        <title>The selection of keyframes</title>
        <p>We define a keyframe as a frame that varies only slightly during some quantity T of
previously examined frames in a video stream. We define the quantitative measure of
this variability by a two-step procedure.</p>
        <p>The first step is the clustering of a frame by the SOM. We cluster the video
stream frame by frame, counting the quantity of frames Q belonging to the same
cluster in a row. When this quantity becomes bigger than a predefined threshold T0,
we consider this part of the video stream stable enough and select a keyframe
candidate from the middle of the run, with index k* = i − Q/2, where i is the number of the frame
being processed.</p>
        <p>At the next step we compare the keyframe candidate k* with the previously added frame,
in order to avoid adding a frame that is very similar, belonging to the same cluster but
accidentally having some frame from another cluster in between. If the candidate frame k*
and the previously added frame k*_prev are similar, we replace the previous frame with the frame
at the updated index k* = (i − Q/2 − k*_prev)/2 + k*_prev. If they are not similar, the
candidate frame k* is confirmed as a keyframe.</p>
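        <p>A hedged sketch of this stability-based selection is given below; the helper name, the bookkeeping dictionary and the decision to reset the run counter after a keyframe is emitted are illustrative assumptions, while the candidate indices follow the formulas above (T0 = 42, metric (2) for the similarity check).</p>
        <preformat>
def update_summary(cluster_id, i, state, is_similar, T0=42):
    """Stability-based keyframe selection (sketch).

    cluster_id: SOM cluster of frame i,
    state: dict with keys "last_cluster", "run_len", "last_key", "summary",
    is_similar: callable comparing two frame indices with metric (2).
    """
    if cluster_id == state.get("last_cluster"):
        state["run_len"] += 1
    else:
        state["last_cluster"] = cluster_id
        state["run_len"] = 1

    if state["run_len"] > T0:
        k = i - state["run_len"] // 2              # candidate from the middle of the run
        k_prev = state.get("last_key")
        if k_prev is not None and is_similar(k, k_prev):
            # replace the previous keyframe instead of adding a near-duplicate
            state["summary"][-1] = (k - k_prev) // 2 + k_prev
        else:
            state["summary"].append(k)
        state["last_key"] = state["summary"][-1]
        state["run_len"] = 0                       # assumption: start counting a new stable run
    return state
        </preformat>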
        <p>Measuring the difference between frames k1 and k2 is based on the average
difference between the corresponding feature vectors f1 and f2 in the LAB color space
(c1 and c2):</p>
        <p>d = (1/m)·Σ_{i=1..m} ΔE00(c1_i, c2_i),   (2)
where ΔE00(c1_i, c2_i) is the CIEDE2000 difference between two colors [25]. We
claim images to be similar if d ≤ 1.5, where 1.5 is the just noticeable difference
(JND) for this metric according to [26].</p>
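        <p>A small sketch of this comparison, assuming scikit-image is available for the color conversion and the CIEDE2000 implementation, could look as follows; the reshaping of the flat RGB feature vectors into pseudo-images is our own illustrative choice.</p>
        <preformat>
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def frames_similar(feat1, feat2, jnd=1.5):
    """Average CIEDE2000 distance (2) between two RGB feature vectors (sketch)."""
    # flat feature vectors with values in [0, 1] become (m, 1, 3) pseudo-images
    lab1 = rgb2lab(feat1.reshape(-1, 1, 3))
    lab2 = rgb2lab(feat2.reshape(-1, 1, 3))
    d = float(np.mean(deltaE_ciede2000(lab1, lab2)))
    return jnd >= d                                # similar if d stays within the JND
        </preformat>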
        <p>So, we present the entire scheme of the suggested summarization method in Fig. 1.
We denote by T0 = 42 the quantity of frames required to be classified into the same
class in a row by the SOM.
In order to check the quality of summarization with the suggested approach we need to
compare two sets of images: the first one contains images from our automatic
summary, the second one (ground truth or etalon) contains images proposed by humans.</p>
        <p>We implemented the match between two sets in three consecutive stages.</p>
        <p>The first stage is a rough estimate of candidate image matches with the Fast
Retina Keypoint (FREAK, [27]) descriptor. FREAK is built on the FAST keypoint
detector [28] and requires an initial threshold t to be set up as the required difference between a
central pixel and its surroundings to identify the central pixel as a corner. A bigger value of t
leads to a smaller quantity of detected keypoints. We found t = 20 to be a good
default value for the experiments.</p>
        <p>FREAK descriptors contain 512 bits; we compare them in cascades of 128 bits each,
as proposed in [27]. The comparison of bit chains requires another threshold F0, which
allows some differences to appear, as identical bit arrays are rare even for close images.
If corresponding bit chains in the descriptors have more differences than
the threshold, the descriptors are considered different and the comparison stops. In
our experiments we found the threshold F0 = 32 to be a good choice (25% of bit values
may differ between descriptors).</p>
        <p>We suggest that two images probably match if the overall quantity of matched
descriptors between them is greater than the quantity of non-matched ones.</p>
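        <p>The first matching stage could be sketched with OpenCV as below; FREAK_create lives in the opencv-contrib package (cv2.xfeatures2d), and the brute-force pairing of every descriptor of one image against all descriptors of the other is an assumption made for illustration, not a statement about the original implementation.</p>
        <preformat>
import cv2
import numpy as np

fast = cv2.FastFeatureDetector_create(threshold=20)    # FAST corner detector [28], t = 20
freak = cv2.xfeatures2d.FREAK_create()                  # FREAK descriptor [27]

def freak_descriptors(gray):
    """Detect FAST keypoints and compute 512-bit FREAK descriptors (64 bytes each)."""
    kp = fast.detect(gray, None)
    kp, desc = freak.compute(gray, kp)
    return desc

def descriptors_match(d1, d2, f0=32):
    """Cascaded comparison of two FREAK descriptors, 128 bits (16 bytes) per cascade."""
    for start in range(0, 64, 16):
        a, b = d1[start:start + 16], d2[start:start + 16]
        diff = int(np.unpackbits(np.bitwise_xor(a, b)).sum())
        if diff > f0:
            return False                                # too many differing bits, stop early
    return True

def images_probably_match(desc_a, desc_b, f0=32):
    """Image-level decision: more matched than non-matched descriptors."""
    matched = sum(any(descriptors_match(da, db, f0) for db in desc_b) for da in desc_a)
    return 2 * matched > len(desc_a)
        </preformat>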
        <p>When the list of matching candidates is ready, we compare each pair of candidates
using comparison (2). We perform these comparisons on feature vectors built on full-size
images with window size w = 32.</p>
        <p>The impact of different F0 values on the matching results is shown in Fig. 2. The
first row contains some frames from the user summary. The three rows below contain
some frames from an automatic summary and the corresponding F0 value (24, 32 or
40). As one can see from Fig. 2, we have one successful match for F0 = 24 and three
correct matches for F0 = 32. In the last case, when F0 = 40, we have five matches,
one of which is absolutely false (the third frame) and one of which is quite subjective to
judge (the last frame).</p>
        <p>The last stage is independent of the previous ones and is used for frames for
which the FREAK descriptor is not effective enough. We process the closest (in terms
of the Manhattan distance between frame numbers) frames from the automatic and user
summaries which were not matched before. Some frames may be skipped earlier
because they do not contain enough FREAK descriptors. We analyze them by
computing the difference between the average R, G and B values of the entire full-size image.
Images are assumed to be similar if the average difference of all three R, G and B components is
less than 20.</p>
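        <p>This fallback check is simple enough to show directly; whether the "average difference of all three R, G and B" means each channel separately or their mean is not fully specified above, so the per-channel interpretation below is an assumption.</p>
        <preformat>
import numpy as np

def rgb_fallback_match(img1, img2, tol=20):
    """Third-stage check: average R, G, B of the full-size images differ by less than 20."""
    m1 = img1.reshape(-1, 3).mean(axis=0)          # mean R, G, B of the first image
    m2 = img2.reshape(-1, 3).mean(axis=0)
    return bool(np.all(tol > np.abs(m1 - m2)))     # every channel difference below tol
        </preformat>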
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Measuring quality of the automatic summary</title>
      <p>
        We will estimate the quality of the suggested approach by measuring the CUS
values, as suggested in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and F1 scores, as described in detail in [18] and
applied in [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ] and others.
      </p>
      <p>
        Two summary quality metrics, CUS_A (accuracy) and CUS_E (error), are proposed
in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]:
      </p>
      <p>CUS_A = n_mAS / n_US,   CUS_E = n_m̄AS / n_US,   (3)
where n_mAS is the quantity of matching keyframes from the automatic summary,
n_m̄AS is the quantity of non-matching keyframes from the automatic summary, and
n_US is the total quantity of keyframes from the user summary.</p>
      <p>The F1 score is calculated as F1 = 2PR/(P + R), where P is the precision and R is the
recall, calculated from the true positives (quantity of matched frames), false positives
(quantity of frames which are present in the automatic summary but absent in the user one),
and false negatives (quantity of frames which are present in the user summary but missing in
the automatic one).</p>
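      <p>Under the common assumption that the matching described above is one-to-one (so false positives are the unmatched automatic keyframes and false negatives are the unmatched user keyframes), the scores (3) and F1 can be computed from three counts; the function below is only an illustrative sketch.</p>
      <preformat>
def summary_scores(n_matched, n_auto, n_user):
    """CUS_A, CUS_E (3) and F1 from matching counts (sketch).

    n_matched: automatic keyframes matched to the user summary,
    n_auto: total keyframes in the automatic summary,
    n_user: total keyframes in the user summary.
    """
    cus_a = n_matched / n_user                     # matched frames over user summary size
    cus_e = (n_auto - n_matched) / n_user          # non-matched frames over user summary size
    precision = n_matched / n_auto                 # TP / (TP + FP)
    recall = n_matched / n_user                    # TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall) if n_matched else 0.0
    return cus_a, cus_e, f1
      </preformat>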
    </sec>
    <sec id="sec-6">
      <title>Experiments</title>
      <p>
        SumMe [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and TVSum [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] are the most popular datasets in video summarization, but they are
not suitable for us. They have an importance value assigned to each frame as the result of a
user summary performed on the entire video. This makes it convenient to build etalon
summaries from these user summaries by simply solving a 0/1 knapsack problem for a fixed length
of the summary, typically 15%. In our case we make the decision in online mode, so
we naturally do not know the final length. Although we could extend an automatic
summary to the required length, the criteria of frame selection in batch and online
modes are very different.
      </p>
      <p>
        So, we used the dataset gathered from the Open Video dataset by [
        <xref ref-type="bibr" rid="ref7">7, 16</xref>
        ], which contains 50
color videos in MPEG-1 format (30 fps, 352x240 pixels), approximately 75 minutes in
total.
      </p>
      <p>User summaries were generated by 2 volunteers in online mode. We asked them to
select important (from their point of view, with no explanations required) frames while
watching the video sequences without sound. The modification of previously selected
summary frames was not allowed. Users made the decision about the importance of
a frame without knowing the content of the entire video.</p>
      <p>
        A specific feature of human behavior, called chronological bias, is described in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The
authors say that humans sometimes claim frames that appear earlier to be more
important simply because they come earlier chronologically, regardless of frame content. This effect leads to
the selection of very close frames as important ones. To avoid the influence of
chronological bias, we eliminate duplicates in user summaries by applying the
matching of frames described above.
      </p>
      <p>After we compared the automatic summaries produced by the proposed approach with the user
summaries created by volunteers in online mode and cleaned from duplicates, we obtained
the following average values: CUS_A = 0.58, CUS_E = 0.38, F1 = 0.61. A feature vector of
length 210 was built on images two times smaller than the original ones.</p>
      <p>
        We have recalculated these scores for OV [16, 29], DT [30] and VSUMM [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
methods using our matching approach; the results are presented in Table 1 and they are
close to the ones shown in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. As one can see, our approach has a much bigger CUS_E
value compared to the same value for our user summaries created in online mode
(CUS_E = 0.38). That means that the rules for the creation of user summaries (online,
without seeing the entire video, versus the selection of keyframes after watching the entire
video) matter.
      </p>
      <p>The total duration of the 50 videos [16] from the Open Video dataset is about 75
minutes; we built the summaries for all videos in 17 minutes. The average length of a
summary is 0.3% of the entire video. A one-dimensional map with 20 clusters was
used.</p>
      <p>
        We used SumMe [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] dataset to evaluate the processing speed. The duration of the 25
videos from the SumMe dataset is approximately 70 minutes; the processing time, the
quantity of keyframes in the summary and the length of the feature vector are shown in
Table 2.
      </p>
      <p>The first column contains the name of the video and its duration in minutes and seconds.
Values in the second column were calculated for frames downscaled by a factor of two
compared to the original, and in the third column by a factor of three. Values in the last column were
calculated with a specific downscaling factor for each video that limits the quantity of
features (the maximum allowed width of a frame is 200, the height is 140). An empty
value in the last column means that the result of processing corresponds to one of the
values from the previous columns.</p>
      <p>Values in the last column of Table 2 show the ratio between the best processing
time and the total duration of a video. The processing of four videos (Air_Force_One,
Eiffel Tower, Notre_Dame and Scuba) did not satisfy the real-time requirement. The length
of the summary decreases significantly with the decreasing feature vector length for
one video (Bearpark_climbing).</p>
      <p>We also tested the performance of the proposed online summarization method on
two long movies. The first one contains 260222 frames and lasts 3 hrs. 00 min. 53 sec.
(10853 seconds in total), the second one has 203882 frames and lasts 2 hrs. 21 min.
43 sec. (8503 seconds in total).</p>
      <p>The online summarization of the first movie took 3800 seconds using 336 features
per frame; the length of the summary was 705 frames. The second video required 2209
seconds to build a summary containing 585 frames using 252 features per frame.</p>
    </sec>
    <sec id="sec-7">
      <title>Conclusion</title>
      <p>We proposed an approach to generate a video summary in the form of keyframes,
which are still images. It is based on the clustering of separate frames in online mode,
without the analysis of the whole video, using Kohonen's self-organizing maps. A
frame from the video stream is selected as a summary frame if some quantity of
frames T0/2 before and after it were classified into the same cluster.</p>
      <p>The proposed method was tested on the Open Video dataset and its performance was
evaluated on the SumMe dataset. We showed that the quality is comparable to some batch
video summarization methods, and that the performance, combined with the flexibility in
the selection of the quantity of frame features, allows achieving real-time processing in
most cases. The quality of the suggested method is better when it is compared against user
summaries created in online mode.</p>
      <p>The investigation of the dependency of quality and performance on the size of the
SOM map may be a topic of future research.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Otani</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nakashima</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rahtu</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heikkilä</surname>
          </string-name>
          , J.:
          <article-title>Rethinking the Evaluation of Video Summaries</article-title>
          .
          <source>In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <fpage>15</fpage>
          -
          <lpage>20</lpage>
          June 2019, pp.
          <fpage>7588</fpage>
          -
          <lpage>7596</lpage>
          (
          <year>2019</year>
          ) doi: 10.1109/CVPR.
          <year>2019</year>
          .00778
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Panda</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Das</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ernst</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roy-Chowdhury</surname>
            ,
            <given-names>A.K.</given-names>
          </string-name>
          :
          <article-title>Weakly Supervised Summarization of Web Videos</article-title>
          .
          <source>In: 2017 IEEE International Conference on Computer Vision</source>
          (ICCV),
          <fpage>22</fpage>
          -29
          <source>October</source>
          <year>2017</year>
          , pp.
          <fpage>3677</fpage>
          -
          <lpage>3686</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zheng</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            , R., Han,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Qu</surname>
            ,
            <given-names>X.:</given-names>
          </string-name>
          <article-title>A bottom-up summarization algorithm for videos in the wild</article-title>
          .
          <source>EURASIP Journal on Advances in Signal Processing</source>
          , vol.
          <year>2019</year>
          , #
          <volume>15</volume>
          (
          <year>2019</year>
          ) doi: 10.1186/s13634-019-0611-y
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Rochan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Video Summarization by Learning from Unpaired Data</article-title>
          .
          <source>In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <fpage>15</fpage>
          -
          <lpage>20</lpage>
          June 2019, pp.
          <fpage>7894</fpage>
          -
          <lpage>7903</lpage>
          (
          <year>2019</year>
          ) doi: 10.1109/CVPR.
          <year>2019</year>
          .00809
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Cai</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zuo</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>L.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
          </string-name>
          , L.:
          <article-title>Weakly-supervised Video Summarization using Variational Encoder-Decoder and Web Prior</article-title>
          . In: Ferrari V.,
          <string-name>
            <surname>Hebert</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sminchisescu</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weiss</surname>
            <given-names>Y</given-names>
          </string-name>
          . (eds) Computer Vision - ECCV
          <year>2018</year>
          .
          <source>ECCV 2018. Lecture Notes in Computer Science</source>
          , vol.
          <volume>11218</volume>
          , pp.
          <fpage>193</fpage>
          -
          <lpage>210</lpage>
          (
          <year>2018</year>
          ) doi: 10.1007/978-3-
          <fpage>030</fpage>
          -01264-9_
          <fpage>12</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Song</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vallmitjana</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stent</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jaimes</surname>
            <given-names>A</given-names>
          </string-name>
          .:
          <article-title>TVSum: Summarizing Web Videos Using Titles</article-title>
          .
          <source>In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <fpage>7</fpage>
          -
          <lpage>12</lpage>
          June 2015, pp.
          <fpage>5179</fpage>
          -
          <lpage>5187</lpage>
          (
          <year>2015</year>
          ) doi: 10.1109/CVPR.
          <year>2015</year>
          .7299154
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>De Avila</surname>
            ,
            <given-names>S. E. F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lopes</surname>
            ,
            <given-names>A. P. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>da Luz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Jr.</surname>
          </string-name>
          ,
          <string-name>
            <surname>Araujo</surname>
            ,
            <given-names>A.D.L.</given-names>
          </string-name>
          :
          <article-title>Vsumm: A mechanism designed to produce static video summaries and a novel evaluation method</article-title>
          .
          <source>Pattern Recognition Letters</source>
          , vol.
          <volume>32</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>56</fpage>
          -
          <lpage>68</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xing</surname>
            <given-names>E.P.</given-names>
          </string-name>
          :
          <article-title>Quasi Real-Time Summarization for Consumer Videos</article-title>
          .
          <source>In: 2014 IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <fpage>23</fpage>
          -
          <lpage>28</lpage>
          June 2014, pp.
          <fpage>2513</fpage>
          -
          <lpage>2520</lpage>
          (
          <year>2014</year>
          ) doi: 10.1109/CVPR.
          <year>2014</year>
          .322
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Gygli</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grabner</surname>
            , H.,
            <given-names>Van Gool L.</given-names>
          </string-name>
          :
          <article-title>Video summarization by learning submodular mixtures of objectives</article-title>
          .
          <source>In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          ,
          <fpage>7</fpage>
          -
          <lpage>12</lpage>
          June 2015, pp.
          <fpage>3090</fpage>
          -
          <lpage>3098</lpage>
          (
          <year>2015</year>
          ) doi: 10.1109/CVPR.
          <year>2015</year>
          .7298928
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Gygli</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grabner</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riemenschneider</surname>
            , H.,
            <given-names>Van Gool L.</given-names>
          </string-name>
          :
          <article-title>Creating summaries from user videos</article-title>
          . In: Fleet D.,
          <string-name>
            <surname>Pajdla</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schiele</surname>
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tuytelaars</surname>
            <given-names>T</given-names>
          </string-name>
          . (eds) Computer Vision - ECCV
          <year>2014</year>
          .
          <source>ECCV 2014. Lecture Notes in Computer Science</source>
          , vol
          <volume>8695</volume>
          , pp.
          <fpage>505</fpage>
          -
          <lpage>520</lpage>
          (
          <year>2014</year>
          ) doi: 10.1007/978-3-
          <fpage>319</fpage>
          -10584-0_
          <fpage>33</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chao</surname>
          </string-name>
          , WL.,
          <string-name>
            <surname>Sha</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grauman</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Video Summarization with Long ShortTerm Memory</article-title>
          . In: Leibe B.,
          <string-name>
            <surname>Matas</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sebe</surname>
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Welling</surname>
            <given-names>M</given-names>
          </string-name>
          . (eds) Computer Vision - ECCV
          <year>2016</year>
          .
          <source>ECCV 2016. Lecture Notes in Computer Science</source>
          , vol.
          <volume>9911</volume>
          , pp.
          <fpage>766</fpage>
          -
          <lpage>782</lpage>
          (
          <year>2016</year>
          ) doi: 10.1007/978-3-
          <fpage>319</fpage>
          -46478-7_
          <fpage>47</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Mahasseni</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lam</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Todorovic</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Unsupervised video summarization with adversarial LSTM networks</article-title>
          .
          <source>In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          ,
          <fpage>21</fpage>
          -
          <issue>26</issue>
          <year>July 2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          (
          <year>2017</year>
          ) doi: 10.1109/CVPR.
          <year>2017</year>
          .318
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Koprinska</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clark</surname>
          </string-name>
          , J.:
          <article-title>Video Summarization and Browsing Using Growing Cell Structures</article-title>
          .
          <source>In: International Joint Conference on Neural Networks (IJCNN)</source>
          ,
          <fpage>25</fpage>
          -
          <issue>29</issue>
          <year>July 2004</year>
          , pp.
          <fpage>2601</fpage>
          -
          <lpage>2606</lpage>
          (
          <year>2004</year>
          ) doi: 10.1109/IJCNN.
          <year>2004</year>
          .1381056
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Koprinska</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clark</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carrato</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>VideoGCS - A Clustering-Based System for Video Summarization and Browsing</article-title>
          .
          <source>In: The Proceedings of the 6th COST 276 Workshop</source>
          , Thessaloniki, Greece, May
          <year>2004</year>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>40</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>