<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>IUI Workshops'19, Los Angeles,
USA, March 20, 2019</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Analysis of Seven Editing Bias in Movie Trailer Based on Editing Features</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Honoka Kakimoto</string-name>
          <email>dmi91695@kwansei.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yukiko Kawai</string-name>
          <email>kawai@cc.kyoto-su.ac.jp</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yuanyuan Wang</string-name>
          <email>y.wang@yamaguchi-u.ac.jp</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kazutoshi Sumiya</string-name>
          <email>sumiya@kwansei.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kwansei Gakuin University</institution>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Kyoto Sangyo University</institution>
          ,
          <addr-line>Osaka University</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Yamaguchi University</institution>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>20</volume>
      <issue>2019</issue>
      <abstract>
        <p>Recently, movie trailers have been created for specific target audiences. Movie trailers show little diversity because the types of effects in a trailer are limited. It is therefore difficult to edit a trailer that caters to the different preferences of various users. To solve this problem, we define seven editing biases over the images, audio, and captions of trailers, based on definitions in related work and the editing features of a movie, and we propose a method for analyzing the degrees of the seven editing biases in a movie trailer. We then propose a user interface that presents the results of this trailer analysis based on its editing features.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Human-centered computing → Information visualization</p>
      <p>Keywords: movie trailer; editing features; video analysis</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>Currently, movie trailers are edited using various elements, such
as sound effects, lines, and captions. However, at most four trailers
are produced for each movie, and the length of each trailer is
only a few minutes. Because a trailer is edited for a certain
target audience, the scenes and effects that can be included in the
trailer are limited. Therefore, it is difficult to edit a trailer that caters
to the various users in the target audience, and viewers
may be lost if the trailer is not attractive enough for them.</p>
      <p>
        To solve this problem, in Section 4, we define seven editing biases
(scene reordering, length of scenes, background music, emphasis
of characters, number of lines, number of topic words, and change
of sentiment polarity) that occur when movies are summarized
and edited into trailers, based on definitions in related work and
the editing features of a movie. In addition, we propose a method
for analyzing the degrees of these seven editing biases in a movie trailer.
      </p>
      <p>
        Giannetti and Leach [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] analyzed the basic techniques and effects of
camera work and shots in their study. According to their analysis,
shots such as long and close shots emphasize specific topics, such
as the characters and background in a scene. However, these
methods are not effective in current movie trailers because scenes in
the movie are shortened by the editor when they are summarized for
a trailer. This means each scene of the movie can lose its continuity
when edited down. Video lighting is a similar case. Wheeler [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]
and Kanematsu et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] described lighting as one of the important
factors of video work because lighting techniques can be used
to control the mood and atmosphere of a scene. However, lighting
may not be effective enough in the extremely short sequences of a
movie trailer. Therefore, in this paper, we do not define features of
shots and lighting as editing biases for movie trailers. Ikenobe [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
defined a movie trailer as a series of scenes that a movie editor uses
to attract a target audience. Moreover, the order of each scene is
rearranged, not to follow the storyline, but to emphasize impressive
scenes. The method of reordering scenes can produce a trailer that
has a completely different story from that of the movie. Therefore,
the reordering of scenes is defined as an important editing feature
in this paper.
      </p>
      <p>
        Tomino [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] noted that sound elements such as background
music and sound effects can give movies continuity. Ikenobe [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]
also asserted that music is an essential factor for attracting viewer
interest in a movie trailer. However, Vineyard [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] noted that the
sound design of a movie is not a visible feature of editing. He defined
sound design as a factor that can determine the atmosphere of the
movie. Therefore, in this paper, background music is defined as
an essential editing feature that generates sentiment polarity.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Genre Classification</title>
      <p>
        Buckland [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] noted that the border between the genres of a movie
is not clear because one movie can have several features from
other movie genres. Hesham et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] proposed a system that uses
machine learning to classify movie genres based on the textual
features of a movie's subtitles. In their study, movies are also defined
as content that contains a mixture of genres. In contrast, the genre
classification system proposed by Ekenel et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] classifies genres
of TV programs and web videos with high precision by analyzing
audio and video features.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Video Summarization</title>
      <p>
        Li et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] proposed a method to generate a video summary of a
story with important scenes extracted from the movie based on the
plot summary from Wikipedia [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Another approach proposed
by Xie et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] is a video summarization system based on the
importance evaluation model. Although there are many video
summarization methods based on the story and important scenes of a
movie, fewer methods have been proposed that generate a movie
trailer based on the extraction of its elements. In this study, we
extract characteristic words and the names of the main characters
from the plot summary in Wikipedia.
      </p>
      <p>
        In the system proposed by Ercolessi et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], TV drama scenes
are detected and summarized based on automatic speaker
diarization and speech recognition. In our proposed method, text data from
the script and plot are used to analyze characteristic words and
to define an important editing bias for movie trailers.
      </p>
      <p>
        Money et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] surveyed video summarization systems, which
include systems for movies. Although there are many systems
that focus on the audio-visual cues of video content, there are few
systems analyzing the elements intentionally added to a movie
trailer.
      </p>
    </sec>
    <sec id="sec-5">
      <title>MOVIE TRAILER BIASES</title>
      <p>
        Two types of data are available from the creators of movies: the
script and official trailers. A trailer is a summary of a movie created
using various editing methods. Editors use these techniques
to attract the attention of the target audience and to leave an
impression on them; in this study, these editing techniques are defined as
seven editing biases. In addition, the scripts of movies are provided
on several websites, e.g., [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The script contains not only the
settings and background of the scenes, but also almost all the lines,
behavior, and facial expressions of the characters. The script is a
reproduction of a movie in the form of text. Therefore, our proposed
method analyzes the following five editing biases using text data:
scene reordering, emphasis of characters, number of lines, number
of topic words, and change in sentiment polarity. In our method,
editing biases are analyzed using Microsoft Video Indexer.1 The plot
and summary of a movie are provided by online databases such as
Wikipedia and the Internet Movie Database (IMDb) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In addition,
our proposed method analyzes the following two editing biases
using audio-visual data: length of scene and background music.
      </p>
      <p>In this paper, we define editing features as two groups of editing
biases: audio-visual biases and content biases. Audio-visual biases
include 1. Scene Reordering, 2. Length of Scenes, and 3. Background
Music (see Figure 1). Content biases include a. Emphasis of
Characters, b. Number of Lines, c. Number of Topics, and d. Change in
Sentiment Polarity (see Figure 2).</p>
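      <p>Read concretely, this grouping amounts to a small lookup table. The sketch below is illustrative only; the key and bias names are our own, not from any released implementation:</p>
      <preformat>
```python
# Hypothetical grouping of the seven editing biases into the two
# groups defined above; all names are illustrative.
EDITING_BIASES = {
    "audio_visual": [
        "scene_reordering",             # 1
        "length_of_scenes",             # 2
        "background_music",             # 3
    ],
    "content": [
        "emphasis_of_characters",       # a
        "number_of_lines",              # b
        "number_of_topics",             # c
        "change_in_sentiment_polarity", # d
    ],
}

def group_of(bias):
    """Return which of the two groups a given editing bias belongs to."""
    for group, biases in EDITING_BIASES.items():
        if bias in biases:
            return group
    raise KeyError(bias)
```
      </preformat>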
    </sec>
    <sec id="sec-6">
      <title>DEGREES OF VIDEO EDITING BIASES</title>
    </sec>
    <sec id="sec-7">
      <title>Degrees of Audio-Visual Biases</title>
      <p>
        To analyze the degree of editing biases, we create the script of a
movie trailer by editing the script of the movie in advance. In this
section, Slow West 2 is analyzed as an example.
4.1.1 Scene Reordering. As Ikenobe [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] notes, scenes used in a
trailer are mixed up by the editors in an order that is unrelated
to the storyline in order to emphasize impressive scenes. In this
paper, we define the reordering of scenes as the editing bias that has
the largest influence on a viewer's understanding of the story and
atmosphere. In Figure 1, the black arrows indicate reordered scenes
and the gray arrows indicate scenes that remain in the order of the
storyline. To extract the degree of exchange as an editing bias, the
proposed system calculates the rate of scene reordering, that is, the
proportion of scenes that appear out of their original position in
the storyline.
4.1.2 Length of Scenes. As Giannetti and Leach [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] noted, differences
in the length of scenes and shots make different impressions on
viewers. A scene made up of short cuts excites viewers. In
contrast, a scene made with long cuts conveys detailed information
about the story. In this paper, we define the length of the scenes in a
trailer as an editing bias that determines a viewer's impression and
understanding of the story. The black regions of Figure 1 show the
scaling down of certain scenes and the gray regions indicate the
scaling up of certain scenes. To extract the degree of this editing
bias, we compare the component ratio of each scene in the trailer
and in the movie.
1Video Indexer, Azure Media Services, https://azure.microsoft.com/ja-jp/services/media-services/video-indexer/
2Slow West Official Trailer #1 (2015), https://www.youtube.com/watch?v=pFfsTsdJfF8
4.1.3 Background Music. Tomino [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] noted that sound elements
such as background music and sound effects can give movies
continuity. Vineyard [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] noted that the
sound design of a movie is not a visible feature of editing. He defined
sound design as a factor that can determine the atmosphere of the
movie. Because a movie trailer is a mixture of audio and visual content,
background music has a strong influence on a viewer's impression
regarding sentiment. Furthermore, background music is an effective
way to emphasize the mood of a certain situation in a trailer. Some
editors use music that does not appear in the movie to emphasize a
particular mood. In this paper, background music is defined as
one element of the sentiment polarity of a movie trailer.
      </p>
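      <p>The two scene-level degrees above (4.1.1 scene reordering and 4.1.2 length of scenes) can be sketched in code. The paper gives no closed-form definitions, so the adjacent-pair reordering rate and the running-time share difference below are one plausible reading, and all names are illustrative:</p>
      <preformat>
```python
def scene_reordering_rate(trailer_order):
    """Degree of scene reordering: the fraction of adjacent scene
    pairs in the trailer that appear out of storyline order.
    trailer_order lists, for each trailer scene, its index in the
    movie's storyline."""
    pairs = list(zip(trailer_order, trailer_order[1:]))
    if not pairs:
        return 0.0
    return sum(1 for a, b in pairs if a > b) / len(pairs)

def scene_length_bias(trailer_lens, movie_lens):
    """Degree of the length-of-scenes bias: the difference in the
    component ratio (share of total running time) of each scene
    between the trailer and the movie. Positive values mean the
    scene was scaled up relative to the movie, negative values
    mean it was scaled down."""
    t_total = sum(trailer_lens.values())
    m_total = sum(movie_lens.values())
    return {scene: trailer_lens[scene] / t_total - movie_lens[scene] / m_total
            for scene in trailer_lens}
```
      </preformat>
      <p>For example, a trailer whose scenes map to storyline positions [3, 1, 2, 5] has one out-of-order adjacent pair out of three, giving a reordering rate of 1/3.</p>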
    </sec>
    <sec id="sec-10">
      <title>Degrees of Content Biases</title>
      <p>
4.2.1 Emphasis of Characters. The emphasis of characters in the
trailer is a way for the editor not only to describe the main cast of
the movie but also to create a certain atmosphere. Certain characters
that create a negative sentiment, such as the villain, may be more
emphasized in the trailer. As a result, a viewer's impression and
the sentiment polarity regarding the trailer may be different from
those of the movie. In our proposed method, the diferences in
the appearance rates of the characters in the trailer and movie are
defined as the emphasis of particular characters. These rates are
determined by an analysis of the script. In the proposed method,
the main characters are determined based on the order of the cast
in Wikipedia and the number of their occurrences in the script (see
Figure 2(a)).
4.2.2 Number of Lines. The number of lines affects the
understanding of a story as an editing bias because lines, rather than visual
information, help viewers to understand the content of the movie.
In addition, the lines of the trailer and the contents of the movie
can be defined as an editing bias when there is a gap. The proposed
method extracts two types of text data: the lines themselves and the
script of the trailer without any lines. This is because the lines are
not always synchronized with the scenes of the trailer. The number
of lines is calculated based on the number of words in the trailer
(see Figure 2(b)).
4.2.3 Number of Topics. Similar to the number of lines, the
number of topics in the scene determines the amount of information
conveyed to viewers. In the proposed system, nouns and verbs are
extracted from the text data of the script as the characteristic words
of the scene (see Figure 2(c)).
4.2.4 Change in Sentiment Polarity. In addition to the background
music, the emphasis of the characters, topics in the lines, and script
also change the sentiment polarity of the trailer (see Figure 2(d)). In
this paper, we hypothesize that the background music has a larger
influence on the sentiment polarity of a viewer’s impression than
the other three biases.
</p>
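      <p>The script-based degrees above (4.2.1 emphasis of characters and 4.2.2 number of lines) can be sketched in the same spirit, assuming the script is available as (speaker, text) pairs; this representation and the function names are our assumptions, not the paper's exact formulation:</p>
      <preformat>
```python
from collections import Counter

def character_emphasis(trailer_lines, movie_lines, characters):
    """Degree of character emphasis: the difference between each
    character's appearance rate (share of spoken lines) in the
    trailer script and in the movie script."""
    def rates(lines):
        counts = Counter(speaker for speaker, _ in lines)
        total = sum(counts.values()) or 1
        return {ch: counts[ch] / total for ch in characters}
    trailer, movie = rates(trailer_lines), rates(movie_lines)
    return {ch: trailer[ch] - movie[ch] for ch in characters}

def number_of_lines_degree(trailer_lines):
    """Degree of the number-of-lines bias: the total word count
    over all lines in the trailer."""
    return sum(len(text.split()) for _, text in trailer_lines)
```
      </preformat>
      <p>A positive character_emphasis value means the character is more prominent in the trailer than in the movie, as with a villain emphasized to darken the trailer's mood.</p>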
    </sec>
    <sec id="sec-8">
      <title>USER INTERFACE</title>
      <p>The user interface for the analysis results is shown in Figure 3. First,
a user uploads a movie trailer file or pastes a URL of the movie
trailer into the video window, as illustrated by the orange box.
Second, the degrees of the seven editing biases are presented as seek
bars on the right side of the video window.</p>
      <p>The window under the video window presents several movie
genres closest to the balance of the degrees of the uploaded movie
trailer file. In addition, the typical balance of the degrees of seven
editing biases for each genre is shown in the same window. For
example, when a balance of the seven editing biases of a trailer
uploaded by a user is similar to the typical balance of movie genre
"Action", "Sci-fi" and "Adventure", these genres are presented as
buttons. The user can click these buttons and compare the differences
between the balance of the trailer and the typical one. Therefore,
the user may discover the uniqueness of an uploaded trailer or
adjust the editing technique of a trailer based on the result of the
analysis.</p>
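      <p>The genre matching described above can be sketched as a nearest-profile lookup over seven-dimensional vectors of bias degrees. The paper does not name a similarity measure, so cosine similarity and all names below are our assumptions:</p>
      <preformat>
```python
import math

def closest_genres(trailer_degrees, genre_profiles, k=3):
    """Rank genres by cosine similarity between the trailer's vector
    of seven bias degrees and each genre's typical vector; the top-k
    genre names would be shown as buttons in the interface."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0
    ranked = sorted(genre_profiles,
                    key=lambda g: cosine(trailer_degrees, genre_profiles[g]),
                    reverse=True)
    return ranked[:k]
```
      </preformat>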
    </sec>
    <sec id="sec-9">
      <title>CONCLUSION</title>
      <p>In this paper, we defined seven editing biases in a movie trailer
based on definitions in related work and editing features of a movie.
We then proposed a method that can extract seven editing biases:
scene reordering, scene length, background music, emphasis of
characters, number of lines, number of topic words, and change in
sentiment polarity. In addition, we proposed a user interface that
presents the degrees of the seven editing biases based on movie trailer
analysis.</p>
      <p>
        As future work, we plan to generate movie trailers from any
movie using the degrees of the seven editing biases with machine
learning methods. In addition, it is necessary to analyze the content of
the trailer to determine what kinds of content tend to be focused on
and how they affect a viewer's impressions. Furthermore, a method
of avoiding spoilers of the story is required so as not to decrease user
interest in the film. To detect spoilers, Maeda et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] proposed a
system that detects spoilers in user reviews based on their location
in story documents, such as the text data of literature. We plan
to propose a method that uses the script of the movie instead of the
text of literature.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Warren Buckland. 2003. Teach Yourself Film Studies. McGraw-Hill.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Hazim K. Ekenel, Tomas Semela, and Rainer Stiefelhagen. 2010. Content-based video genre classification using multiple cues. In Proceedings of AIEMPro 2010, ACM, Florence, Italy, October 24, 2010, 6 pages.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Philippe Ercolessi, Herve Bredin, and Christine Senac. 2012. StoViz: story visualization of TV series. In Proceedings of ACM MM 2012, Nara, Japan, October 29, 2012, 2 pages.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Louis D. Giannetti and Jim Leach. 2002. Understanding Movies. Pearson Prentice Hall.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Mohammad Hesham, Bishoy Hani, Nour Fouad, and Eslam Amer. 2018. Smart trailer: Automatic generation of movie trailer using only subtitles. In Proceedings of IWDRL 2018, Cairo, Egypt, March 29, 2018, 5 pages.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Naoko Ikenobe. 2002. Movie Trailer is Interesting. Kobunsha.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Internet Movie Database (IMDb). 2018. Retrieved from https://www.imdb.com/</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] The Internet Movie Script Database (IMSDb). 2018. Retrieved from https://www.imsdb.com/</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Yoshihisa Kanematsu, Koji Mikami, Kunio Kondo, and Mitsuru Kaneko. 2010. Research on digitizing lighting information and lighting simulation using digital scrapbook. The Journal of the Society for Art and Science 9, 2, 7 pages.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <source>[10] [Xueshan Li and Takehito Utsuro]. [</source>
          <year>2016</year>
          ].
          <article-title>[A method of assisting movie summarization based on key sentences of the plot]</article-title>
          .
          <source>In Proceedings of the JSAI</source>
          <year>2016</year>
          ,
          <article-title>Fukuoka</article-title>
          ,
          <string-name>
            <surname>JPN</surname>
          </string-name>
          , June, 6,
          <year>2016</year>
          , [5] pages.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] [Kyosuke Maeda, Yoshinori Hijikata and Satoshi Nakamura]. [2016].
          <article-title>[A Basic Study on Spoiler Detection from Review Comments Using Story Documents]</article-title>
          .
          <source>In Proceedings of the 2016 IEEE/WIC/ACM WI</source>
          , Omaha, Nebraska, USA, October,
          <volume>13</volume>
          ,
          <year>2016</year>
          , [6]pages.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] Arthur G. Money and Harry Agius. 2008. Video summarisation: A conceptual framework and survey of the state of the art. The Journal of Visual Communication and Image Representation 19, 2, 22 pages.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] Yoshiyuki Tomino. 2011. Principle of Image: Conceptism from Beginners to Professionals. Kinema Junpo.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Jeremy Vineyard. 2008. Setting Up Your Shots: Great Camera Moves Every Filmmaker Should Know. Michael Wiese Productions.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] Paul Wheeler. 2013. Digital Cinematography. Focal Press.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] Wikipedia. 2018. Retrieved from https://en.wikipedia.org/wiki/</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] Yuxiang Xie, Xidao Luan, Jingmeng He, Lili Zhang, Xin Zhang, and Chen Li. 2017. A movie summary generation system. In Proceedings of IEEE DSC 2017, Shenzhen, China, June 26, 2017, 4 pages.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>