<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>TUB-IRML at MediaEval 2014 Visual Privacy Task: Privacy Filtering through Blurring and Color Remapping</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dominique Maniry</string-name>
          <email>dmaniry@cs.tu-berlin.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Esra Acar</string-name>
          <email>esra.acar@tu-berlin.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sahin Albayrak</string-name>
          <email>sahin.albayrak@dai-labor.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DAI Laboratory, Technische Universität Berlin</institution>
          <addr-line>Ernst-Reuter-Platz 7, TEL 14, 10587 Berlin</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2014</year>
      </pub-date>
      <fpage>16</fpage>
      <lpage>17</lpage>
      <abstract>
        <p>This paper describes the participation of the TUB-IRML group in the MediaEval 2014 Visual Privacy Task. We present a method for the privacy protection of individuals in surveillance videos. To achieve this, our method obscures both the shape and appearance of identity-related regions through blurring and color remapping. Intelligibility is preserved by displaying edges, and anomalous events are hinted at by distinct colors. The experimental results obtained on surveillance videos show that our method considerably outperforms the other participating teams in terms of privacy score. The drawback, however, is that its intelligibility scores are below average.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
        The MediaEval 2014 Visual Privacy Task addresses the
problem of privacy protection in video surveillance, which is
gaining more and more importance due to concerns raised
about the privacy of monitored individuals. A detailed
description of the task, the dataset and the evaluation
methodologies is given in the paper by Badii et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. As part
of the MediaEval 2014 Visual Privacy Task, our privacy
filter is evaluated using the Privacy Evaluation Video Dataset
(PEViD) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>In the context of this task, we propose a simple but
effective privacy filter which aims not only at obscuring facial
identity, but also at protecting other identity-revealing
features such as accessories and clothing. This is achieved by
obscuring both shape and appearance of identity-revealing
regions in videos.</p>
    </sec>
    <sec id="sec-2">
      <title>2. THE PROPOSED METHOD</title>
      <p>The application of our privacy filter is a four-step
process. First, we convert each frame into grayscale and apply
a Gaussian blur to all privacy-related regions of the frame.
The intensity of the blurring can be controlled using three
different blur levels (obtained by varying the standard
deviation of the Gaussian kernel) for regions labeled with low,
medium and high privacy requirements.</p>
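      <p>The following sketch illustrates this first step, assuming an
OpenCV-based implementation; the mapping from privacy levels to
Gaussian standard deviations is an illustrative assumption, not our
submitted configuration (which used a constant blur level of 14).</p>
      <preformat>
# Step 1 sketch: grayscale conversion and per-region Gaussian blur.
# The sigma values per privacy level are illustrative assumptions.
import cv2

BLUR_SIGMA = {"low": 5, "medium": 9, "high": 14}

def blur_region(frame_bgr, region, privacy_level):
    """Blur one annotated region (x, y, w, h) of a BGR frame in grayscale."""
    x, y, w, h = region
    gray = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # ksize=(0, 0) lets OpenCV derive the kernel size from the given sigma
    return cv2.GaussianBlur(gray, (0, 0), BLUR_SIGMA[privacy_level])
      </preformat>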
      <p>As a second step, the pixel values are quantized to a given
number of values (e.g., 8). These values are remapped to
either a green or red color with the corresponding pixel
intensity, so that the relation between light and dark regions
remains the same. The red color is used whenever an anomalous
event (e.g., fighting, stealing or dropping a bag) happens. In
other cases (i.e., non-anomalous), the individuals are shown
in a green color. The aim of this red-green coloring is to
enable human operators to focus on any event which requires
particular attention. The second step removes, depending
on the blur level and number of colors, most of the shape
and appearance information that could potentially reveal a
person's identity, gender or ethnicity, while preserving their
movements and actions.</p>
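      <p>A minimal sketch of this quantization and remapping step, again
assuming OpenCV/NumPy conventions (BGR channel order); only the number
of colors (8) corresponds to our submitted run.</p>
      <preformat>
# Step 2 sketch: quantize the blurred grayscale values and remap them
# to a green (normal) or red (anomalous event) color.
import numpy as np

def remap_colors(blurred_gray, n_colors=8, anomalous=False):
    """Quantize to n_colors intensity levels and colorize the region."""
    step = 256 // n_colors
    quantized = (blurred_gray // step) * step  # keeps the light/dark ordering
    colored = np.zeros(quantized.shape + (3,), dtype=np.uint8)
    channel = 2 if anomalous else 1  # BGR layout: index 2 is red, 1 is green
    colored[:, :, channel] = quantized
    return colored
      </preformat>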
      <p>In the third step, the obscured image Î(x, y) is blended
back into the original frame I(x, y) to create a smooth
transition between obscured regions and the background. The
blending mask mask(x, y) is a binary image in which
annotated regions have a value of 1 and the remaining regions have a
value of 0. The smoothing is achieved by applying a Gaussian
blur to the blending mask. The result is:
result(x, y) = mask(x, y) · Î(x, y) + (1 − mask(x, y)) · I(x, y)</p>
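      <p>A possible implementation of this blending step is sketched below;
the standard deviation used to smooth the mask is an illustrative
assumption.</p>
      <preformat>
# Step 3 sketch: blend the obscured image back into the original frame
# using a blending mask smoothed with a Gaussian blur.
import cv2
import numpy as np

def blend(original_bgr, obscured_bgr, mask, sigma=5.0):
    """result = m * obscured + (1 - m) * original, with m the blurred mask."""
    m = cv2.GaussianBlur(mask.astype(np.float32), (0, 0), sigma)
    m = m[:, :, None]  # broadcast the mask over the three color channels
    out = m * obscured_bgr.astype(np.float32) \
        + (1.0 - m) * original_bgr.astype(np.float32)
    return out.astype(np.uint8)
      </preformat>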
      <p>In the final step, we aim at better intelligibility by
including some shape information in the image. The obscured
regions are overlaid with edges obtained with Canny edge
detection. Edges in regions with a high privacy
requirement (i.e., faces) are discarded in order not to reveal identity
through the edges of facial features. The remaining edges
are emphasized using morphological dilation with a 3×3
circular structuring element.</p>
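      <p>The edge overlay could be implemented as follows; the Canny
thresholds and the white overlay color are illustrative assumptions,
while the 3×3 structuring element follows the description above.</p>
      <preformat>
# Step 4 sketch: overlay dilated Canny edges, discarding edges inside
# high-privacy (face) regions so that facial features are not revealed.
import cv2

def overlay_edges(result_bgr, gray_frame, face_mask, low=100, high=200):
    """Draw dilated Canny edges of the frame onto the obscured result."""
    edges = cv2.Canny(gray_frame, low, high)
    edges[face_mask.astype(bool)] = 0          # drop edges of facial features
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    edges = cv2.dilate(edges, kernel)          # emphasize the remaining edges
    out = result_bgr.copy()
    out[edges.astype(bool)] = (255, 255, 255)  # assumed white edge overlay
    return out
      </preformat>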
    </sec>
    <sec id="sec-3">
      <title>3. RESULTS AND DISCUSSION</title>
      <p>Our submitted run was created using a constant blur level
of 14 for all three privacy levels. The number of colors is
8. This choice of parameters favors privacy over
intelligibility. The submissions of eight teams have been evaluated
in a user study. The user study has been conducted with
three different groups (i.e., streams). Stream 1 consists of
230 crowd-sourcing workers, Stream 2 consists of 65 people working
at Thales (mainly in Research &amp; Development, R&amp;D), and
Stream 3 has 59 participants from sectors including R&amp;D,
data protection and law enforcement from all around the
world. The results for our method and the median across
all 8 submissions can be seen in Table 3.</p>
      <p>Among the participants, our proposed method achieved
the highest privacy score. The privacy protection of our
method still comes with a trade-off in intelligibility, as seen
by the consistent below-average scores. We think that this
could be improved by adding additional hints during and
after anomalous events.</p>
      <p>The appropriateness/pleasantness score is also consistently
below average. One possible cause for this is that the
privacy filter obscures the whole rectangular region around a
person, including a significant portion of the background.
This could be improved with a pixel-wise foreground
segmentation. However, this requires the foreground
segmentation to be very accurate, since every false positive could
potentially reveal identity-related information. Another
unpleasant artifact is the blinking of the overlaid edges. When
edge responses oscillate around the Canny thresholds from frame
to frame, the edges flicker and become distracting. We think that
adaptive thresholds or temporal smoothing should be explored as future work.</p>
      <p>The evaluations of Stream 1, Stream 2 and Stream 3 for
our method and the other participating teams are
summarized in Figure 3, Figure 4 and Figure 5, respectively.</p>
    </sec>
    <sec id="sec-4">
      <title>4. CONCLUSIONS</title>
      <p>In this paper, we proposed a privacy filter that obscures
both shape and appearance of privacy-related regions. The
user study has shown that our method is very effective at
protecting privacy. As future work, we plan to evaluate
different parameters to find a suitable balance between privacy
and intelligibility for different contexts. Another interesting
direction would be to improve the appropriateness by
reducing the obscured regions using a pixel-wise segmentation.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>The research leading to these results has received funding
from the European Community FP7 under grant agreement
number 261743 (NoE VideoSense).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name><given-names>A.</given-names> <surname>Badii</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Ebrahimi</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Fedorczak</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Korshunov</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Piatrik</surname></string-name>,
          <string-name><given-names>V.</given-names> <surname>Eiselein</surname></string-name>, and
          <string-name><given-names>A.</given-names> <surname>Al-Obaidi</surname></string-name>.
          <article-title>Overview of the MediaEval 2014 Visual Privacy Task</article-title>.
          In <source>MediaEval 2014 Workshop</source>, Barcelona, Spain, October 16-17, <year>2014</year>.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name><given-names>P.</given-names> <surname>Korshunov</surname></string-name> and
          <string-name><given-names>T.</given-names> <surname>Ebrahimi</surname></string-name>.
          <article-title>PEViD: Privacy Evaluation Video Dataset</article-title>.
          In <source>Applications of Digital Image Processing XXXVI</source>, San Diego, CA, August 25-29, <year>2013</year>.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>