<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>TELECOM ParisTech at ImageClef 2009: Large Scale Visual Concept Detection and Annotation Task</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marin Ferecatu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hichem Sahbi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institut TELECOM, TELECOM ParisTech</institution>
          ,
          <addr-line>46 rue Barrault, 75634 Paris Cedex</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we describe the participation of TELECOM ParisTech in the Large Scale Visual Concept Detection and Annotation Task of the ImageClef 2009 challenge. This year, the focus was on extending (i) the amount of data available for training and testing, and (ii) the number of concepts to be annotated. We use Canonical Correlation Analysis to infer a latent space in which text and visual descriptions are highly correlated. Starting from the visual description of a test image, we first map it into the latent space, then predict the text features (and hence the annotations) that best fit the visual ones in the latent space. Our method is very fast while achieving good results.</p>
      </abstract>
      <kwd-group>
        <kwd>Image annotation</kwd>
        <kwd>Canonical Correlation Analysis</kwd>
        <kwd>Text and image descriptors</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Correspondence</title>
      <p>Hichem.Sahbi@telecom-paristech.fr</p>
    </sec>
    <sec id="sec-2">
      <title>Summary of our approach</title>
      <p>
        This year, the VCDAT focuses on scaling annotation algorithms to thousands of images and possibly more,
which is a very difficult task. Image annotation is still an unsolved problem, and recent state-of-the-art
algorithms perform less than satisfactorily on most image databases [2, 5]. Image annotation is a
branch of computer vision related to object detection and recognition; its goal is to decide whether an image
contains one or more target objects and, if so, to find their locations. This problem is well studied and
reasonably well solved for particular objects such as faces [
        <xref ref-type="bibr" rid="ref6 ref7">18, 17</xref>
        ] but remains reputedly difficult for many
other classes of objects [
        <xref ref-type="bibr" rid="ref5">10, 15</xref>
        ].
      </p>
      <p>
        Generally, local approaches, for instance those relying on keypoint extraction or image segmentation,
are likely to offer better results, but at the expense of a much higher computational effort [
        <xref ref-type="bibr" rid="ref2 ref4">12, 14</xref>
        ].
Computational issues aside, the VCDAT uses 53 concepts, many of which are holistic1, so local (and also
object-based) methods are unlikely to provide decent results at this level of difficulty.
Furthermore, local approaches run up against the extreme variability of objects (concepts) within scenes
and the limited number of training images available to capture this variability.
      </p>
      <p>Instead, we focus on global approaches, i.e., those which extract global image descriptions and easily
handle large-scale databases and annotations. This scalability comes at the cost of a slight decrease in
precision. Moreover, as we shall see, adding new concepts and training our system on them is
straightforward and does not require a separate model for each one.</p>
      <p>The remainder of this paper is organized as follows: we first describe our visual image and text features
(§3), then we discuss the application of Canonical Correlation Analysis (CCA) to infer a latent
space where the two underlying representations are highly correlated (§4). Given the visual description of
a new (test) image, we first project it into the CCA latent space, then we infer text features as a linear
combination of basic concepts which correlate best with the visual one. Finally, we back-project the
resulting text features into the (input) concept space and normalize the projection coefficients between
0 and 1. A value close to 1 means that the corresponding concept is likely to be present in an image, while
a value close to 0 corresponds to an unlikely concept.</p>
    </sec>
    <sec id="sec-3">
      <title>Text and visual content description</title>
      <p>Visual descriptors. Global image descriptors have several properties that are very desirable in our case: (a)
they have a small memory footprint and thus fit into standard PCs without any specific storage requirements;
(b) they are very fast to compute, as they involve simple distance computations, guaranteeing real-time
responses; and (c) they do not embed any a priori object model and thus can be applied to any target
category. Indeed, global descriptors have been shown to perform well in this framework, for example with
machine learning and data mining algorithms [2, 9, 4].</p>
      <p>
        More precisely, we use a combination of color, texture and shape features, as follows. To represent
color we use weighted color histograms: they provide a summary description of the color information,
including a spatial measure that emphasizes image regions that are interesting with respect to the visual
content [16, 1]. As texture features we use the power spectral density distribution in the complex plane,
which has been shown to perform well when combined with color and shape histograms [
        <xref ref-type="bibr" rid="ref1">11</xref>
        ]. Roughly,
a high energy spectrum concentrated at low frequencies highlights large-scale information in an image,
while high frequencies correspond to textured regions (small-scale details). To describe the shape
content of an image we use standard edge orientation histograms. First, edges are extracted from the image,
then the gradient is computed using only the edge pixels. The orientation of the gradient is quantized w.r.t.
the angle, resulting in a histogram that is sensitive to the general flow of lines in the image [8]. More
details on the image descriptors can be found in [3].
      </p>
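      <p>The edge orientation histogram described above can be sketched as follows. This is only an illustrative implementation: the bin count and edge threshold are hypothetical parameters, not values taken from the paper, and a simple finite-difference gradient stands in for a real edge detector.</p>

```python
import numpy as np

def edge_orientation_histogram(gray, n_bins=8, edge_thresh=0.2):
    """Histogram of gradient orientations restricted to edge pixels.

    `gray` is a 2-D float array; `n_bins` and `edge_thresh` are
    illustrative choices, not values from the paper.
    """
    # Central-difference gradients (a simple stand-in for an edge detector).
    gy, gx = np.gradient(gray)
    magnitude = np.hypot(gx, gy)
    # Keep only "edge" pixels: locations with a strong gradient.
    edges = magnitude > edge_thresh * magnitude.max()
    # Quantize gradient orientation at edge pixels into n_bins over [0, 2*pi).
    angles = np.arctan2(gy[edges], gx[edges]) % (2 * np.pi)
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, 2 * np.pi))
    # L1-normalize so images of different sizes are comparable.
    return hist / max(hist.sum(), 1)
```

      <p>On a synthetic image containing a single vertical step edge, all the mass of this histogram falls into the bin of horizontal gradients, reflecting the "general flow of lines" the text mentions.</p>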
      <p>Text descriptors. We use the annotations provided for the training set to compute the text features.
These have 53 dimensions, one for each concept c, indicating the presence or absence of c. The
resulting feature vector is very sparse; indeed, when applying principal component analysis (PCA), we found
that 48 dimensions are sufficient to capture 100% of the statistical variance of the training data.</p>
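      <p>This kind of sparsity check can be sketched with a few lines of linear algebra: count how many principal components carry non-negligible variance. The tolerance and the synthetic data below are our own illustrative choices.</p>

```python
import numpy as np

def dims_for_full_variance(X, tol=1e-10):
    """Number of principal components needed to capture all of the
    variance of the rows of X (e.g. 53-d binary concept vectors)."""
    Xc = X - X.mean(axis=0)                       # center the data
    # Squared singular values are proportional to per-component variance.
    s = np.linalg.svd(Xc, compute_uv=False)
    variance = s ** 2
    # Count components whose variance is non-negligible relative to the total.
    return int(np.sum(variance > tol * variance.sum()))
```

      <p>A linear dependency among the concept columns (e.g. a concept implied by another) shows up as a drop in this count, which is what the paper observes when 48 dimensions suffice for 53 concepts.</p>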
      <p>1Holistic means that the annotation is based on a global impression of a scene and not necessarily related to its physical objects.</p>
    </sec>
    <sec id="sec-4">
      <title>Prediction using CCA</title>
      <p>Canonical Correlation Analysis was first introduced by Hotelling [7] and is used to capture linear
relationships between two (or more) ordered2 sample sets in different feature spaces. Canonical correlation
analysis seeks a pair of linear transformations, one for each feature space, which map training and
testing data into a common latent space. The latter is built so as to maximize the correlation between
the sample sets from the different feature spaces [6].</p>
      <p>Given a test image, we first extract its visual feature vector and project it into the CCA latent space.
Then, we back-project the latent feature vector into the 53 dimensions of the text space using the
Moore-Penrose pseudo-inverse of the CCA transformation matrix. Annotations then correspond to the entries
among the 53 dimensions whose score is larger than a given threshold.</p>
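      <p>A minimal sketch of this fit-then-predict pipeline is given below, assuming a plain regularized CCA solved via the SVD of the whitened cross-covariance. The regularization constant, function names, and latent dimensionality are our own choices, not the authors' implementation.</p>

```python
import numpy as np

def inv_sqrt(S, eps=1e-8):
    """Inverse square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

def fit_cca(X, Y, k, reg=1e-6):
    """A minimal regularized CCA: returns projections Wx, Wy into a k-dim
    latent space maximizing correlation between rows of X (visual features)
    and Y (text features)."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    A, B = inv_sqrt(Sxx), inv_sqrt(Syy)
    # SVD of the whitened cross-covariance yields the canonical directions.
    U, s, Vt = np.linalg.svd(A @ Sxy @ B)
    return A @ U[:, :k], B @ Vt.T[:, :k]

def predict_concepts(x_visual, Wx, Wy):
    """Map visual features into the latent space, then back-project to the
    concept space with the Moore-Penrose pseudo-inverse of Wy."""
    z = x_visual @ Wx                     # visual -> latent
    return z @ np.linalg.pinv(Wy)         # latent -> concept scores
```

      <p>When the text features are (close to) a linear function of the visual ones, the back-projected scores reconstruct the concept vector, which is the property the thresholding step relies on.</p>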
      <p>The training data consists of 5,000 images sharing 53 concepts. Fig. 1 shows the distribution of the number
of images across the different concepts. The most frequent concept appears in 4656 images while the least frequent
annotates only 18 images. Notice that both “very frequent” and “very rare” concepts are difficult to learn,
as the underlying positive and negative classes are clearly unbalanced.</p>
      <p>[Fig. 1: number of training images per concept (y-axis: images, 0 to 5000).]</p>
      <p>We randomly split the training set in two parts: one used for learning the CCA transform (4,000 images)
and the other used to evaluate the performance (1,000 images). Since the output of the
algorithm has an asymptotically normal distribution, we normalize it to a mean of 0.5 and a standard
deviation of 1/6. This ensures that 99.7% of the predicted scores lie between 0 and 1. Scores less than 0
(resp. larger than 1) are mapped to 0 (resp. 1).</p>
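      <p>This normalization step can be sketched as follows; placing three standard deviations on either side of the mean inside [0, 1] is exactly why ~99.7% of scores need no clipping.</p>

```python
import numpy as np

def normalize_scores(raw):
    """Rescale approximately-normal raw scores to mean 0.5 and standard
    deviation 1/6, so about 99.7% (three sigmas) fall in [0, 1], then
    clip the remaining tails to the interval ends."""
    z = (raw - raw.mean()) / raw.std()    # standardize to mean 0, std 1
    return np.clip(0.5 + z / 6.0, 0.0, 1.0)
```
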
      <p>The evaluation measure we use is the annotation error, defined as the expected number of false negatives
and false positives. For each concept c, we fix a threshold τ (c) and annotate an image with c if the underlying
score is larger than τ (c). Notice that τ (c) is chosen so as to minimize the error rate. We then linearly
shift τ (c) to 0.5 in order to comply with the submission format.</p>
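      <p>The per-concept threshold selection and the shift to 0.5 can be sketched as below; the grid of candidate thresholds is an assumption on our part, not a detail from the paper.</p>

```python
import numpy as np

def best_threshold(scores, labels, grid=np.linspace(0.0, 1.0, 101)):
    """Pick tau minimizing false positives + false negatives for one
    concept on a held-out set, then shift scores so tau sits at 0.5."""
    # Total error (FP + FN) for each candidate threshold in the grid.
    errors = [np.sum((scores > t) != labels) for t in grid]
    tau = grid[int(np.argmin(errors))]
    # Linear shift so the decision boundary becomes 0.5 (submission
    # format); clipping keeps the shifted scores in [0, 1].
    return tau, np.clip(scores - tau + 0.5, 0.0, 1.0)
```

      <p>After the shift, thresholding the returned scores at 0.5 yields exactly the same annotation decisions as thresholding the raw scores at τ.</p>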
      <p>On these challenging test images, our annotation method achieves reasonable performance:
the false positive error rate is 0.18 while the false negative rate reaches 0.21. Moreover, our method is
very efficient; in practice it takes about a second to complete both training and prediction on a standard
Pentium-M processor (2.5 GHz).</p>
      <p>We also extended our method to use the ontology suggested by the challenge. The text features
were enriched using this ontology to include all intermediate concepts, and the annotations were then
propagated along the hypernym tree. Notice that the predictions include only the 53 concepts required by the
benchmark. However, the ontology is too small to provide a noticeable improvement. Indeed, its
total number of nodes is 68, of which 53 correspond to the candidate annotations. Again, we
found that the text features still live in a subspace of 48 dimensions, which clearly shows that the new
extended concepts provide the same amount of information as the initial ones.</p>
      <p>2One may define any arbitrary order for each sample set, but that order must be kept across the different feature spaces.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion and perspectives</title>
      <p>In this work we presented the participation of TELECOM ParisTech in the Large Scale Visual Concept
Detection and Annotation Task at ImageClef 2009. This year the task focused on the scalability of
annotation methods to large databases. Consequently, we use global, fast and easy-to-compute image descriptors
that require very few computational resources. Our method constructs a latent space, using Canonical
Correlation Analysis, in which text and image features are highly correlated. It is extremely fast, running in
less than a second for both training and testing on a standard 2.5 GHz PC, and makes annotation effective
and efficient enough to handle large-scale databases.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>This work was supported by the French National Research Agency (ANR) under the AVEIR project,
ANR-06-MDCA-002.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[1] Nozha Boujemaa, Julien Fauqueur, Marin Ferecatu, François Fleuret, Valérie Gouet, Bertrand Le
Saux, and Hichem Sahbi. Ikona: Interactive generic and specific image retrieval. In Proceedings of
the International Workshop on Multimedia Content-Based Indexing and Retrieval (MMCBIR'2001),
2001.</p>
      <p>[2] Ritendra Datta, Dhiraj Joshi, Jia Li, and James Wang. Image retrieval: Ideas, influences, and trends
of the new age. ACM Computing Surveys, 40(2):5:1–60, 2008.</p>
      <p>[3] Marin Ferecatu. Image retrieval with active relevance feedback using both visual and keyword-based
descriptors. PhD thesis, INRIA—University of Versailles Saint-Quentin-en-Yvelines, France, 2005.</p>
      <p>[4] Theo Gevers and Arnold W. M. Smeulders. Content-based image retrieval: An overview. In
G. Medioni and S. B. Kang, editors, Emerging Topics in Computer Vision. Prentice Hall, 2004.</p>
      <p>[5] Allan Hanbury. A survey of methods for image annotation. Journal of Visual Languages and
Computing, 19(5):617–627, 2008.</p>
      <p>[6] David R. Hardoon, Sandor Szedmak, and John Shawe-Taylor. Canonical correlation analysis: An
overview with application to learning methods. Neural Computation, 16(12):2639–2664, 2004.</p>
      <p>[7] H. Hotelling. Relations between two sets of variates. Biometrika, 28:312–377, 1936.</p>
      <p>[8] A. K. Jain and A. Vailaya. Shape-based retrieval: a case study with trademark image databases.
Pattern Recognition, 31(9):1369–1390, 1998.</p>
      <p>[9] M. Lew, N. Sebe, C. Djeraba, and R. Jain. Content-based multimedia information retrieval:
State-of-the-art and challenges. ACM Transactions on Multimedia Computing, Communications, and
Applications, 2(1):1–19, 2006.</p>
      <p>[10] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of
Computer Vision, 60(2):91–110, 2004.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B.S.</given-names>
            <surname>Manjunath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Salembier</surname>
          </string-name>
          , and T. Sikora, editors. Introduction to MPEG-7: Multimedia Content Description Interface. Wiley,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Krystian</given-names>
            <surname>Mikolajczyk</surname>
          </string-name>
          and
          <string-name>
            <given-names>Cordelia</given-names>
            <surname>Schmid</surname>
          </string-name>
          .
          <article-title>A performance evaluation of local descriptors</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis &amp; Machine Intelligence</source>
          ,
          <volume>27</volume>
          (
          <issue>10</issue>
          ):
          <fpage>1615</fpage>
          -
          <lpage>1630</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Stefanie</given-names>
            <surname>Nowak</surname>
          </string-name>
          and
          <string-name>
            <given-names>Hanna</given-names>
            <surname>Lukashevich</surname>
          </string-name>
          .
          <article-title>Multilabel classification evaluation using ontology information</article-title>
          .
          <source>In Proc. of the IRMLES Workshop</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Jean</given-names>
            <surname>Ponce</surname>
          </string-name>
          , Martial Hebert, Cordelia Schmid, and
          <string-name>
            <given-names>Andrew</given-names>
            <surname>Zisserman</surname>
          </string-name>
          .
          <article-title>Towards category-level object recognition</article-title>
          , volume
          <volume>4170</volume>
          . Springer,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Torralba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Murphy</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W. T.</given-names>
            <surname>Freeman</surname>
          </string-name>
          .
          <article-title>Sharing visual features for multiclass and multiview object detection</article-title>
          .
          <source>In Proc. of CVPR</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>P.</given-names>
            <surname>Viola</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Jones</surname>
          </string-name>
          .
          <article-title>Robust real-time object detection</article-title>
          .
          <source>In Proc. of ICCV</source>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chellappa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Phillips</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rosenfeld</surname>
          </string-name>
          .
          <article-title>Face recognition: A literature survey</article-title>
          .
          <source>ACM Computing Surveys</source>
          ,
          <volume>35</volume>
          (
          <issue>4</issue>
          ):
          <fpage>399</fpage>
          -
          <lpage>458</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>