<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>DS@GT at Touché: Image Search and Ranking via CLIP and Image Generation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Benjamin Ostrower</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Patcharapong Aphiwetsa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Georgia Institute of Technology</institution>
          ,
          <addr-line>225 North Avenue, Atlanta, 30332</addr-line>
          ,
          <country country="US">United States</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>Our team made two submissions to the task "Image Retrieval for Arguments", where our submissions focused on retrieving images. Our two runs made use of CLIP embeddings and a comparison against generated images. The exponential growth of digital imagery has profoundly influenced fields ranging from social media and entertainment to scientific research and healthcare. The importance and proliferation of visual media as a form of efficient communication will only accelerate, hence the phrase "a picture is worth a thousand words". Touché offers a shared task on selecting the most relevant images from a crawled corpus for a set of arguments, and we entered this task to improve on solutions for retrieving images related to arguments. We restricted our solutions to the images and their descriptions, deliberately avoiding any webpage text. Our first approach combines an image and its description into a single comprehensive embedding; our second submission adds one more step on top of that, comparing the retrieved candidates to images generated using the arguments themselves as prompts.</p>
      </abstract>
      <kwd-group>
        <kwd>Image Generation</kwd>
        <kwd>CLIP</kwd>
        <kwd>Image Retrieval</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <sec id="sec-2-1">
        <title>2.1. Related work</title>
        <p>
          The defining paper for retrieving images for arguments is by Kiesel et al. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. They apply natural language processing techniques to the web text surrounding the images, building expanded keyword
searches over that text in an attempt to track the stance of an argument. At Touché 2023, team
Picard [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] constructed a similar solution: one of their submissions used image generation with
Stable Diffusion. The authors prompt the image generator with the arguments from the competition
to create benchmark images, then use CLIP to find the competition images most similar to those benchmarks.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. CLIP</title>
        <p>
          CLIP [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] stands for Contrastive Language-Image Pretraining. It is a model developed by OpenAI that
embeds images and texts into the same vector space by training on images paired with their captions.
It reduces the dimensionality of text and images while preserving semantic similarity, so that
relevant results can be retrieved from one modality given a query in the other.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Stable Diffusion</title>
        <p>Stable Diffusion [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] is a neural network model that is capable of producing images given a prompt.
By decomposing the image formation process into a sequential application of denoising autoencoders,
Stable Diffusion can achieve state-of-the-art synthesis results on image data.</p>
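        <p>A minimal sketch of prompt-to-image generation with a latent diffusion model, assuming the Hugging Face diffusers library (the checkpoint name and prompt are only examples, not a description of the official pipeline):</p>
        <preformat>
# Minimal sketch: text-to-image generation with a latent diffusion model.
# Assumes the Hugging Face "diffusers" library; checkpoint and prompt are examples.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base")
image = pipe("a solar farm in a green field").images[0]
image.save("generated.png")
        </preformat>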
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. System Overview</title>
      <sec id="sec-3-1">
        <title>3.1. Embedding Pipeline</title>
        <p>Both submissions made use of CLIP from OpenAI. The competition supplied the images and their
corresponding image descriptions obtained from LLaVA. Both modalities were embedded using CLIP and
combined in a 70-30 ratio of image to text. These combined embeddings were stored in a ChromaDB vector
database for later retrieval.</p>
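        <p>A hedged sketch of this fusion and storage step, assuming the Hugging Face transformers CLIP implementation and the chromadb Python client; the collection name, image id, path, and caption are illustrative assumptions rather than the exact code used:</p>
        <preformat>
# Sketch of the 70/30 image-text fusion and ChromaDB storage.
# Weights follow the paper; collection name, ids, and paths are assumptions.
import chromadb
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

client = chromadb.Client()
collection = client.create_collection("touche_images")  # hypothetical name

def embed_image_and_caption(image_path, caption):
    image = Image.open(image_path)
    inputs = processor(text=[caption], images=image, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        img = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    # Normalize each modality, then mix image and text at a 70/30 ratio
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (0.7 * img + 0.3 * txt).squeeze(0).tolist()

collection.add(
    ids=["I001"],  # hypothetical image id
    embeddings=[embed_image_and_caption("I001.png", "A wind turbine at sunset")],
)
        </preformat>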
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Retrieval</title>
        <p>The provided arguments (only the arguments; no premises or claims were used) served as queries
against the vector database. The arguments were embedded using CLIP to keep the same dimensionality
as the combined image-text embeddings of the images. Each argument was compared pairwise
against every image in the database via cosine similarity, and the top 10 matches were kept for our initial submission.</p>
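        <p>A sketch of the retrieval step, under the same assumptions as the embedding sketch above (the query text and collection name are hypothetical, and the collection is assumed to use cosine distance):</p>
        <preformat>
# Sketch: embed an argument with CLIP and query the vector store for the top 10.
# Assumes the collection built in the embedding-pipeline sketch.
import chromadb
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
collection = chromadb.Client().get_collection("touche_images")  # hypothetical

def embed_argument(argument):
    inputs = processor(text=[argument], return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])
    emb = emb / emb.norm(dim=-1, keepdim=True)
    return emb.squeeze(0).tolist()

results = collection.query(
    query_embeddings=[embed_argument("We should subsidize public transport")],
    n_results=10,  # top 10 kept for the initial submission
)
print(results["ids"][0])
        </preformat>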
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Image generation</title>
        <p>For the image generation submission, images were generated for each topic from a set of TinyLlama-generated
supporting or detracting arguments, depending on the stance. For example, if the stance was
pro, the prompt instructed the model to provide supporting claims; if it was anti, it prompted for
detracting claims. The number of arguments generated varied from 3 to 7. The prompt format for a
supporting generation is shown in Figure 1.
</p>
        <fig id="fig1">
          <label>Figure 1</label>
          <caption>
            <p>Prompt format for a supporting generation.</p>
          </caption>
          <preformat>
{
    "role": "system",
    "content": "You are a student trained to think critically for each
                claim break it down into several subclaims",
},
{
    "role": "user",
    "content": f"Create some numbered prompts to give to a machine to
                 create images that support the claim: '{prompt}'"
}
          </preformat>
        </fig>
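        <p>A hedged sketch of how such a chat prompt could be sent to a TinyLlama chat model with the transformers library; the exact checkpoint and the example claim are assumptions, since the run only specifies TinyLlama:</p>
        <preformat>
# Sketch: generate numbered sub-claims with a TinyLlama chat model.
# The checkpoint name and the example claim are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

claim = "We should expand public transport"  # hypothetical topic claim
messages = [
    {"role": "system",
     "content": "You are a student trained to think critically for each "
                "claim break it down into several subclaims"},
    {"role": "user",
     "content": f"Create some numbered prompts to give to a machine to "
                f"create images that support the claim: '{claim}'"},
]

# Render the chat messages with the model's chat template, then generate
prompt_text = tokenizer.apply_chat_template(messages, tokenize=False,
                                            add_generation_prompt=True)
inputs = tokenizer(prompt_text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
        </preformat>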
        <p>These TinyLlama-generated supporting/detracting arguments were then fed into
stable-diffusion-2-1-base for image generation. The generated images were again embedded with CLIP and compared
to the top 40 images retrieved for a given argument using the method described in the prior section.
Because the number of generated images varied per argument, we scored each crawled image by its
average similarity across all generated images and took the highest-scoring images
as the most relevant.</p>
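        <p>A sketch of this re-ranking step, assuming the diffusers and transformers libraries; the sub-claims, candidate image paths, and model checkpoints are illustrative assumptions:</p>
        <preformat>
# Sketch: re-rank retrieved images by average CLIP similarity to generated images.
# Sub-claims, candidate paths, and checkpoints are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

sd = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1-base")
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_embed(image):
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        emb = clip_model.get_image_features(pixel_values=inputs["pixel_values"])
    return emb / emb.norm(dim=-1, keepdim=True)

# Sub-claims produced for one argument (hypothetical examples)
claims = ["solar panels on every roof", "clean air in a car-free city"]
generated = [clip_embed(sd(c).images[0]) for c in claims]

# Top-40 retrieved image paths for the same argument (hypothetical)
candidates = ["I001.png", "I002.png"]

scores = {}
for path in candidates:
    emb = clip_embed(Image.open(path))
    sims = [torch.nn.functional.cosine_similarity(emb, g).item() for g in generated]
    scores[path] = sum(sims) / len(sims)  # average over all generated images

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
        </preformat>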
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>Our approaches did not beat the BM25 and SBERT baseline. We do see that the added filter of comparing
the top results to images generated from the arguments does increase the accuracy of the model. It
appears that images and their descriptions alone are not enough to outperform the text-based baselines.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>The image generation approach worked best; however, neither submission beat the baseline. Future
directions of work include re-ranking LLaVA Visual Question Answering generations, i.e., asking LLaVA to
describe the picture in relation to the argument in question, and utilizing BM25 and webpage text to
identify keywords that might indicate the relevance of the image.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The Data Science at Georgia Tech Club.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kiesel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Reichenbach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <article-title>Image Retrieval for Arguments Using Stance-Aware Query Expansion</article-title>
          , in: K. Al-Khatib, Y. Hou, M. Stede (Eds.), 8th Workshop on Argument Mining (ArgMining
          <year>2021</year>
          ) at EMNLP, Association for Computational Linguistics,
          <year>2021</year>
          , pp.
          <fpage>36</fpage>
          -
          <lpage>45</lpage>
          . doi:10.18653/v1/2021.argmining-1.4.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Moebius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Enderling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. T.</given-names>
            <surname>Bachinger</surname>
          </string-name>
          , Jean-Luc Picard at Touché 2023:
          <article-title>Comparing image generation, stance detection and feature matching for image retrieval for arguments</article-title>
          ,
          <source>arXiv preprint arXiv:2307.09172</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Radford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hallacy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ramesh</surname>
          </string-name>
          , G. Goh,
          <string-name>
            <given-names>S.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sastry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Askell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mishkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Clark</surname>
          </string-name>
          , et al.,
          <article-title>Learning transferable visual models from natural language supervision</article-title>
          ,
          <source>in: International conference on machine learning, PMLR</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>8748</fpage>
          -
          <lpage>8763</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Rombach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Blattmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lorenz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Esser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ommer</surname>
          </string-name>
          ,
          <article-title>High-resolution image synthesis with latent diffusion models</article-title>
          , in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,
          <year>2022</year>
          , pp.
          <fpage>10684</fpage>
          -
          <lpage>10695</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>