<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CIKM MMSR, Oct</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1145/3571884.3597144</article-id>
      <title-group>
        <article-title>Designing Interfaces for Multimodal Vector Search Applications⋆</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Owen P. Elliott</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tom Hamer</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jesse Clark</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>15 Kearny St</institution>
          ,
          <addr-line>San Francisco, CA 94108</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <addr-line>276 Flinders Street, Melbourne, VIC 3000</addr-line>
          ,
          <country country="AU">Australia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>25</volume>
      <issue>2024</issue>
      <fpage>8748</fpage>
      <lpage>8763</lpage>
      <abstract>
        <p>Multimodal vector search offers a new paradigm for information retrieval by exposing functionality that is not possible in traditional lexical search engines. While multimodal vector search can be treated as a drop-in replacement for these traditional systems, the experience can be significantly enhanced by leveraging its unique capabilities. Central to any information retrieval system is a user who expresses an information need. Traditional user interfaces with a single search bar allow users to interact with lexical search systems effectively, but they are not necessarily optimal for multimodal vector search. In this paper we explore novel capabilities of multimodal vector search applications utilising CLIP models and present implementations and design patterns which better allow users to express their information needs and effectively interact with these systems in an information retrieval context.</p>
      </abstract>
      <kwd-group>
        <kwd>Multimodal</kwd>
        <kwd>CLIP</kwd>
        <kwd>Information Retrieval</kwd>
        <kwd>Vector Search</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Different search backends lead to differing search experiences. This necessitates considered
implementation of interaction methods. Modern multimodal search applications leverage artificial intelligence
(AI) models capable of producing representations which unify different modalities. While a multimodal
vector search system can be treated as a drop-in alternative to a traditional keyword search engine,
merely using it as a direct replacement doesn’t exploit its full potential. The fundamental components
of a standard search interface have remained largely unchanged since early research into interfaces for
statistical retrieval systems, such as inverted indices with TF-IDF[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] or BM25[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Emerging areas, such
as generative AI, have driven the development of new Human Computer Interaction (HCI) paradigms.
Chatbot agents such as OpenAI’s ChatGPT[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] have exposed users to new ways of seeking information
with natural language[4, 5]. Multimodal vector search systems offer a similar green field for HCI
research.
      </p>
      <p>In this paper we explore techniques and interface elements for multimodal vector search in online
image search applications. In particular, we focus on multimodal systems built with CLIP models[6];
however, much of the content generalizes to other multimodal models (such as ImageBind[7] or
LanguageBind[8]). We provide visual examples of UI implementations and define the concepts of
query refinement, semantic filtering, contextualisation, and random recommendation walks as they
pertain to multimodal information retrieval. We aim to provide practical implementations whose complexity
can be hidden from the user, making them suitable for non-expert users.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Properties of Multimodal Models and Representations</title>
      <p>To develop effective methods of interaction for multimodal vector search applications, it is essential to
understand the properties of multimodal models and representations. In this section, we discuss the
properties of CLIP models and vector representations for multimodal search.</p>
      <sec id="sec-2-1">
        <title>2.1. Properties of CLIP Models</title>
        <p>CLIP models are a class of models trained to encode images and text into a shared embedding space[6].
CLIP models are trained on large datasets of text and image pairs[9] to maximize the cosine similarity
between matching image-text pairs and minimize the similarity between non-matching pairs, typically
done with in-batch negatives. This allows the model to be used for a variety of tasks such as
zero-shot classification and retrieval.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Vector Representations for Multimodal Search</title>
        <p>Multimodal models, such as CLIP, create vectors for each modality that exist within a shared space.
Multiple vectors of one or more modalities can be combined into a single representation via weighted
interpolations, such as linear interpolation (lerp) or spherical linear interpolation (slerp)[10].</p>
        <p>Given a set of n unit vectors V = {v_1, v_2, . . . , v_n | ‖v_i‖ = 1} in R^d, and their corresponding weights
W = {w_1, w_2, . . . , w_n | w_i ∈ R}, we can define lerp and slerp as follows:</p>
        <p>Linear Interpolation (lerp):
v_lerp = Σ_{i=1}^{n} w_i v_i
Then, normalize the result to obtain the final vector:
v̂_lerp = v_lerp / ‖v_lerp‖</p>
        <p>Spherical Linear Interpolation (slerp):</p>
        <p>Spherical linear interpolation does not natively apply to n-vector combinations, so an iterative approach
can be used to merge vectors hierarchically. The algorithm for hierarchical slerp is presented in
Algorithm 1.</p>
        <p>Algorithm 1 Hierarchical slerp Interpolation
Require: Set of unit vectors V = {v_1, v_2, . . . , v_n} and weights W = {w_1, w_2, . . . , w_n}
Ensure: Interpolated vector v_slerp
1: Initialize V^(0) ← V, W^(0) ← W
2: while length of V^(k) &gt; 1 do
3:   Initialize V^(k+1) ← [], W^(k+1) ← []
4:   for i = 1 to ⌊length of V^(k)/2⌋ do
5:     Compute weight sum: w_sum ← w_{2i−1}^(k) + w_{2i}^(k)
6:     Compute interpolation parameter: t ← w_{2i}^(k) / w_sum
7:     Compute interpolated vector: v_i^(k+1) ← slerp(v_{2i−1}^(k), v_{2i}^(k), t)
8:     Update weights: w_i^(k+1) ← w_sum / 2
9:     Append v_i^(k+1) to V^(k+1) and w_i^(k+1) to W^(k+1)
10:  end for
11:  if length of V^(k) is odd then
12:    Append the last vector and weight unchanged to V^(k+1) and W^(k+1)
13:  end if
14:  Update V^(k) ← V^(k+1), W^(k) ← W^(k+1)
15: end while
16: return V^(k)[0]</p>
        <p>where the function slerp(v_0, v_1, t) is defined as follows:
slerp(v_0, v_1, t) = (sin((1 − t)Ω) / sin Ω) v_0 + (sin(tΩ) / sin Ω) v_1
and Ω = arccos(v_0 · v_1).</p>
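        <p>The following is a minimal sketch of lerp and hierarchical slerp in Python with NumPy, assuming all input vectors are unit normalized; function and variable names are illustrative rather than taken from any particular library.</p>
        <preformat>
import numpy as np

def lerp(vectors, weights):
    # Weighted sum of vectors followed by unit normalization.
    v = np.sum([w * x for w, x in zip(weights, vectors)], axis=0)
    return v / np.linalg.norm(v)

def slerp(v0, v1, t):
    # Spherical interpolation between two unit vectors.
    omega = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return v0  # vectors are (nearly) identical
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

def hierarchical_slerp(vectors, weights):
    # Pairwise slerp merging until a single vector remains (Algorithm 1).
    V, W = list(vectors), list(weights)
    while len(V) &gt; 1:
        next_V, next_W = [], []
        for i in range(0, len(V) - 1, 2):
            w_sum = W[i] + W[i + 1]
            t = W[i + 1] / w_sum
            next_V.append(slerp(V[i], V[i + 1], t))
            next_W.append(w_sum / 2)
        if len(V) % 2 == 1:  # carry the odd vector and weight forward unchanged
            next_V.append(V[-1])
            next_W.append(W[-1])
        V, W = next_V, next_W
    return V[0]
        </preformat>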
        <p>Combined representations via lerp and slerp merge understanding from multiple fields and modalities
into a single unit-normalized vector which can be compared to other merged vectors or individual vectors
produced by the same model. This property arises naturally with CLIP models; however, techniques
such as Generalized Contrastive Learning (GCL) can also be used to directly optimise for this[11].</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. User Interface Elements and Implementations</title>
      <p>In this section, we present user interface elements and their implementations for multimodal vector
search applications. These elements are inspired by the nature of CLIP models and properties of
multimodal representations discussed in Section 2.1 and Section 2.2.</p>
      <sec id="sec-3-1">
        <title>3.1. Query Refinement</title>
        <p>Query refinement is not new in the field of information retrieval; however, multimodal vector
search enables novel and effective implementations. By merging the query with additional queries,
users can provide more context to the search engine, which can lead to more relevant results. This can
be done iteratively by interpolating additional query vectors with positive or negative weights. Vectors
for queries can be merged with approaches such as lerp or slerp as discussed in Section 2.2. Many
existing search UIs treat search as a single-shot process, similar to what is done in information retrieval
benchmarking; in reality, this is not reflective of real-world scenarios. Users interact with retrieval
systems in a search session where multiple queries are executed[12, 13]; iterative refinement ties into
this concept and bears resemblance to other models of information retrieval such as berrypicking[14].</p>
        <p>One way in which we can present this functionality is through additional input fields which enable
query refinement with natural language, as shown in Figure 1. Each input corresponds to a term which is
vectorised and combined via linear interpolation with weights: "more of this" query terms are assigned
a positive weight and "less of this" query terms are assigned a negative weight.</p>
        <p>Formally, for a CLIP model M with text encoder E_txt, we can create refined queries from multiple
queries as follows:
q_refined = N[(E_txt("dining chair") · 1.0) + (E_txt("scandinavian design") · 0.6) + (E_txt("upholstery") · −1.1)]
where N[v] denotes the unit-normalized version of the vector v. This vector q_refined becomes the
query vector for the search engine. The weights are abstracted from the user, allowing for iterative
refinement of results with natural language as shown in Figure 2.</p>
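        <p>As a minimal sketch, the refined query above could be computed with the lerp helper from Section 2.2, assuming a function encode_text that returns a unit-normalized CLIP text embedding (the name is illustrative; any CLIP implementation’s text encoder can serve this role):</p>
        <preformat>
# Query refinement via weighted interpolation of CLIP text embeddings.
# encode_text() is assumed to return a unit-normalized embedding for a string.
terms = [("dining chair", 1.0), ("scandinavian design", 0.6), ("upholstery", -1.1)]
q_refined = lerp([encode_text(t) for t, _ in terms], [w for _, w in terms])
# q_refined is then used as the query vector for nearest neighbour search.
        </preformat>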
        <sec id="sec-3-1-1">
          <title>3.1.1. Removing Low Quality Items</title>
          <p>Query refinement can also be applied in marketplaces with large amounts of user-generated content,
where the quality of product listings can be dubious. By merging a query with a negatively weighted
query term concerning quality, we can steer the search away from items which are relevant to the query
but whose visual component indicates a lack of quality. Queries can be merged with vectors such as
(E_txt("low quality, low res, blurry, jpeg artefacts") · −1.1). In a marketplace setting this can be used to
encourage higher quality listings with more professional or appealing photos, as shown in Figure 3.</p>
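          <p>Under the same assumptions as the previous sketch (the example query is illustrative), the negative quality prompt is simply another weighted term:</p>
          <preformat>
# Discourage low-quality listings by adding a negatively weighted quality prompt.
quality_penalty = "low quality, low res, blurry, jpeg artefacts"
q = lerp([encode_text("vintage armchair"), encode_text(quality_penalty)], [1.0, -1.1])
          </preformat>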
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Query Prompting and Expansion</title>
        <p>In Section 2.1 we referred to how CLIP models are trained, providing an intuition as to the nature of
the text that is in domain for these models. In search, we often encounter short queries of one or two
words which do not provide the level of specificity that would typically be considered in domain for
CLIP models given the captions they are trained with; this is similar to the problem of using CLIP for
zero-shot classification. When performing zero-shot classification with CLIP, dataset labels are typically
a single word, which does not align with the text captions seen in the model’s training data. To work
around this, labels are prefixed with additional text to convert them into captions[15]. A simple prefix for
class labels in zero-shot classification is "a photo of a" or "an image of a"[16].</p>
        <p>We draw influence from CLIP zero-shot classification implementations and present "semantic filtering"
as an approach to align queries with in-domain captions and create query expansions with minimal user
input. Semantic filtering alters the semantic representation of a query to control results in a similar
manner to traditional filtering, without the need for labelled metadata. It provides a structured way to
perform query expansions[17, 18, 19] on short queries without requiring an expert user to design a
verbose query. This approach also draws inspiration from more modern prompt engineering strategies
used with Large Language Models (LLMs)[20]. The goal is to expand the user-submitted query with
additional text within the model’s context window. For example, to semantically filter to a boho style,
a query could be expanded with "A bohemian (boho) style image of a &lt;QUERY&gt;, rich in patterns,
colors, and textures" where &lt;QUERY&gt; is the user-submitted query.</p>
        <p>The process of prompt design can be abstracted from the user; we can retain familiar UI elements
while altering their backend implementation to expose new functionality. This can be done
by providing a set of predefined prompts which can be selected by the user to modify the query. A
traditional selector, as shown in Figure 4, is a suitable element to expose this functionality.</p>
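        <p>A minimal sketch of predefined semantic filters as prompt templates applied server-side before encoding follows; the boho template is taken from the example above, while the other entries and the function name are illustrative assumptions.</p>
        <preformat>
# Predefined semantic filters applied as prompt templates before encoding.
SEMANTIC_FILTERS = {
    "none": "{query}",
    "boho": ("A bohemian (boho) style image of a {query}, "
             "rich in patterns, colors, and textures"),
    "minimalist": "A minimalist style image of a {query}, clean lines, neutral palette",
}

def semantic_filter_query(query, style="none"):
    # Expand the raw query into an in-domain caption, then encode it.
    return encode_text(SEMANTIC_FILTERS[style].format(query=query))

q = semantic_filter_query("armchair", style="boho")
        </preformat>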
        <sec id="sec-3-2-1">
          <title>3.2.1. Realtime LLM Assisted Query Expansion</title>
          <p>Semantic filtering can also be performed online with the inclusion of vision capable LLMs. Using direct
or indirect user feedback on search results with a visual component we can prompt LLMs to extract
query expansion terms to better align a user’s search term with their desired information. This is
useful when a user may not know the best way to describe a visual style they are looking for or if they
are unaware of the semantic capabilities of the underlying search engine. The process is depicted in
Figure 5.</p>
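          <p>A hedged sketch of this flow is shown below; vision_llm is a hypothetical helper standing in for any vision-capable LLM API, and the prompt wording is illustrative.</p>
          <preformat>
# Realtime LLM-assisted query expansion (sketch).
# vision_llm() is a hypothetical helper that sends images plus a prompt to a
# vision-capable LLM and returns its text response.
def expand_query_with_feedback(query, liked_image_urls):
    prompt = (
        "The user searched for '" + query + "' and liked the attached results. "
        "Reply with a short phrase describing the visual style they appear to "
        "prefer, suitable for appending to the search query."
    )
    style_terms = vision_llm(prompt, images=liked_image_urls)
    return encode_text(query + ", " + style_terms)
          </preformat>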
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Realtime Personalisation and Contextualised Search</title>
        <p>Taking influence from the field of relevance feedback[21, 22], vectors of existing documents in the index
can be harnessed as query expansion terms in realtime, steering search results towards analogous items.
Contextualisation can be broadly categorised into two types:
• Intra-category Contextualisation: These contextualise with items from the same category. For
instance, recommending another watch based on a user’s preference for a specific watch model.
• Inter-category Contextualisation: Here, contextualisations span different categories. An
example might be tailoring search results for "couch" by a user’s affinity for certain rug patterns
or style of coffee table.</p>
        <p>Intra-category contextualisation is the simpler of the two cases and can be achieved by combining a
query with information from documents from its own result set, a well established pattern in relevance
feedback. Inter-category contextualisation is more challenging; it is not something that is easily done
with lexical search implementations, however with multimodal embedding models, information can be
combined across categories. These contextualisations can be implemented with explicit, implicit, or
pseudo relevance feedback.</p>
        <p>Intra-category contextualisation can be achieved by merging the query vector with one or more
results from the existing result set, with the original query retaining the majority of the weight, as shown in
Figure 6.</p>
        <p>The ability of CLIP models to capture complex inter-category relationships can be applied to
disconnected pieces of information. In Figure 7 we show that text queries can be contextualised with
cross-modal information; in particular, a search for a backpack can be tailored with an image of a
forest.</p>
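        <p>Both forms of contextualisation reduce to weighted interpolation. The sketch below assumes the lerp and encode_text helpers from earlier, plus hypothetical encode_image and doc_vector helpers for CLIP image embeddings and stored document embeddings; the weights are illustrative.</p>
        <preformat>
# Contextualisation as weighted interpolation: the query keeps most of the weight.

# Intra-category: bias a watch query towards a previously viewed watch listing.
q_intra = lerp([encode_text("watch"), doc_vector("viewed_watch_id")], [0.8, 0.2])

# Inter-category (cross-modal): tailor a backpack search with a forest image.
q_inter = lerp([encode_text("backpack"), encode_image("forest.jpg")], [0.75, 0.25])
        </preformat>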
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Recommendations as Search</title>
        <p>Recommendations are an application of search. To formulate recommendations as a search problem we
consider a query vector q in R^d which exists in the same embedding space as a corpus of vectors;
where in search q would be derived from a user-submitted query, in recommendations this vector is
derived from some other source, or combination of sources, which orients the vector towards suitable
item recommendations. This formulation can be applied to multimodal search applications with models
like CLIP; the high-dimensional embedding space is sufficiently expressive, with enough degrees of
freedom, to create these representations. Formulating recommendations as a search problem is trivial
for similar items but raises challenges for diversification of recommended items. We present two
approaches to tackle this issue:
• Vector Ensembling: Merging vectors for disparate items to ensemble content.
• Random Recommendation Walks: Traversal of the vector space for adjacent items to explore
diverse but related content.</p>
        <sec id="sec-3-4-1">
          <title>3.4.1. Vector Ensembling</title>
          <p>A recommendation vector can be constructed from document vectors, pieces of user information, or
any combination of the two. Combination can be done with techniques such as lerp or
slerp as discussed in Section 2.2. Interpolation between vectors of the same class (e.g. all document
embeddings) with equal weights seeks a middle point between their representations, which provides
an ensembling effect where distinct classes of items can be retrieved by a single vector with some
shared qualities. Using slerp preserves the geometric relationship between constituent vectors on the
hypersphere, calculated as v_ensembled = HierarchicalSlerp(V, W) where ∀w_i ∈ W, w_i = 1. This is useful
in online recommendation applications where interactions such as clicks or add-to-carts (ATCs) can be
used to build a dynamic list of products to ensemble when generating recommendations. An example
of this ensembling effect is shown in Figure 8.</p>
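          <p>As a minimal sketch under the assumptions above, using the hierarchical_slerp helper from Section 2.2 and the hypothetical doc_vector lookup (the product ids are illustrative):</p>
          <preformat>
# Ensemble recently interacted-with products into a single recommendation vector.
interacted = ["chair_123", "rug_456", "lamp_789"]  # e.g. clicks / add-to-carts
vectors = [doc_vector(pid) for pid in interacted]
v_ensembled = hierarchical_slerp(vectors, [1.0] * len(vectors))
# v_ensembled is then used as the query vector to retrieve recommendations.
          </preformat>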
          <p>Utilising existing document vectors for the search means that recommendations can be generated in
realtime with no cold-start problem for new products or users. Information can be gathered from a
session on the fly without prior knowledge about the user[23].</p>
        </sec>
        <sec id="sec-3-4-2">
          <title>3.4.2. Random Recommendation Walks</title>
          <p>To diversify recommendations we must deviate from the immediate neighbourhood of our query
vector without disregarding relevance. Random walks can achieve this by finding neighbours of our
initial recommendation vector, selecting neighbours, and exploring outwards from these neighbours
(using their embeddings as queries). We present a process for performing random recommendation
walks in Algorithm 2 and Algorithm 3.</p>
          <p>Algorithm 2 Generate Recommendation Tree with a Random Walk
Require: v ∈ R^d, L: number of layers, C: maximum children per node, k: nearest neighbours
Ensure: T: Tree structure with children up to L layers deep
1: Initialize T with (v, {}) as the vector and an empty list for children
2: Initialize Visited set with {v}
3: Initialize Frontier as a queue containing T
4: for ℓ = 1 to L − 1 do
5:   Initialize NextFrontier as an empty queue
6:   while Frontier ≠ ∅ do
7:     Dequeue node from Frontier
8:     Children ← GetLayer(node, C, k, Visited)
9:     for each child ∈ Children do
10:      Enqueue child into NextFrontier
11:    end for
12:   end while
13:   Frontier ← NextFrontier
14: end for
15: return T</p>
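          <p>The following is a minimal Python sketch of Algorithms 2 and 3, assuming a hypothetical knn_search(vector, k) helper that returns (doc_id, vector) pairs from the vector index; the data structures and names are illustrative.</p>
          <preformat>
import random

def get_layer(node, max_children, k, visited):
    # Algorithm 3 (sketch): sample unvisited neighbours of a node as its children.
    neighbours = [(d, v) for d, v in knn_search(node["vector"], k) if d not in visited]
    children = random.sample(neighbours, min(max_children, len(neighbours)))
    for doc_id, vec in children:
        visited.add(doc_id)
        node["children"].append({"id": doc_id, "vector": vec, "children": []})
    return node["children"]

def recommendation_walk(v, layers, max_children, k):
    # Algorithm 2 (sketch): breadth-first random walk building a recommendation tree.
    tree = {"id": None, "vector": v, "children": []}
    visited, frontier = set(), [tree]
    for _ in range(layers - 1):
        next_frontier = []
        for node in frontier:
            next_frontier.extend(get_layer(node, max_children, k, visited))
        frontier = next_frontier
    return tree
          </preformat>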
          <p>In practice, this output can be represented in a variety of formats. A typical grid or carousel layout can
be used to display the results of the random recommendation walk. Another more tailored visualisation
is to retain the tree structure created by the traversal as shown in Figure 9. These trees can be interactive
to enable exploratory search and discovery.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>In this paper, we have explored the unique capabilities and enhanced user experiences offered by
multimodal vector search systems, particularly those leveraging CLIP models. By understanding the
properties of these models and their vector representations, we proposed novel user interface elements
that can effectively facilitate the expression of information needs in a multimodal context. Techniques
such as query refinement, semantic filtering, contextualisation, and recommendations offer the potential
to improve search relevance and user satisfaction. The implementation of linear interpolation and
spherical linear interpolation with hierarchical slerp provides robust methods for combining vectors
across different modalities. This allows for more nuanced and contextually relevant search results,
demonstrating the unique properties of multimodal vector search when compared to traditional lexical
search systems. Additionally, the introduction of vision-capable LLMs for realtime query expansion
further extends how multiple modalities can be leveraged in search experiences.</p>
      <p>While our study focuses on CLIP models, the principles and techniques described are broadly
applicable to other multimodal models such as ImageBind and LanguageBind. The proposed user
interface elements and implementations can be adopted in a variety of multimodal search applications.
By presenting these multimodal search capabilities and their implementations, we hope to further
understanding and ideation around how users can be enabled to describe their information need. Our
goal is to deliver more intuitive and effective search experiences for users.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>Thanks to Farshid Zavareh for the implementation of the hierarchical slerp algorithm (Algorithm 1).</p>
      <p>[12] J. Teevan, C. Alvarado, M. S. Ackerman, D. R. Karger, The perfect search engine is not enough: a study of orienteering behavior in directed search, in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’04, Association for Computing Machinery, New York, NY, USA, 2004, pp. 415–422. URL: https://doi.org/10.1145/985692.985745. doi:10.1145/985692.985745.
[13] R. Jones, K. L. Klinkner, Beyond the session timeout: automatic hierarchical segmentation of search topics in query logs, in: Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM ’08, Association for Computing Machinery, New York, NY, USA, 2008, pp. 699–708. URL: https://doi.org/10.1145/1458082.1458176. doi:10.1145/1458082.1458176.
[14] M. J. Bates, The design of browsing and berrypicking techniques for the online search interface, Online Review 13 (1989) 407–424.
[15] O. Saha, G. Van Horn, S. Maji, Improved zero-shot classification by adapting VLMs with text descriptions, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 17542–17552.
[16] OpenAI, Prompt engineering for ImageNet, 2024. URL: https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb, accessed: 2024-08-06.
[17] E. M. Voorhees, Query expansion using lexical-semantic relations, in: SIGIR ’94: Proceedings of the Seventeenth Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, organised by Dublin City University, Springer, 1994, pp. 61–69.
[18] E. N. Efthimiadis, Query expansion, Annual Review of Information Science and Technology (ARIST) 31 (1996) 121–187.
[19] J. Xu, W. B. Croft, Improving the effectiveness of information retrieval with local context analysis, ACM Trans. Inf. Syst. 18 (2000) 79–112. URL: https://doi.org/10.1145/333135.333138. doi:10.1145/333135.333138.
[20] G. Marvin, N. Hellen, D. Jjingo, J. Nakatumba-Nabende, Prompt engineering in large language models, in: International Conference on Data Intelligence and Cognitive Informatics, Springer, 2023, pp. 387–402.
[21] G. Salton, C. Buckley, Improving retrieval performance by relevance feedback, Journal of the American Society for Information Science 41 (1990) 288–297.
[22] R. W. White, G. Marchionini, Examining the effectiveness of real-time query expansion, Information Processing &amp; Management 43 (2007) 685–704.
[23] H. Cui, J.-R. Wen, J.-Y. Nie, W.-Y. Ma, Query expansion by mining user logs, IEEE Transactions on Knowledge and Data Engineering 15 (2003) 829–839. doi:10.1109/TKDE.2003.1209002.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Sparck Jones</surname>
          </string-name>
          ,
          <article-title>A statistical interpretation of term specificity and its application in retrieval</article-title>
          ,
          <source>Journal of documentation 28</source>
          (
          <year>1972</year>
          )
          <fpage>11</fpage>
          -
          <lpage>21</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Baeza-Yates</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ribeiro-Neto</surname>
          </string-name>
          , et al.,
          <source>Modern information retrieval</source>
          , volume
          <volume>463</volume>
          , ACM press New York,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>OpenAI</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Achiam</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Adler</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Agarwal</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Ahmad</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          <string-name>
            <surname>Akkaya</surname>
            ,
            <given-names>F. L.</given-names>
          </string-name>
          <string-name>
            <surname>Aleman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Almeida</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Altenschmidt</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Altman</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Anadkat</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Avila</surname>
            , I. Babuschkin,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Balaji</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Balcom</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Baltescu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Bao</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Bavarian</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Belgum</surname>
            ,
            <given-names>I. Bello</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Berdine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Bernadett-Shapiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Berner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bogdonof</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Boiko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Boyd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-L.</given-names>
            <surname>Brakman</surname>
          </string-name>
          , G. Brockman,
          <string-name>
            <given-names>T.</given-names>
            <surname>Brooks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brundage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Button</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Cai</surname>
          </string-name>
          , R. Campbell,
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>