<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Simple Regularization for Aligning Embedding Spaces for Cross-brand Recommendation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ioannis Partalas</string-name>
          <aff>Expedia Group, Geneva, Switzerland</aff>
        </contrib>
      </contrib-group>
      <abstract>
        <p>In online platforms it is often the case that multiple brands exist under the same group, which may target different customer profiles or operate in different domains. For example, in the hospitality domain, on-line travel platforms may have multiple brands which have either different traveler profiles or are more relevant in a local context. In this context, learning embeddings for hotels that can be leveraged in recommendation tasks across multiple brands requires a common embedding space, which can be induced using alignment approaches. At the same time, one needs to ensure that this common embedding space does not degrade the performance in any of the brands.</p>
      </abstract>
      <kwd-group>
        <kwd>Cross-brand recommendations</kwd>
        <kwd>Embeddings</kwd>
        <kwd>Hotel embeddings</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Online platforms often have multiple brands, for the same line of business, under the same group, which
may target different customer profiles. As an example, in the hospitality and retail domains the on-line
platforms may have different brands that either have different profiles of customers or are more
relevant locally. A main task in retail platforms is to recommend products to customers, which requires
learning an embedding space that captures their salient attributes and hence enables similarity comparisons
that can be leveraged by recommendation systems.</p>
      <p>
        In recent years, approaches that learn product embeddings from the interactions of customers
with the on-line platform have been proposed [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] as well as approaches tailored to the hospitality
domain [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. These approaches leverage the seminal word2vec model [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] in order to learn the
embedding space by treating the clicked items as tokens in a sentence. While it is common to learn
such embeddings in a single domain/brand, in the context of electronic commerce we would like to be
able to leverage such embeddings across different domains/brands. As mentioned previously, one can
learn hotel embeddings on a specific brand and leverage them in another brand in order to bootstrap or
improve the hotel embeddings in the latter. Subsequently, these hotel representations can be used
in tasks like personalized recommendations.
      </p>
      <p>
        To do so, one would need to align the embedding spaces of the different brands and use this aligned
space to capture the intent of the users while searching on the on-line platform. In a very recent
work, Bianchi et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] study the alignment of product embeddings to enable zero-shot learning in a
cross-shop scenario. Their setting is more general than ours, as in our case the multiple brands belong to the same
domain, and usually we dispose of a partial overlap of the inventories across brand domains.
      </p>
      <p>Workshop on Recommenders in Tourism (RecTour 2025), September 22, 2025, co-located with the 19th ACM Conference on Recommender Systems, Prague, Czech Republic.</p>
      <p>In this work we propose to align embeddings from different brands using a simple regularization
approach from domain adaptation. We build upon the hotel2vec model [<xref ref-type="bibr" rid="ref4">4</xref>] for learning hotel embeddings
and extend it to accommodate alignment of embedding spaces. Our approach can also be thought of as
a transfer learning one, as we are able to bootstrap the learning procedure of hotel2vec in a target brand
using the hotel2vec embeddings from a source brand.</p>
      <p>The main contributions of this paper are the following: i) we propose a simple yet effective domain
adaptation approach that adds a regularization term to the loss function of hotel2vec, ii) we present
empirical results on the task of next-hotel prediction for two brands, and iii) we also implement an
alignment method borrowed from the task of aligning cross-lingual embeddings. We show empirically
that, to transfer such a method to another domain, one should be careful about its particularities.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        In recommendation systems, learning embeddings for users and items that capture their semantics is a
critical part. In the domain of electronic commerce, approaches such as prod2vec have been proposed in
order to learn representations of products [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], [8], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], which leverage the skip-gram model [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
the authors propose to learn embeddings for YouTube videos by combining multiple features, which
are used for candidate generation and ranking. Other approaches have also been proposed that include
metadata [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] or try to capture different aspects of the product from different sources (clickstream data,
text and images) [9].
      </p>
      <p>Aligning embedding spaces is a topic that has been extensively studied in the NLP domain, and
specifically in the context of cross-lingual embeddings. A simple and efficient approach to align two
embedding spaces of different languages is to learn a linear projection [10]. In [11] the authors employ
an iterative approach starting from seed mappings of words (source language to target language) and
learn the linear projection by imposing an orthogonality constraint on the projection matrix. A survey on
cross-lingual word embeddings can be found in [12]. In this work we study the effectiveness of such
alignment approaches in the context of hotel embeddings.</p>
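      <p>As a concrete sketch of the orthogonality-constrained projection used in [11]: given matched source/target embeddings for a set of seed pairs, the optimal orthogonal map has a closed-form solution via SVD (the orthogonal Procrustes problem). A minimal NumPy illustration, with all names hypothetical:</p>

```python
import numpy as np

def orthogonal_procrustes(S, T):
    """Orthogonal W minimizing ||S @ W - T||_F for row-wise embeddings.

    S, T: (n, d) arrays holding source/target embeddings of n seed
    pairs. Closed form: with U, _, Vt = svd(S.T @ T), W = U @ Vt.
    """
    U, _, Vt = np.linalg.svd(S.T @ T)
    return U @ Vt

# Toy usage: recover a random orthogonal map from 100 exact seed pairs.
rng = np.random.default_rng(0)
W_true, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # random orthogonal map
S = rng.normal(size=(100, 5))
T = S @ W_true
W = orthogonal_procrustes(S, T)
```

      <p>The orthogonality constraint preserves distances and cosine similarities within the projected space, which is why it is favored in the cross-lingual setting.</p>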
      <p>
        Domain adaptation is also a topic that has attracted interest in the recommendation systems space
[13], [14] where the goal is to transfer knowledge from a source to a target task. In our setting we
employ a domain adaptation approach in order to align different embedding spaces and we follow a
straightforward regularization approach [15]. Such approaches have been included in a single framework
for domain adaptation [16]. Other works propose adversarial approaches for domain adaptation that
learn at the same time the alignment and an embedding space that is invariant to the domain [17], [18],
[19]. More recently, graph and Transformer based methods have been proposed to address cross-domain
recommendations [20, 21, 22] as well as LLM-based models [23]. In [24] the authors propose to use a
proximal operator to learn shared latent factors through a Matrix Factorization approach.
      </p>
      <p>The approach most similar to our work is that presented in [<xref ref-type="bibr" rid="ref7">7</xref>], where the authors propose to
align product embeddings to enable cross-shop recommendations. Specifically, they propose different
approaches either based on content (images and text) or using the clicked products in the different
shops as supervision signals in methods for aligning cross-lingual embeddings. Our use case has a
simpler setting, as we dispose of a partial overlap among the products of the different embedding spaces
we wish to align.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Alignment through Regularized Domain Adaptation</title>
      <p>In order to learn hotel representations, we follow the hotel2vec model [<xref ref-type="bibr" rid="ref4">4</xref>], which implements a neural
model trained with the skip-gram objective and negative sampling. The model learns different
embeddings for clicks (v_c), hotel properties (v_a) and geographical information (v_g), which are fused
to learn an enriched representation. Specifically, the embeddings are calculated as follows: v_c =
f(x_c; W_c), v_a = f(x_a; W_a), v_g = f(x_g; W_g), where f(x; W) = ReLU(Wx) and x_c, x_a and x_g refer to the
input features for the click, amenity and geographical embeddings. The final hotel2vec embedding is
calculated as a projection of the concatenated embeddings: v_h = ReLU([v_c, v_a, v_g] W_p).</p>
      <p>Let v_h be the representation of hotel h as calculated by the above equation, where h ∈ H. The
hotel2vec model minimizes the following loss function:</p>
      <p>L(θ) = − Σ_{(h_i, h_j) ∈ D+} [ log σ(v_{h_i}ᵀ v_{h_j}) + Σ_{h_k ∈ D−} log σ(−v_{h_i}ᵀ v_{h_k}) ]   (1)</p>
      <p>where D+ are the skip-gram pairs of clicked hotels in the session, generated using a fixed-length
window, and D− are the negative samples, which are drawn from the same market as the clicked hotels in
the session, as a traveler searches for a specific destination. Finally, σ(·) is the sigmoid function.</p>
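      <p>For concreteness, the negative-sampling term of the loss above can be sketched as follows; the embedding matrix v and the index arguments are hypothetical stand-ins for the hotel2vec embeddings:</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pair_loss(v, h_i, h_j, negatives):
    """Skip-gram negative-sampling loss for one positive pair (h_i, h_j)
    of co-clicked hotels and a list of negative hotel indices."""
    pos = np.log(sigmoid(v[h_i] @ v[h_j]))
    neg = sum(np.log(sigmoid(-v[h_i] @ v[h_k])) for h_k in negatives)
    return -(pos + neg)  # the total loss sums this over all pairs in D+

# Toy usage: 10 hotels with 8-dimensional random embeddings.
rng = np.random.default_rng(0)
v = rng.normal(size=(10, 8))
loss = pair_loss(v, h_i=0, h_j=1, negatives=[2, 3, 4])
```

      <p>Minimizing this pushes the embeddings of co-clicked hotels together and those of the sampled negatives apart.</p>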
      <p>In this work, we propose to extend the hotel2vec model in order to accommodate the alignment of
embedding spaces in the case of multi-brand representations.</p>
      <p>We do so by employing a domain adaptation scheme. Denote by s the source domain, where we have already
learned a hotel2vec model by minimizing the loss function in Equation 1, L_s(θ). Then in the target
domain t we learn hotel2vec representations by minimizing the following regularized function:</p>
      <p>L_t(θ) = L(θ) + λ Σ_h ||v_h^t − v_h^s||²₂</p>
      <p>where v_h^s refers to the corresponding embedding of hotel h in the source domain. Note that the
embeddings of the source domain are fixed and not re-trained. Also, λ is a parameter that controls
how much knowledge we would like to transfer from the source domain. As in our case we want to
align the embedding spaces, we would like to constrain the model to be as close as possible to the
source domain, hence we set λ equal to 1.0. In the experimental section we experiment with this
parameter to understand its impact on the downstream task. Note that we define the strength of the
regularization globally, but in a more fine-grained version it could be defined per hotel. We leave this
for future work.</p>
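      <p>A minimal sketch of the penalty term, assuming hypothetical arrays v_t and v_s of target- and source-brand embeddings with consistent row indices for the shared hotels:</p>

```python
import numpy as np

def alignment_penalty(v_target, v_source, shared, lam=1.0):
    """Regularization term lam * sum_h ||v_h^t - v_h^s||_2^2, summed
    over the hotels shared by the two brands. The source embeddings
    are treated as frozen constants during training."""
    diff = v_target[shared] - v_source[shared]
    return lam * np.sum(diff ** 2)

# Toy usage: a target space slightly perturbed from the source space.
rng = np.random.default_rng(1)
v_s = rng.normal(size=(10, 8))                 # frozen source embeddings
v_t = v_s + 0.1 * rng.normal(size=(10, 8))     # target embeddings in training
penalty = alignment_penalty(v_t, v_s, shared=[0, 1, 2], lam=1.0)
```

      <p>The penalty is zero exactly when the shared embeddings coincide, so larger λ pulls the target space harder toward the source space.</p>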
      <p>The regularization framework is evaluated under the assumption of partially overlapping property
inventories across brand domains, where approximate correspondences between entities are inferred
for the purposes of this paper.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Setting</title>
      <p>We evaluate the proposed approach on the next-item prediction task, where we want to predict the next
hotel clicked in the session based on the previous hotel clicked. We collect click sessions over one year
of searches for two brands, namely Brand A and Brand B. Each dataset has millions of user interaction
sessions and hundreds of thousands of unique properties distributed across distinct brand domains. We
randomly split the sessions into training, validation, and test sets with a ratio of 8:1:1.</p>
      <p>We use a system with 64GB RAM, 8 CPU cores, and a Tesla V100 GPU. We use Python 3 as the
programming language and the Tensorflow [25] library for the neural network architecture and gradient
calculations. For hotel2vec we follow the experimentation methodology that the authors employed in [<xref ref-type="bibr" rid="ref4">4</xref>]
for tuning the hyperparameters of the model. The model in both brands is trained with L2-regularization.</p>
      <p>We compare the regularization approach with the linear projection alignment approach presented in
[10]. In that case, given the embeddings that we export from the trained models in the brand A and brand B
domains, denoted S and T respectively, we solve the following optimization problem:</p>
      <p>min_W ||W S − T||²₂</p>
      <p>The alignment is learned only on the common hotel embeddings for the two sets of vectors in order to
avoid injecting too much noise.</p>
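      <p>A minimal sketch of this baseline, fitting the projection with ordinary least squares only on the hotels present in both brands (the NumPy names are illustrative):</p>

```python
import numpy as np

def fit_linear_projection(S_common, T_common):
    """Least-squares W minimizing ||S_common @ W - T_common||_F,
    fitted only on the hotels present in both brands (row-aligned)."""
    W, *_ = np.linalg.lstsq(S_common, T_common, rcond=None)
    return W

# Toy usage: recover a known linear map from 50 shared hotels.
rng = np.random.default_rng(2)
W_true = rng.normal(size=(6, 6))
S_common = rng.normal(size=(50, 6))   # brand-A embeddings of shared hotels
T_common = S_common @ W_true          # brand-B embeddings of the same hotels
W = fit_linear_projection(S_common, T_common)
# Any brand-A embedding x is then projected into brand-B space as x @ W.
```

      <p>Unlike the Procrustes variant, this unconstrained projection does not preserve distances in the source space, which can matter for nearest-neighbor retrieval.</p>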
      <p>We compare the different approaches in terms of Hits@k and Mean Reciprocal Rank at k (MRR@k).
Both metrics are calculated over the ranking induced by the cosine similarity of the embedding vectors.
Note that in the cross-brand setting this corresponds to a zero-shot learning framework.</p>
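      <p>The evaluation loop can be sketched as follows, assuming an embedding matrix and (query, target) index pairs taken from the test sessions; all names are hypothetical:</p>

```python
import numpy as np

def hits_and_mrr_at_k(emb, queries, targets, k=10):
    """Hits@k and MRR@k over the ranking induced by cosine similarity.

    emb: (n, d) hotel embeddings; targets[i] is the hotel clicked
    right after queries[i] in a test session."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    hits, rr = 0.0, 0.0
    for q, t in zip(queries, targets):
        sims = normed @ normed[q]
        sims[q] = -np.inf                 # never rank the query itself
        topk = np.argsort(-sims)[:k]
        rank = np.nonzero(topk == t)[0]
        if rank.size:                     # target found within the top k
            hits += 1.0
            rr += 1.0 / (rank[0] + 1)
    return hits / len(queries), rr / len(queries)

# Toy usage: hotel 1 is nearly identical to hotel 0, so it ranks first.
rng = np.random.default_rng(3)
emb = rng.normal(size=(20, 4))
emb[1] = emb[0] + 1e-3
hits, mrr = hits_and_mrr_at_k(emb, queries=[0], targets=[1], k=5)
```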
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>We presented in this work a simple yet effective regularization approach for aligning embedding spaces
in a multi-brand scenario. For example, in the hospitality/retail domains the on-line platforms have
multiple brands that operate in the same domain. The idea is to add a regularizer to the objective
function of the model that learns the embeddings in order to force them to be as close as possible to the
embeddings of the source brand, hence performing domain adaptation. This kind of approach has
also been explored in the past in Natural Language Processing tasks [15].</p>
      <p>We evaluated the proposed approach on the next-hotel prediction task for two brands. We measured
performance in terms of the Hits@k and MRR@k metrics. We also compared with the linear projection
alignment borrowed from the cross-lingual approaches for aligning embeddings of different languages
[10].</p>
      <p>The results showed that the proposed approach can align the spaces of the multiple brands while achieving
good performance in both brands. Indeed, the results showed that we can outperform the single-domain
models. We also observed that an approach like the linear projection, without taking into account some
particularities of the domain, leads to worse performance.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Future Work</title>
      <p>
        For future work we would like to add a more adaptive regularization parameter that can be defined per
hotel rather than being global. In that way we may transfer knowledge only when we are certain
that a pair of hotels in the source and target brands should have the same embedding. We would also
like to explore the use of multiple source brands in order to align all the embedding
spaces at the same time. Multi-task approaches can be leveraged to align the different embedding
spaces [16]. Also, other alignment approaches could be explored [26]. Finally, we would like to explore
adversarial cross-domain adaptation for aligning the embedding spaces [18]. In this case, we want to
leverage the similar features across the brands while also learning brand-specific embeddings.
      </p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-9">
      <title>References</title>
      <p>[8] O. Barkan, N. Koenigstein, Item2vec: neural item embedding for collaborative filtering, in: 2016
IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), IEEE, 2016,
pp. 1–6.
[9] L. Singh, S. Singh, S. Arora, S. Borar, One embedding to do them all, arXiv preprint arXiv:1906.12120
(2019).
[10] T. Mikolov, Q. V. Le, I. Sutskever, Exploiting similarities among languages for machine translation,</p>
      <p>CoRR abs/1309.4168 (2013). URL: http://arxiv.org/abs/1309.4168.arXiv:1309.4168.
[11] M. Artetxe, G. Labaka, E. Agirre, Learning bilingual word embeddings with (almost) no bilingual
data, in: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), Association for Computational Linguistics, Vancouver, Canada, 2017, pp.
451–462. URL: https://www.aclweb.org/anthology/P17-1042. doi:10.18653/v1/P17-1042.
[12] S. Ruder, A survey of cross-lingual embedding models, CoRR abs/1706.04902 (2017). URL: http:
//arxiv.org/abs/1706.04902. arXiv:1706.04902.
[13] F. Yuan, L. Yao, B. Benatallah, Darec: Deep domain adaptation for cross-domain
recommendation via transferring rating patterns, in: Proceedings of the Twenty-Eighth International
Joint Conference on Artificial Intelligence, IJCAI-19, International Joint Conferences on
Artificial Intelligence Organization, 2019, pp. 4227–4233. URL:https://doi.org/10.24963/ijcai.2019/587.
doi:10.24963/ijcai.2019/587.
[14] H. Kanagawa, H. Kobayashi, N. Shimizu, Y. Tagami, T. Suzuki, Cross-domain recommendation via
deep domain adaptation, in: L. Azzopardi, B. Stein, N. Fuhr, P. Mayr, C. Hauf, D. Hiemstra (Eds.),
Advances in Information Retrieval, Springer International Publishing, Cham, 2019, pp. 20–29.
[15] H. Daumé III, Frustratingly easy domain adaptation, in: Proceedings of the 45th Annual Meeting of
the Association of Computational Linguistics, Association for Computational Linguistics, Prague,
Czech Republic, 2007, pp. 256–263. URL: https://www.aclweb.org/anthology/P07-1033.
[16] W. Lu, H. L. Chieu, J. Löfgren, A general regularization framework for domain adaptation,
in: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,
Association for Computational Linguistics, Austin, Texas, 2016, pp. 950–954. URL: https://www.aclweb.org/anthology/D16-1095. doi:10.18653/v1/D16-1095.
[17] S. Motiian, Q. Jones, S. Iranmanesh, G. Doretto, Few-shot adversarial domain adaptation, in:
I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, R. Garnett (Eds.),
Advances in Neural Information Processing Systems, volume 30, Curran Associates, Inc., 2017.
[18] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, V.
Lempitsky, Domain-adversarial training of neural networks, J. Mach. Learn. Res. 17 (2016) 2096–2030.
[19] X. Chen, Y. Sun, B. Athiwaratkun, C. Cardie, K. Weinberger, Adversarial deep averaging networks
for cross-lingual sentiment classification, Transactions of the Association for Computational
Linguistics 6 (2018) 557–570.
[20] Z. Xu, W. Pan, Z. Ming, A multi-view graph contrastive learning framework for cross-domain
sequential recommendation, in: Proceedings of the 17th ACM Conference on Recommender
Systems, RecSys ’23, Association for Computing Machinery, 2023, p. 491–501.
[21] G. Lin, C. Gao, Y. Zheng, J. Chang, Y. Niu, Y. Song, K. Gai, Z. Li, D. Jin, Y. Li, M. Wang, Mixed
attention network for cross-domain sequential recommendation, in: Proceedings of WSDM
conference, WSDM ’24, New York, NY, USA, 2024, p. 405–413.
[22] H. Ma, R. Xie, L. Meng, X. Chen, X. Zhang, L. Lin, J. Zhou, Triple sequence learning for cross-domain
recommendation, ACM Trans. Inf. Syst. 42 (2024).
[23] A. Petruzzelli, C. Musto, L. Laraspata, I. Rinaldi, M. de Gemmis, P. Lops, G. Semeraro, Instructing and
prompting large language models for explainable cross-domain recommendations, in: Proceedings
of the 18th ACM Conference on Recommender Systems, RecSys ’24, Association for Computing
Machinery, 2024, p. 298–308.
[24] A. Samra, E. Frolov, A. Vasilev, A. Grigorevskiy, A. Vakhrushev, Cross-domain latent factors sharing
via implicit matrix factorization, in: Proceedings of the 18th ACM Conference on Recommender
Systems, RecSys ’24, Association for Computing Machinery, New York, NY, USA, 2024, p. 309–317.
doi:10.1145/3640457.3688143.
[25] M. Abadi, A. Agarwal, P. Barham, et al., TensorFlow: Large-scale machine learning on
heterogeneous systems, 2015. URL: https://www.tensorflow.org, software available from tensorflow.org.
[26] F. Bianchi, V. D. Carlo, P. Nicoli, M. Palmonari, Compass-aligned distributional embeddings for
studying semantic differences across corpora, CoRR abs/2004.06519 (2020). URL: https://arxiv.org/
abs/2004.06519. arXiv:2004.06519.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Grbovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Radosavljevic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Djuric</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bhamidipati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Savla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Bhagwan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sharp</surname>
          </string-name>
          ,
          <article-title>E-commerce in your inbox: Product recommendations at scale</article-title>
          ,
          <source>in: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>1809</fpage>
          -
          <lpage>1818</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Vasile</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Smirnova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Conneau</surname>
          </string-name>
          , Meta-prod2vec:
          <article-title>Product embeddings using side-information for recommendation</article-title>
          ,
          <source>in: Proceedings of the 10th ACM Conference on Recommender Systems</source>
          , ACM,
          <year>2016</year>
          , pp.
          <fpage>225</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Covington</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Adams</surname>
          </string-name>
          , E. Sargin,
          <article-title>Deep neural networks for youtube recommendations</article-title>
          ,
          <source>in: Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16</source>
          ,
          ACM, New York, NY, USA,
          <year>2016</year>
          , pp.
          <fpage>191</fpage>
          -
          <lpage>198</lpage>
          . URL: http://doi.acm.org/10.1145/2959100.2959190. doi:10.1145/2959100.2959190.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I.</given-names>
            <surname>Partalas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Morvan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sadeghian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Minaee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Cowan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Hotel2vec: Learning hotel embeddings from user click sessions with side information</article-title>
          .,
          <source>in: Proceedings of the Workshop on Recommenders in Tourism co-located with the 15th ACM Conference on Recommender Systems (RecSys</source>
          <year>2021</year>
          ),
          <year>2021</year>
          , pp.
          <fpage>69</fpage>
          -
          <lpage>84</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Grbovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <article-title>Real-time personalization using embeddings for search ranking at airbnb</article-title>
          ,
          <source>in: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
          ,
          KDD '18
          ,
          <year>2018</year>
          , pp.
          <fpage>311</fpage>
          -
          <lpage>320</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          , I. Sutskever,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Corrado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <article-title>Distributed representations of words and phrases and their compositionality</article-title>
          ,
          <source>in: Advances in neural information processing systems</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>3111</fpage>
          -
          <lpage>3119</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bianchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tagliabue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bigon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Greco</surname>
          </string-name>
          ,
          <article-title>Fantastic embeddings and how to align them: Zero-shot inference in a multi-shop scenario</article-title>
          ,
          <source>in: Proceedings of the SIGIR 2020 eCom workshop</source>
          ,
          Virtual Event, July
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>