<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Developing the Key Attributes for Product Matching Based on the Item's Image Tag Comparison</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>National Technical University “Kharkiv Polytechnic Institute”</institution>
          ,
          <addr-line>2, Kyrpychova str., 61002 Kharkiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <abstract>
<p>With the constant growth of the number of products on e-marketplaces, buyers find it increasingly hard to locate and choose items that satisfy all their needs and expectations. The search and filtering algorithms of recommender systems, although striving to help users, still fail quite often due to incomplete and inaccurate item descriptions. This work proposes combining the analysis of both the item description and the item image in order to construct groups of similar items. Since a person can decide whether two items are similar by looking at two images and a brief description, we suggest forming a set of similar items based on users' judgments and then extracting a core of keywords for the specific type of product. This core is then used to evaluate the similarity of any new item added to the group. The case study deals with building the core of keywords for sneakers. The developed key attributes allow matching items with high precision, thus demonstrating the effectiveness of the proposed method of core construction.</p>
      </abstract>
      <kwd-group>
        <kwd>E-commerce</kwd>
        <kwd>Item Images</kwd>
        <kwd>Similar Items</kwd>
        <kwd>Image Similarity</kwd>
        <kwd>Image Matching</kwd>
        <kwd>Tag Similarity</kwd>
        <kwd>Key Attributes</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>E-commerce marketplaces are an inexhaustible source of goods that may satisfy the
needs of even the most exacting client. Often, however, finding the necessary product becomes
a real challenge for a customer. Because sellers are allowed to create multiple item
records for the same product, customers see endless lists of recommended items
that may actually be a single product. Moreover, looking deeper into that list, a person
realizes that it is not easy to determine whether two offers describe the same product, since their
descriptions, images, titles, and prices may differ considerably.</p>
      <p>
        Buyers expect an excellent user experience while searching for products and
making purchases on e-marketplaces. That is why, to stimulate purchases,
e-commerce platforms constantly improve the efficiency of the collaborative filtering
algorithms that form the list of recommended items. One of the steps in this process is item
matching. Many e-commerce platforms, such as Walmart, check whether an item already
exists in the catalogue before setting up a new one [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Other platforms perform item
matching during the search itself. In both cases, this step is necessary and requires
intelligent procedures.
      </p>
      <p>Item matching is usually based on a comparison of the textual data provided in item
descriptions. Firstly, this data is often incomplete or missing. Secondly, it is represented
in a way that puts a strain on matching algorithms: the same attribute may be named
differently across items and may contain un-normalized values, which makes the analysis more difficult.
Sometimes the description of an item does not contribute much to the buyer's
perception at all. For example, products such as mobile phones, TV sets, and air humidifiers can be
precisely described and identified via their technical parameters. On the other hand,
clothes, shoes, and bags are products that do not have many descriptive attributes,
and even when they do, the look and presentation of the item play a bigger role in the
purchase decision. In this situation, item images should play a major role in matching.</p>
      <p>The given paper presents the idea of image matching combined with matching the
textual descriptions of items. The developed data processing pipeline makes it possible to
collect similar items based on expert judgments, extract their tags, and build a core of
key attributes that may be used for further matching of new items.</p>
      <p>The rest of the paper is organized as follows. Section 2 considers existing
work in the area of image matching in general and in the e-commerce domain in
particular. Section 3 describes the item processing scheme and the tools used for
collecting the initial set of items. The experiments with two datasets are given in Section
4. The analysis of results and the conclusions are given in Sections 5 and 6.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Works</title>
      <p>
        Image matching is a complex problem discussed by many researchers in the context
of computer vision and augmented reality applications, 3D modeling, visual search, etc.
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Many studies represent images as feature vectors and use neural networks for
calculating the similarity between images. For example, in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] the authors suggest a Siamese
network architecture for generic image retrieval and comparison. The developed
architecture showed the best results when used together with a convolutional neural
network pre-trained on a similar problem. Another implementation
of a multi-scale Siamese network for image similarity evaluation is given in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
The authors incorporated curriculum learning into an online pair mining strategy,
which allowed the difficulty of image pairs to increase gradually during
the network training process.
      </p>
      <p>
        In addition to feature-based image matching methods, which are usually implemented
as neural networks, there is also a group of geometric point matching methods.
The researchers in [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ] describe affine transformations used for image matching. One of
the issues resolved by the authors is forming a minimal discrete set of affine
transformations applied to each image before matching. Affine transformations combined with
a genetic algorithm for 2D image matching are presented in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        Treating an image not merely as a set of pixels but as an object with internal
meaning has given rise to the direction of semantic image analysis. For example, the
authors of [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] worked in the field of semantic segmentation in computer vision. They
presented a 3D-model approach to the automatic generation of synthetic images that
can be used together with real-world images for semantic segmentation.
      </p>
      </p>
      <p>
        The field of e-commerce has brought new formulations of the image matching problem.
Image matching is a non-trivial problem, and although everyone realizes that it may
improve the efficiency of recommender systems, researchers keep finding weak points in
image similarity measures and keep working on new methods. The main goal of
image matching in e-marketplace applications is de-duplication of item records. Many
studies are aimed at finding identical images that may indicate identical products [
        <xref ref-type="bibr" rid="ref1 ref8">1, 8</xref>
        ].
In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] it is stated that, due to the large number of items on an e-marketplace, it is hard to
collect manual judgments, and it is suggested to use a neural network as a feature extractor
together with cosine similarity. The author of [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] describes the results of using the GrabCut computer
vision background removal tool for analyzing mobile photos of new goods that
arrive at a fashion store. For image segmentation, the Tiramisu DenseNet, a type of
convolutional architecture, was used. All this was done to check whether a given
product already exists in the online store. A similar application of e-commerce
image matching is the search for all items that look like what a buyer has
photographed in a physical shop but wants to find online [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      </p>
      <p>
        Generally speaking, image matching is part of the bigger e-commerce item
matching problem. Many researchers suggest different ways to find similar items in
e-stores [
        <xref ref-type="bibr" rid="ref1 ref10 ref11">1, 10, 11</xref>
        ]. For example, in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] it is proposed to analyze users' browsing of
items and build a weighted graph whose nodes are the items and whose edges are
associative connectors reflecting whether two items were viewed by the same client during
a search. Similar products are thus identified and can be used for personalized
recommendations. However, that study does not analyze item images.
      </p>
      </p>
      <p>
        Our previous works [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ] present an approach to grouping similar items based
on clustering techniques. The two experiments described there deal with mobile phone and
bicycle data collected from e-marketplaces by the developed web crawling tool. We can
conclude that k-means clustering based on the retrieved textual and numerical values of item
attributes gives sufficiently accurate results. However, further experiments have
shown that items without many attributes in their descriptions cannot be grouped
with the proposed method. For items such as clothes or textile objects, images rather than
attributes carry most of the meaning. That is why the given work
sets its sights on the analysis of item images in order to find similar products that can
be recommended to a buyer.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Collection and Processing of Item Images and Descriptions</title>
      <p>The main goal of our research is to find ways to improve the quality of the search for
product offers on electronic trading platforms. We suggest that the solution to
this problem should be based on structuring the item offers; from our point of view,
the most natural way is to group similar items. The analysis shows that existing
approaches, such as filters or recommendations, are not always effective. Filters
may limit the choice of products: for example, the product description
may not list all color options, the price may include shipping costs
and thus fall outside the selected range, the model name may contain an error, the
brand may be missing, etc. The results of recommendation systems vary across
trading platforms and often rely on mixed algorithms that take into account not only the
characteristics of goods but also the behavior of users. As a result, some products may
not fall into the search selection or the similarity group. On the other hand, we noticed that
a person browsing an e-marketplace readily identifies similar products, relying on
the image and a brief description of the item. The main idea of this
study is to create a core of keywords that describes such a similarity group.</p>
      <p>
        Having analyzed a number of up-to-date evaluation approaches, we found that pairwise
comparison is the most natural for humans. Following the Theory of Intelligence [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], we
propose to exploit the ability of human intelligence to perceive images. When
examining human behavior, it is important to study the internal subjective
states and information processing that cause a particular behavior. Examples
of internal states are images and representations of real goods. We present an image of
a product and obtain the person's perception as an assessment of their internal state. We assume
that the perception of images of similar products is the same: a person compares two images
of goods, and if the goods are similar, the answer is "yes".
      </p>
      <p>
        According to [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], internal relations can be represented with a predicate
called the equivalence predicate. The equivalence predicate is a two-place predicate
that is reflexive, symmetric, and transitive. By analogy with the equivalence
predicate, we propose to consider a predicate of goods similarity. Reflexivity of
the similarity predicate means that every item is similar to itself, i.e. to its own image.
Symmetry means that two objects remain similar even if
their images are swapped. Transitivity means
that if the first and the second items are similar and the second and the third items are
similar, then the first and the third items are also similar.
      </p>
      <p>Therefore, simply by monitoring how people match items, we can obtain data
about item similarity. We suggest applying pairwise comparison of item images to
obtain similarity estimates. By processing the pairwise comparison matrix,
we can determine the groups of similar items. Our hypothesis is that a group of
similar items has a common core description, which is why we suggest building a core
of keywords for each group. We can then extend the group of similar items by
comparing a new item with the core of keywords. The general scheme of item
processing is presented in Fig. 1.</p>
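<p>As an illustration (our sketch, not the authors' implementation), the symmetric and transitive similarity judgments collected from users can be turned into groups by taking a transitive closure, for example with a union-find structure over hypothetical item indices:</p>

```python
# Sketch: grouping items from pairwise "similar" judgments by taking
# the transitive closure of the similarity relation with union-find.

def build_groups(n_items, similar_pairs):
    """Return the item groups implied by symmetric, transitive similarity."""
    parent = list(range(n_items))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    for i, j in similar_pairs:  # each pair is one "yes" user judgment
        union(i, j)

    groups = {}
    for i in range(n_items):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Hypothetical judgments: items 0~1 and 1~2 similar; item 3 unrelated.
print(build_groups(4, [(0, 1), (1, 2)]))  # -> [[0, 1, 2], [3]]
```

Transitivity comes for free here: one "yes" for (0, 1) and one for (1, 2) places items 0 and 2 in the same group without a direct comparison.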
      <p>Collecting images and descriptions is performed with the help of grabbing and
parsing software, and the raw datasets are inspected manually. We have developed our own
mobile application for pairwise comparison of items and suggest a
crowdsourcing approach to collecting the match estimates. For this reason, we
have asked volunteers to install our mobile application and perform comparisons. The
developed software component is an Android mobile client.
The user compares images in pairs and indicates the similarity or difference of the items
offered for comparison. The data provided by the user is collected in cloud storage and
can be used in the next processing steps. Each comparison result is sent
directly to a Firebase Cloud node via the HTTP protocol, where it is validated and stored.
Each user session has a unique ID to make data analysis more efficient. When the user wants
to finish comparing, they press the “Finish” button and the application
closes. The image comparison screen is shown in Fig. 2.
Therefore, we collect images and descriptions of items, then estimate similarity
based on image matching, and finally process the item descriptions to build a core of tags
for each group of similar items. The given study is focused on developing the key
attributes of similar items based on the comparison of item image tags.</p>
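<p>A minimal sketch of the result record such a client might submit follows; the field names are assumptions for illustration, not the actual Firebase schema used by the application:</p>

```python
import json
import uuid

# Sketch of the comparison result a mobile client might send to cloud
# storage; the field names are assumptions, not the authors' schema.

def make_result(session_id, left_item, right_item, similar):
    return {
        "session": session_id,   # unique ID per user session
        "left": left_item,       # IDs of the two compared images
        "right": right_item,
        "similar": bool(similar),  # the user's yes/no judgment
    }

session = str(uuid.uuid4())
payload = json.dumps(make_result(session, "item-17", "item-42", True))
# An HTTP POST to the (hypothetical) Firebase endpoint would then ship
# `payload` with a "Content-Type: application/json" header.
```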
    </sec>
    <sec id="sec-4">
      <title>Experiments</title>
      <sec id="sec-4-1">
        <title>Data Set Collection and Preparation</title>
        <p>In this paper, we check the idea that if we know groups of similar items, then
we can easily find the correct group for a new item. Thus, the
experiment checks two assumptions: 1) that similar items
have the same core of tags; 2) that the group of a new object can easily be found if the core
for a group of objects is known. In the first step, we create the dataset. Each item in our
dataset has an image and a text description. An example of such an item is presented
in Table 1.
The tags describing this sample item include: A brand-new, Unused, Unworn, Sneakers, Breathable, Lightweight, Comfort, G27706, Does not apply, Full Year Article, Leather, adidas, Fabric, 2010-2019, Low Top, Leisure, Rubberband, Unisex, Lacing, Casual, adidas Continental, adidas Originals, adidas Continental 80, Standard, Casual Shoes, White / Scarlet / Collegiate Navy, Fitness Studio &amp; Training.</p>
        <p>For our experiments, we transformed each item description into a set of tags.
We prepared two datasets according to our data processing pipeline. The first dataset is
a set of similar items gathered from the eBay trading platform
(https://www.ebay.com/); we chose data about white sneakers. The second dataset
contains different items, for example, various types of shoes, bags, etc., from the same
source. Table 2 shows information about the datasets.
After analyzing the datasets, we decided to clean the data, because the tags vary a lot
across items. For example, there are tags such as "M" or "9.5" for shoe size, or
"2010-2019", "B42000", "light-weight", "Running &amp; Jogging". Many tags contain
punctuation such as "&amp;", "-", ",", ")", "(", etc. All stages of the cleaning
process are presented in Fig. 3.</p>
        <p>Fig. 3 depicts the stages of the cleaning pipeline: item description; set of tags; deletion of punctuation; removal of stopwords and tags shorter than 3 symbols; formation of new tags from long descriptions; calculation of tag frequency; deletion of repeated tags; removal of digits; final core of tags.</p>
        <p>
In addition to removing punctuation and other characters, we needed to convert long
tags (such as "Running &amp; Jogging") into sets of short tags ("Running" and "Jogging").
For example, "White / Scarlet / Collegiate Navy" was transformed into "White", "Scarlet",
"Collegiate", and "Navy". We decided to remove stopwords and tags shorter than
3 symbols, as such tags are not informative for the dataset. To obtain the tag core,
we also needed to calculate the tag frequency for the collection and remove duplicate tags.
The number of tags after each stage is shown in Fig. 4.</p>
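<p>The cleaning stages above can be sketched as follows (an illustrative reading of Fig. 3 with a toy stopword list, not the exact implementation):</p>

```python
import re
from collections import Counter

# Toy stopword list for illustration; the real list would be larger.
STOPWORDS = {"and", "the", "for", "with", "not", "does"}

def clean_tags(raw_tags):
    """Sketch of the Fig. 3 cleaning stages: split long tags, strip
    punctuation and digits, drop stopwords and tags shorter than 3
    symbols, then count frequencies (which also deduplicates)."""
    tags = []
    for tag in raw_tags:
        # split long descriptions like "Running & Jogging" on punctuation
        for part in re.split(r"[^\w]+", tag):
            part = part.lower()
            if part.isdigit():       # remove digit-only tags ("2010", "9")
                continue
            if len(part) < 3:        # drop tags shorter than 3 symbols
                continue
            if part in STOPWORDS:
                continue
            tags.append(part)
    return Counter(tags)             # tag -> frequency, duplicates merged

freq = clean_tags(["Running & Jogging", "M", "9.5", "2010-2019",
                   "light-weight", "Sneakers", "Sneakers"])
print(freq.most_common(3))
```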
      </sec>
      <sec id="sec-4-2">
        <title>Experiments with Similarity Tags</title>
        <p>
          Having the cleaned tag list sorted by descending tag frequency, we created the tag core
for Dataset_1. The algorithm of tag core creation was described in detail in
[
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. This algorithm is based on the word2vec model. The proposed algorithm is not
complicated, but it takes into account both the semantic similarity of words, via word2vec, and the
tag frequency after the pre-processing stage. The results of these experiments are
presented in Table 3.
We used a similarity threshold of 0.75, selected based on an analysis of the resulting
tags and their number. As Table 3 shows, we obtained a fairly short list of
tags when the similarity measure exceeds 0.75. To test how
well our tag core describes the dataset, we compared each item from Dataset_1 with
the core using the similarity function from the spaCy library (Fig. 5). The mean of the results
is 0.87, the minimum is 0.73, and the maximum is 0.92. These results show that the core
describes our collection of items rather well.
In the next step, we ran the experiments with the tag core and Dataset_2. In this case,
we compare each item's tags with the core and receive a similarity score. This score
helps us answer the question "Does this item belong to this group or not?". Fig. 6
shows the similarity values for this experiment. The mean of the results is 0.74, the minimum
is 0.34, and the maximum is 0.85. These results show that Dataset_2 contains very different
items, both very similar and dissimilar to the core. We analyzed these results and prepared
samples, presented in Table 4 together with their similarity scores (e.g. 0.8, 0.6, 0.63, 0.57).
The examples in Table 4 demonstrate that our approach works correctly with different types of items.
        </p>
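<p>The belongs-to-group check can be sketched as follows; the paper relies on word2vec / spaCy vectors, whereas this self-contained toy uses made-up two-dimensional embeddings solely to show the averaged-vector cosine comparison against the 0.75 threshold:</p>

```python
import math

# Tiny made-up embeddings standing in for real word2vec/spaCy vectors.
VECS = {
    "sneakers": (0.9, 0.1), "shoes": (0.8, 0.3),
    "white":    (0.2, 0.9), "bag":   (-0.7, 0.4),
}

def mean_vec(tags):
    """Average the vectors of the known tags (spaCy does the same for a Doc)."""
    vs = [VECS[t] for t in tags if t in VECS]
    return tuple(sum(c) / len(vs) for c in zip(*vs))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def belongs_to_group(item_tags, core_tags, threshold=0.75):
    """Join the group when the averaged-vector similarity clears 0.75."""
    return cosine(mean_vec(item_tags), mean_vec(core_tags)) >= threshold

print(belongs_to_group(["sneakers", "white"],
                       ["sneakers", "shoes", "white"]))  # -> True
```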
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Evaluation and Analysis of Results</title>
      <p>We evaluate the results of the experiment in two stages. The results presented above
show that the constructed core describes the dataset well. Inside the dataset the items are
similar, but they are not always the same products. The analysis showed that the
estimates obtained for identical pictures are equal; in these cases, the sets of tags also match. This
confirms the assumption that grouping can be carried out on the basis of assessing
the similarity between the product description and the core of tags for the group. It should be
noted that for identical items with different images, we find differences in the similarity
values and in the sets of tags, respectively. This may be because such descriptions are
provided by different sellers.</p>
      <p>The main idea of building a core of tags is to be able to determine whether a new
product will fall into a group of similar goods. In order to evaluate the proposed
approach to item matching, we use Dataset_2. It contains 162 items of sneakers, shoes, and
some other goods. According to the data processing pipeline discussed above, the sets
of tags for each item description are created. We manually evaluate whether each item
matches the items from Dataset_1, and then compare these manual estimates with
the similarity score of each item, using a similarity score of 0.75 as the indicator for joining
the item group. As a result, we obtain Total Items = 162, True Positive Items = 101,
True Negative Items = 19, False Positive Items = 14, False Negative Items = 28.
Therefore, Accuracy = 0.74, Precision = 0.88, Recall = 0.78. The manually classified items are
all sneakers with a similar shape, or similar models, without taking their
color into account. Thus, we can conclude that the proposed approach is validated, but it should be
improved by studying the similarity score more deeply.</p>
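<p>The metrics follow directly from the reported confusion counts; a quick recomputation:</p>

```python
# Recomputing the evaluation metrics from the confusion counts reported
# for Dataset_2 (TP=101, TN=19, FP=14, FN=28; 162 items in total).
tp, tn, fp, fn = 101, 19, 14, 28
assert tp + tn + fp + fn == 162

accuracy = (tp + tn) / (tp + tn + fp + fn)   # 120 / 162
precision = tp / (tp + fp)                   # 101 / 115
recall = tp / (tp + fn)                      # 101 / 129

print(round(accuracy, 2), round(precision, 2), round(recall, 2))
```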
    </sec>
    <sec id="sec-6">
      <title>Conclusions and Future Work</title>
      <p>The given work presents an experimental pipeline for constructing a core of key
attributes for e-commerce items. It is suggested to build such a core from the tags of
items that have been judged similar by experts based on image comparison.
The proposed method proved suitable for constructing the core for sneakers.
Two questions remain open in applying the method: 1) the definition of the threshold for
including attributes in the core, since different values lead to a different number of
keywords being included; 2) the adaptation of the threshold for the analysis of new items that
have to be compared with the core.</p>
      <p>Future directions of this research include testing the method on other types of
products, for example household appliances rather than clothes and shoes. Applying
the method to larger initial samples (including millions of items) will obviously
require its modification and additional testing. Finally, evaluating the performance of search
and filtering algorithms with and without the keyword core
is planned for future work.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>More</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Product Matching in E-commerce Using Deep Learning</article-title>
          (
          <year>2017</year>
          ). https://medium.com/walmartlabs/product-matching-in-ecommerce-4f19b6aebaca
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Walia</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suneja</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A Conceptual Study on Image Matching Techniques</article-title>
          .
          <source>Global Journal of Computer Science and Technology</source>
          ,
          <volume>10</volume>
          (
          <issue>12</issue>
          ),
          <fpage>83</fpage>
          -
          <lpage>88</lpage>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Melekhov</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kannala</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rahtu</surname>
          </string-name>
          , E.:
          <article-title>Siamese network features for image matching</article-title>
          .
          <source>Conference: 2016 23rd International Conference on Pattern Recognition (ICPR)</source>
          pp.
          <fpage>378</fpage>
          -
          <lpage>383</lpage>
          (
          <year>2016</year>
          ). DOI: 10.1109/ICPR.2016.7899663.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Appalaraju</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chaoji</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Image similarity using Deep CNN and Curriculum Learning (</article-title>
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Rodríguez</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Delon</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morel</surname>
          </string-name>
          , J.-M.:
          <article-title>Fast Affine Invariant Image Matching</article-title>
          .
          <source>Image Processing On Line. 8</source>
          ,
          <fpage>251</fpage>
          -
          <lpage>281</lpage>
          (
          <year>2018</year>
          ). https://doi.org/10.5201/ipol.2018.225.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Bazargani</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anjos</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lobo</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mollahosseini</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shahbazkia</surname>
          </string-name>
          , H.:
          <article-title>Affine Image Registration Transformation Estimation Using a Real Coded Genetic Algorithm with SBX (</article-title>
          <year>2012</year>
          ). doi.org/10.1145/2330784.2330990.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. Zhang,
          <string-name>
            <given-names>Y.</given-names>
            ,
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            ,
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            ,
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <surname>Y.</surname>
          </string-name>
          :
          <article-title>Synthesizing Training Images for Semantic Segmentation</article-title>
          . In: Wang Y.,
          <string-name>
            <surname>Jiang</surname>
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peng</surname>
            <given-names>Y</given-names>
          </string-name>
          . (
          <article-title>eds) Image and Graphics Technologies and Applications</article-title>
          .
          <source>IGTA 2018. Communications in Computer and Information Science</source>
          , vol
          <volume>875</volume>
          , pp.
          <fpage>220</fpage>
          -
          <lpage>227</lpage>
          . Springer, Singapore (
          <year>2018</year>
          ). https://doi.org/10.1007/978-981-13-1702-6_22
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Yam</surname>
          </string-name>
          , C.-Y.:
          <article-title>Deep Learning Image Segmentation for Ecommerce Catalogue Visual Search</article-title>
          (
          <year>2018</year>
          ). https://devblogs.microsoft.com/cse/2018/04/18/deep-learning-image-segmentation-for-ecommerce-catalogue-visual-search/
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kiapour</surname>
            , M. H., Han,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lazebnik</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berg</surname>
            ,
            <given-names>A. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berg</surname>
            ,
            <given-names>T. L.</given-names>
          </string-name>
          :
          <article-title>Where to Buy It: Matching Street Clothing Photos in Online Shops</article-title>
          .
          <source>IEEE International Conference on Computer Vision</source>
          (ICCV),
          <year>2015</year>
          , pp.
          <fpage>3343</fpage>
          -
          <lpage>3351</lpage>
          . Santiago (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Madvariya</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Borar</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Discovering Similar Products in Fashion E-commerce</article-title>
          .
          <source>eCOM@SIGIR</source>
          . (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Cherednichenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vovk</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kanishcheva</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Godlevskyi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Towards Improving the Search Quality on the Trading Platforms</article-title>
          . In: S. Wrycza, J. Maslankowski (Eds.):
          <source>11th SIGSAND/PLAIS</source>
          <year>2018</year>
          , LNBIP 333, pp.
          <fpage>21</fpage>
          -
          <lpage>30</lpage>
          . Springer (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Cherednichenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vovk</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kanishcheva</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Godlevskyi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Studying Items Similarity for Dependable Buying on Electronic Marketplaces</article-title>
          .
          <source>Proc. 2nd Int. Conf. On Computational Linguistics and Intelligent Systems (COLINS)</source>
          , Volume I: Main Conference (Lviv, Ukraine,
          <year>2018</year>
          ). Vol.
          <volume>2136</volume>
          , pp.
          <fpage>78</fpage>
          -
          <lpage>89</lpage>
          . CEUR-WS (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Bondarenko</surname>
            ,
            <given-names>M. F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shabanov-Kushnarenko</surname>
            ,
            <given-names>U. P.</given-names>
          </string-name>
          :
          <article-title>Theory of intelligence: a Handbook</article-title>
          . SMIT Company, Kharkiv (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Kanishcheva</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cherednichenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sharonova</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Image Tag Core Generation</article-title>
          . In: 1st International Workshop on Digital Content &amp; Smart Multimedia (DCSMart 2019), Ukraine, CEUR Workshop Proceedings, Volume
          <volume>2533</volume>
          , pp.
          <fpage>35</fpage>
          -
          <lpage>44</lpage>
          . Lviv (
          <year>2019</year>
          ). http://ceur-ws.org/Vol-2533/preface.pdf
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>