<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">End-to-End Neural Ranking for eCommerce Product Search: An application of task models and textual embeddings</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Eliot</forename><forename type="middle">P</forename><surname>Brenner</surname></persName>
							<email>eliot.brenner@jet.com</email>
						</author>
						<author>
							<persName><forename type="first">Raymond</forename><surname>Zhao</surname></persName>
							<email>raymond@jet.com</email>
						</author>
						<author>
							<persName><forename type="first">Aliasgar</forename><surname>Kutiyanawala</surname></persName>
							<email>aliasgar@jet.com</email>
						</author>
						<author>
							<persName><forename type="first">John</forename><surname>Yan</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">Jet.com</orgName>
								<orgName type="institution" key="instit2">Walmart Labs</orgName>
								<address>
									<settlement>Hoboken</settlement>
									<region>NJ</region>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff4">
								<address>
									<settlement>Ann Arbor</settlement>
									<region>Michigan</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">End-to-End Neural Ranking for eCommerce Product Search: An application of task models and textual embeddings</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">6A9FFBA8E6D9694DFD1658C573ED04D2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T06:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>• Information systems → Query representation</term>
					<term>Probabilistic retrieval models</term>
					<term>Relevance assessment</term>
					<term>Task models</term>
					<term>Enterprise search</term>
					<term>• Computing methodologies → Neural networks</term>
					<term>Bayesian network models</term>
					<term>Ranking</term>
					<term>Neural IR</term>
					<term>Kernel Pooling</term>
					<term>Relevance Model</term>
					<term>Embedding</term>
					<term>eCommerce</term>
					<term>Product Search</term>
					<term>Click Models</term>
					<term>Task Models</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We consider the problem of retrieving and ranking items in an eCommerce catalog, often called SKUs, in order of relevance to a user-issued query. The input data for the ranking are the texts of the queries and textual fields of the SKUs indexed in the catalog. We review the ways in which this problem both resembles and differs from the problems of information retrieval (IR) in the context of web search, which is the context typically assumed in the IR literature. The differences between the product-search problem and the IR problem of web search necessitate a different approach in terms of both models and datasets. We first review the recent state-of-the-art models for web search IR, focusing on the CLSM of <ref type="bibr" target="#b19">[20]</ref> as a representative of one type, which we call the distributed type, and the kernel pooling model of <ref type="bibr" target="#b25">[26]</ref>, as a representative of another type, which we call the local-interaction type. The different types of relevance models developed for IR have complementary advantages and disadvantages when applied to eCommerce product search. Further, we explain why the conventional methods for dataset construction employed in the IR literature fail to produce data which suffices for training or evaluation of models for eCommerce product search. We explain how our own approach, applying task modeling techniques to the click-through logs of an eCommerce site, enables the construction of a large-scale dataset for training and robust benchmarking of relevance models. Our experiments consist of applying several of the models from the IR literature to our own dataset. Empirically, we have established that, when applied to our dataset, certain models of local-interaction type reduce ranking errors by one-third compared to the baseline system (tf-idf). Applied to our dataset, the distributed models fail to outperform the baseline.
As a basis for a deployed system, the distributed models have several computational advantages over the local-interaction models. This motivates an ongoing program of work, which we outline at the conclusion of the paper.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Currently deployed systems for eCommerce product search tend to use inverted-index based retrieval, as implemented in Elasticsearch <ref type="bibr" target="#b3">[4]</ref> or Solr <ref type="bibr" target="#b20">[21]</ref>. For ranking, these systems typically use legacy relevance functions such as tf-idf <ref type="bibr" target="#b24">[25]</ref> or Okapi BM25 <ref type="bibr" target="#b17">[18]</ref>, as implemented in these search systems. Such relevance functions are based on exact ("hard") matches of tokens, rather than semantic ("soft") matches, are insensitive to word order, and have hard-coded, rather than learned, weights. On the one hand, their simplicity makes legacy relevance functions scalable and easy to implement. On the other hand, they are found to be inadequate in practice for fine-grained ranking of search results. Typically, in order to achieve rankings of search results that are acceptable for presentation to the user, eCommerce sites overlay, on top of the legacy relevance function score, a variety of handcrafted filters (using structured data fields) as well as hard-coded rules for specific queries. In some cases, eCommerce sites are able to develop intricate and specialized proprietary NLP systems, referred to as Query-SKU Understanding (QSU) Systems, for analyzing and matching relevant SKUs to queries. QSU systems, while potentially very effective at addressing the shortcomings of legacy relevance scores, require some degree of domain-specific knowledge to engineer <ref type="bibr" target="#b7">[8]</ref>. Because of concept drift, the maintenance of QSU systems demands a long-term commitment of analyst and programmer labor. As a result of these scalability issues, QSU systems are within reach only for the very largest eCommerce companies with abundant resources.</p><p>Recently, the field of neural IR (NIR) has shown great promise to overturn this state of affairs.
The approach of NIR to ranking differs from the aforementioned QSU systems in that it learns vector-space representations of both queries and SKUs which allow learning-to-rank (LTR) models to address the task of relevance ranking in an end-to-end manner. NIR, if successfully applied to eCommerce, can allow any company with access to commodity GPUs and abundant user click-through logs to build an accurate and robust model for ranking search results at lower cost over the long term compared to a QSU system. For a current and comprehensive review of the field of NIR, see <ref type="bibr" target="#b10">[11]</ref>.</p><p>Our aim in this paper is to provide further theoretical justification and empirical evidence that fresh ideas and techniques are needed to make Neural IR a practical alternative to legacy relevance and rule-based systems for eCommerce search. Based on the results of model training which we present, we delineate a handful of ideas which appear promising so far and deserve further development.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">RELATION TO PREVIOUS WORK IN NEURAL IR</head><p>The field of NIR shares its origin with the more general field of neural NLP <ref type="bibr" target="#b2">[3]</ref> in the word embeddings work of word2vec <ref type="bibr" target="#b8">[9]</ref> and its variants. From there, though, the field diverges from the more general stream of neural NLP in a variety of ways. The task of NIR, as compared with other widely known branches such as topic modeling, document clustering, machine translation and automatic summarization, consists of matching texts from one collection (the queries) with texts from another collection (webpages or SKUs, or other corpus entries, as the case may be). The specific challenges and innovations of NIR in terms of models are driven by the fact that entries from the two collections (queries versus documents) are of a very different nature from one another, in length, internal structure, and semantics.</p><p>In terms of datasets, the notion of relevance has to be defined in relation to a specific IR task and the population which will use the system. In contrast to the way many general NLP models can be trained and evaluated on publicly available labeled benchmark corpora, a NIR model has to be trained on a dataset tailored to reflect the information needs of the population the system is meant to serve. In order to produce such task-specific datasets in a reliable and scalable manner, practitioners need to go beyond traditional methods such as expert labeling and simple statistical aggregations. The solution has been to take "crowdsourcing" to its logical conclusion and to use the "big data" of user logs to extract relevance judgments.
This has led NIR to depend for scalability on the field of Click Models, which are discussed at great length in the context of web search in the book <ref type="bibr" target="#b1">[2]</ref>.</p><p>While NIR has great promise for application to eCommerce, the existing NIR literature has thus far been biased towards the web search problem, relegating eCommerce product search to "niche" field status. We hope to remedy this situation by demonstrating the need for radical innovation in the field of NIR to address the challenges of eCommerce product search.</p><p>One of the most important differences is that for product search, as compared to web search, both the intent and vocabulary of queries tend to be more restricted and "predictable". For example, when typing into a commercial web search engine, people can be expected to search for any type of information. Generally speaking, customers on a particular eCommerce site are looking to satisfy only one type of information need, namely to retrieve a list of SKUs in the catalog meeting certain criteria. Users, both customers and merchants, are highly motivated by economic concerns (time and money spent) to craft queries and SKU text fields to facilitate the surfacing of relevant content, for example, by training themselves to pack popular and descriptive keywords into queries and SKUs. As a consequence, the non-neural baselines, such as tf-idf, tend to achieve higher scores on IR metrics in product search datasets than in web search. An NIR model, when applied to product search as opposed to web search, has a higher bar to clear to justify the added system complexity.</p><p>Further, the lift NIR provides over tf-idf is largely in detecting semantic, non-exact matches. For example, consider a user searching the web with the query "king's castle". This user will likely have her information need met with a document about the "monarch's castle" or even possibly "queen's castle", even if it does not explicitly mention "king".
In contrast, consider the user issuing the query "king bed" on an eCommerce site. She would likely consider a "Monarch bed"<ref type="foot" target="#foot_0">1</ref> irrelevant unless that bed is also "king", and a "queen bed" even more irrelevant. An approach based on word2vec or GloVe <ref type="bibr" target="#b15">[16]</ref> vectors would likely consider the words "king", "queen" and "monarch" all similar to one another based on the distributional hypothesis. The baseline systems deployed on eCommerce websites often deal with semantic similarity by incorporating analyzers with handcrafted synonym lists built in, an approach which is completely infeasible for open-domain web search. This further blunts the positive impact of learned semantic similarity relationships.</p><p>Another difference between "general" IR for web search and IR specialized for eCommerce product search is that in the latter the relevance landscape is simultaneously "flatter" and more "jagged". With regard to "flatness", consider a very popular web search for 2017 (according to <ref type="bibr" target="#b21">[22]</ref>): "how to make solar eclipse glasses". Although the query expresses a very specific search intent, it is likely that the user making it has other, related information needs. Consequently, a good search engine results page (SERP) could be composed entirely of links to instructions on making solar eclipse glasses, which, while completely relevant, could be redundant. A better SERP would be composed of a mixture of the former, and of links to retailers selling materials for making such glasses, to instructions on how to use the glasses, and to the best locations for viewing the upcoming eclipse, all of which are likely relevant to some degree to the user's information need and improve SERP diversity. In contrast, consider the situation for a more typical eCommerce query: "tv remote".
A good product list view (PLV) page for the search "tv remote" would display exclusively SKUs which are tv remotes, and no SKUs which are tvs. Further, all the "tv remote" SKUs are equally relevant to the query, though some might be more popular and engaging than others. With regard to "jaggedness", consider the query "desk chair": any "desk with chair" SKUs would be considered completely irrelevant and do not belong anywhere on the PLV page, in spite of the very small lexical difference between the queries "desk chair" and "desk with chair". In web search, by contrast, documents which are relevant for a given query tend to remain partially relevant for queries which are lexically similar to the original query. If the NIR system is to compete with currently deployed rule-based systems, it is of great importance for a dataset for training and benchmarking NIR models for product search to incorporate such "adversarial" examples prevalently.</p><p>Complications arise when we estimate relevance from the click-through logs of an eCommerce site, as compared to web search logs. Price and image quality are factors comparable in importance to relevance in driving customer click behavior. As an illustration, consider the situation in which the top-ranked result on a SERP or PLV page for a query has a lower click-through rate (CTR) than the documents or SKUs at lower ranks. In the web SERP case, the low CTR sends a strong signal that the document is irrelevant to the query, but in the eCommerce PLV case it may have other explanations, for example, the SKU's being mispriced or having an inferior brand or image. We will address this point in much more detail in Section 4 below.</p><p>Another factor which assumes more importance in the field of eCommerce search versus web search is the difference between using the NIR model for ranking only and using it for retrieval and ranking.
In the former scenario, the model is applied at runtime as the last step of a pipeline or "cascade", where earlier steps of the pipeline use cruder but computationally faster techniques (such as inverted indexes and tf-idf) to identify a small subcorpus (of, say, a few hundred SKUs) as potential entries on the PLV page, and the NIR model only re-ranks this subcorpus (see e.g. <ref type="bibr" target="#b22">[23]</ref>). In the latter scenario, the model, or more precisely, approximate nearest neighbor (ANN) methods acting on vector representations derived from the model, both select and rank the search results for the PLV page from the entire corpus. The latter scenario, retrieval and ranking as one step, is more desirable but (see §§3 and 6 below) poses additional challenges for both algorithm developers and engineers. The possibility of using the NIR model for retrieval, as opposed to mere re-ranking, receives relatively little attention in the web search literature, presumably because of its infeasibility: the corpus of web documents is vast, in the trillions <ref type="bibr" target="#b23">[24]</ref>, compared to only a few million items in each category of eCommerce SKUs.</p></div>
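The "jaggedness" just described is easy to see concretely: a bag-of-words scorer in the spirit of tf-idf assigns nearly identical scores to a relevant and a completely irrelevant SKU title that differ by a single token. A minimal sketch, using plain cosine similarity over token counts (i.e. tf-idf with uniform idf) and hypothetical SKU titles:

```python
from collections import Counter
import math

def cosine_bow(a, b):
    """Cosine similarity of bag-of-words token counts (a stand-in for
    tf-idf with uniform idf; illustration only)."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

# Lexically close query/SKU pairs receive the same score, even though
# "desk with chair" is irrelevant to the query "desk chair".
print(cosine_bow("desk chair", "ergonomic desk chair"))
print(cosine_bow("desk chair", "desk with chair"))
```

Both pairs score identically, so any score threshold that admits the relevant title also admits the irrelevant one; this is exactly the adversarial behavior a product-search dataset must capture.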
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">NEURAL INFORMATION RETRIEVAL MODELS</head><p>In terms of the high-level classification of Machine Learning tasks, which includes such categories as "classification" and "regression", Neural Information Retrieval (NIR) falls under the Learning to Rank (LTR) category (see <ref type="bibr" target="#b9">[10]</ref> for a comprehensive survey). An LTR algorithm uses an objective function defined over possible "queries" q ∈ Q and lists of "documents" d ∈ D to learn a model that, when applied to a new, previously unseen query, uses the features of both the query and documents to "optimally" order the documents for that query. For a more formal definition, see §1.3 of <ref type="bibr" target="#b9">[10]</ref>. As explained in §2.3.2 of <ref type="bibr" target="#b9">[10]</ref>, it is common practice in IR to approximate the general LTR task with a binary classification task. For the sake of simplicity, we follow this so-called "pairwise approach" to LTR. Namely, we first consider LTR algorithms whose orderings are induced from a scoring function f : Q × D → ℝ, in the sense that the ordering for q consists of sorting D in the order d_1, . . . , d_N satisfying</p><formula xml:id="formula_0">f(q, d_1) ≥ f(q, d_2) ≥ · · · ≥ f(q, d_N).</formula><p>Next, we train and evaluate the LTR model by presenting it with triples (q, d_rel, d_irrel) ∈ Q × D × D, where d_rel is deemed to be more relevant to q than d_irrel: the binary classification task amounts to assigning scores f so that f(q, d_rel) &gt; f(q, d_irrel).</p><p>There are many ways of classifying the popular types of NIR models, but the way that we find most fundamental and useful for our purposes is the classification into distributed and local-interaction models. The distinction between the two types of models lies in the following architectural difference.
The first type of model first transforms d and q into "distributed" vector representations η_D(d) ∈ V_D and η_Q(q) ∈ V_Q of fixed dimension, and only after that feeds the representations into an LTR model. The second type of model never forms a fixed-dimensional "distributed" vector representation of document or query, and instead forms an "interaction" matrix representation ⟨i(q), i(d)⟩ of the pair (q, d). The matrix ⟨i(q), i(d)⟩ is sensitive only to local, not global, interactions between q and d, and the model subsequently uses ⟨i(q), i(d)⟩ as the input to the LTR model. More formally, in the notation of Table <ref type="table" target="#tab_0">1</ref>, the score function f takes the following form in the distributed case:</p><formula xml:id="formula_1">f(q, d) = r(η_Q(q), η_D(d)) = r(ν_Q(i(q)), ν_D(i(d))),<label>(1)</label></formula><p>and the following form in the local-interaction case:</p><formula xml:id="formula_2">f(q, d) = r(⟨i(q), i(d)⟩).<label>(2)</label></formula><p>Thus, the interaction between q and d occurs at a global level in the distributed case and at the local level in the local-interaction case. The NIR models are distinguished individually from others of the same type by the choices of building blocks in Table <ref type="table" target="#tab_0">1</ref>. <ref type="foot" target="#foot_1">2</ref> In Tables <ref type="table" target="#tab_2">2 and 3</ref>, we show how to obtain some common NIR models in the literature by making specific choices of the building blocks. Note that since w determines i, and ν, i together determine η, to specify a local-interaction model (Table <ref type="table" target="#tab_2">3</ref>) we only need to specify the mappings w, r, and to specify a distributed NIR model (Table <ref type="table" target="#tab_1">2</ref>), we need only additionally specify ν.
We remark that the distributed word embeddings in the "Siamese" and Kernel Pooling models are implicitly assumed to be normalized to have 2-norm one, which allows the application of the dot product ⟨·, ·⟩ to compute the cosine similarity.</p><p>The classification of NIR models in Tables <ref type="table" target="#tab_2">2 and 3</ref> is not meant to be an exhaustive list of all models found in the literature: for a more comprehensive exposition, see for example the baselines section §4.3 of <ref type="bibr" target="#b25">[26]</ref>. Laying out the models in this fashion facilitates a determination of which parts of the "space" of possible models remain unexplored. For example, we see the possibility of forming a new "hybrid" by using the distributed word embeddings layer w from the Kernel Pooling model <ref type="bibr" target="#b25">[26]</ref>, followed by the representation layer from CLSM <ref type="bibr" target="#b19">[20]</ref>; we have called this distributed model, apparently nowhere considered in the existing literature, "Siamese".</p><p>Substituting into (1), we obtain the following formula for the score f of the Siamese model:</p><formula xml:id="formula_3">f(q, d) = ⟨mlp_1 ∘ cnn_1(i(q)), mlp_1 ∘ cnn_1(i(d))⟩.<label>(3)</label></formula><p>As an example missing from the space of possible local-interaction models, we have not seen any consideration of the "hybrid" architecture obtained by using the local-interaction layer from the Kernel-Pooling model and a position-aware LTR layer such as a convolutional neural net. Substituting into (2), the score function would be</p><formula xml:id="formula_4">f(q, d) = mlp ∘ cnn(⟨i(q), i(d)⟩),<label>(4)</label></formula><p>where in both (<ref type="formula">3</ref>) and (<ref type="formula" target="#formula_4">4</ref>), i is the embedding layer induced by a trainable word embedding w.
These are just a couple of the new architectures which could be formed in this way: we highlight these two because they seem especially promising.</p><p>The main benefit of distinguishing between distributed and local-interaction models is that a distributed architecture enables retrieval-and-ranking, whereas a local-interaction architecture restricts the model to re-ranking the results of a separate retrieval mechanism. See the last paragraph of Section 2 for an explanation of this distinction. From a computational complexity perspective, the reason for this is that the task of precomputing and storing all the representations involved in computing the score is, for a distributed model, O(|Q| + |D|) in space and time, but for a local-interaction model, O(|Q||D|). (Note that we need to compute the representations which are the inputs to r, namely, in the distributed case, the vectors η_Q(q), η_D(d), and in the local-interaction case, the matrices ⟨i(q), i(d)⟩: computing the variable-length representations i(d) and i(q) alone will not suffice in either case.) Further, in the distributed model case, assuming the LTR function r(·) is chosen to be the dot-product ⟨·, ·⟩, the retrieval step can be implemented by ANN. This is the reason for the choice of r as ⟨·, ·⟩ in defining the "Siamese" model. For practical implementations of ANN using Elasticsearch at industrial scale, see <ref type="bibr" target="#b18">[19]</ref> for a general text-data use-case, and <ref type="bibr" target="#b13">[14]</ref> for an image-data use-case in an eCommerce context. We will return to this point in §7.</p></div>
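The contrast between the two score forms can be made concrete with a small numerical sketch. All weights, dimensions, and pooling choices below are illustrative assumptions (untrained random stand-ins for the learned components, not the paper's trained models); the local-interaction branch follows the kernel-pooling soft-TF construction of [26]:

```python
import numpy as np

rng = np.random.default_rng(0)
E, N_Q, N_D = 300, 10, 50       # embedding dim and token-axis dims (illustrative)

def embed(n_tokens):
    """Stand-in for w: unit-norm word vectors for n_tokens tokens."""
    v = rng.normal(size=(E, n_tokens))
    return v / np.linalg.norm(v, axis=0)

def pad(M, n):
    """Stand-in for i(.): zero-pad along the token axis to width n."""
    out = np.zeros((E, n))
    out[:, : M.shape[1]] = M
    return out

iq = pad(embed(4), N_Q)          # i(q): a 4-token query
idoc = pad(embed(23), N_D)       # i(d): a 23-token SKU text

# Distributed score, eq. (1): fixed-dimension representations, then dot product.
# A random linear map + tanh over sum-pooled tokens stands in for the learned
# representation mapping (e.g. mlp_1 composed with cnn_1 in the "Siamese" model).
W = rng.normal(size=(128, E)) / np.sqrt(E)
def nu(M):
    return np.tanh(W @ M.sum(axis=1))
f_distributed = nu(iq) @ nu(idoc)

# Local-interaction score, eq. (2): the N_Q x N_D interaction matrix
# <i(q), i(d)> feeds the LTR layer; here the kernel-pooling soft-TF of [26],
# sum_i log K(M_i) with RBF kernels of means mu_k, then a random linear layer.
M = iq.T @ idoc                                   # cosine interactions
mus = np.linspace(-0.9, 0.9, 10)                  # kernel means
K = np.exp(-((M[:, :, None] - mus) ** 2) / (2 * 0.1 ** 2)).sum(axis=1)
soft_tf = np.log(np.maximum(K, 1e-10)).sum(axis=0)
f_local = rng.normal(size=mus.size) @ soft_tf     # stand-in for mlp_1
```

Note that `nu(idoc)` can be computed once per SKU and indexed for ANN retrieval, while the interaction matrix `M` must be recomputed for every (query, SKU) pair, which is the complexity asymmetry discussed above.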
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">FROM CLICK MODELS TO TASK MODELS 4.1 Click Models</head><p>The purpose of click models is to extract, from observed variables, an estimate of latent variables. The observed variables generally include the sequence of queries, PLVs/SERPs and clicks, and may also include hovers, add-to-carts (ATCs), and other browser interactions recorded in the site's web-logs. The main latent variables are the relevance and attractiveness of an item to a user. A click model historically takes the form of a probabilistic graphical model called a Bayesian Network (BN), whose structure is represented by a Directed Acyclic Graph (DAG), though more recent works have introduced other types of probabilistic model, including recurrent <ref type="bibr" target="#b0">[1]</ref> and adversarial <ref type="bibr" target="#b12">[13]</ref> neural networks. The click model embodies certain assumptions about how users behave on the site. We are going to focus on the commonalities of the various click models rather than their individual differences, because our aim in discussing them is to motivate another type of model called task models, which will be used to construct the experimental dataset in Section 5.</p><p>To model the stochasticity inherent in the user browsing process, click models adopt the machinery of probability theory and conceptualize the click event, as well as related events such as examinations, ATCs, etc., as the outcome of a (Bernoulli) random variable. The three fundamental events for describing user interaction with a SKU u on a PLV are denoted as follows:</p><p>(1) The user examining the SKU, denoted by E_u;</p><p>(2) The user being sufficiently attracted by the SKU "tile" to click it, denoted by A_u;</p><p>(3) The user clicking on the SKU, denoted by C_u.</p><p>The most basic assumption relating these three events is the following Examination Hypothesis (Equation (3.4), p. 
10 of <ref type="bibr" target="#b1">[2]</ref>):</p><formula xml:id="formula_5">(EH) C_u = 1 ⇔ E_u = 1 and A_u = 1.<label>(5)</label></formula><p>The EH is universally shared by click models, in view of the observation that any click which the user makes without examining the SKU is just "noise", because it cannot convey any information about the SKU's relevance. In addition, most of the standard click models, including the Position Based (PBM) and Cascade (CM) Models, also incorporate the following Independence Hypothesis:</p><formula xml:id="formula_7">(IH) E_u ⊥⊥ A_u.<label>(6)</label></formula><p>The IH appears reasonable because the attractiveness of a SKU represents an inherent property of the query-SKU pair, whereas whether the SKU is examined is an event contingent on a particular presentation on the PLV and the individual user's behavior. This means that E_u and A_u should not be able to influence one another, as the IH claims. Adding the following two ingredients to the EH (5) and the IH (6), we obtain a functional, if minimal, click model: (1) a probabilistic description of how the user interacts with the PLV, i.e., at a minimum, a probabilistic model of the variables E_u; (2) a parameterization of the distributions underlying the variables A_u. We can specify a simple click model by carrying out (1)-(2) as follows:</p><p>(1) P(E_u = 1) depends only on the rank r of u, and is given by a single parameter for each rank, so that</p><formula xml:id="formula_8">P(E_u = 1) =: γ_r ∈ [0, 1]; (2) P(A_u = 1 | E_u = 1) =: α_{u,q} ∈ [0, 1],</formula><p>where there is an independent Bernoulli parameter α_{u,q} for each pair of query and SKU. The click model obtained by specifying (1)-(2) as above is called the PBM. For a template-based representation of the PBM, see Figure <ref type="figure" target="#fig_0">1</ref>. 
For further detail on the PBM and other BN click models, see <ref type="bibr" target="#b1">[2]</ref>, and for interpretation of template-based representations of BNs, see Chapter 6 of <ref type="bibr" target="#b6">[7]</ref>.</p><p>The parameter estimation of click model BNs relies on the following consequence of the EH (5):</p><formula xml:id="formula_9">{E_u = E_{u'} = 1 &amp; C_u = 0 &amp; C_{u'} = 1} ↝ α_{u',q} &gt; α_{u,q},<label>(7)</label></formula><p>where in (7), the symbol ↝ means "increases the likelihood that". This still leaves open the question of how to connect attractiveness to relevance, which click models need to do in order to fulfill their purpose, namely extracting relevance judgments from the click logs. The simplest way of making the connection is to assume </p></div>
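The PBM's factorization can be checked with a short simulation: under the EH and IH, the click probability at rank r factors as γ_r · α_{u,q}, so with the examination curve known, attractiveness is recovered by de-biasing observed CTRs. The SKU names, γ values and α values below are illustrative assumptions:

```python
import random
random.seed(7)

# Position-Based Model: a click occurs iff the SKU is examined (prob gamma_r,
# depending only on rank r) AND attractive (prob alpha_{u,q}); E and A are
# independent (IH), and C = 1 iff E = 1 and A = 1 (EH).
gamma = [1.0, 0.6, 0.3]                                # examination prob by rank
alpha = {"sku_a": 0.8, "sku_b": 0.5, "sku_c": 0.2}     # latent attractiveness

def simulate_plv(ranking, n_sessions):
    clicks = {u: 0 for u in ranking}
    for _ in range(n_sessions):
        for r, u in enumerate(ranking):
            examined = random.random() < gamma[r]
            attracted = random.random() < alpha[u]
            if examined and attracted:                  # the EH
                clicks[u] += 1
    return clicks

ranking = ["sku_a", "sku_b", "sku_c"]
n_sessions = 200_000
clicks = simulate_plv(ranking, n_sessions)

# With gamma known, alpha is identified by de-biasing the observed CTR:
# P(C_u = 1) = gamma_r * alpha_{u,q}  =>  alpha_{u,q} = CTR_u / gamma_r.
for r, u in enumerate(ranking):
    ctr = clicks[u] / n_sessions
    print(u, round(ctr / gamma[r], 2))
```

In practice γ_r is itself latent and is estimated jointly with α_{u,q} (e.g. by EM over the BN), but the de-biasing identity above is the core of the inference.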
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Symbol Meaning</head><p>Table <ref type="table" target="#tab_0">1</ref> collects the building blocks used to specify the models:</p><formula xml:id="formula_10">⟨·, ·⟩: dot-product, ⟨A, B⟩ = A · B^T for vectors or matrices; A ∈ ℝ^{m×k}, B ∈ ℝ^{n×k} ⇒ ⟨A, B⟩ ∈ ℝ^{m×n}, ⟨A, B⟩_{i,j} = Σ_{ℓ=1}^{k} A_{i,ℓ} B_{j,ℓ}.
[. . .]: matrix augmentation (concatenation), A_1, . . . , A_K ∈ ℝ^{M×N} ⇒ [A_1, . . . , A_K] ∈ ℝ^{M×KN}.
mlp_k: fully-connected k-layer perceptron, e.g. mlp_2(x) = tanh(W_2 · tanh(W_1 · x + b_1) + b_2), with weight matrices W_i.
cnn_k: k-layer convolutional nn (with possible max-pooling), e.g. cnn_1(x) = tanh(W_c · x), with convolutional weight matrix W_c.
w: word embedding, e.g. the word hashing of [5], or a mapping derived from word2vec, token t ↦ w(t) ∈ ℝ^{300}.
i: local document embedding, i : [t_1, . . . , t_k] ↦ [w(t_1)^T, . . . , w(t_{k'})^T, 0^T, . . . , 0^T] ∈ ℝ^{300×N}, with k' = min(k, N) embedded tokens followed by N − k' zero columns.
N_{Q|D}: dimension of the local document embedding along the token axis; N_D = 1000, N_Q = 10 for the Duet model.
V_{Q|D}: vector space for distributed representations of D or Q; V_Q = ℝ^{300}, V_D = ℝ^{300×899} for the Duet model; V denotes the common space when V_Q = V_D.
ν_{Q|D}: parameterized mapping of i({q|d}) into V_{Q|D}, e.g. mlp; cnn; mlp ∘ cnn; ν denotes the common mapping when ν_Q = ν_D.
η_{Q|D}: global document embedding of q or d into V_{Q|D}, the composition ν_{Q|D} ∘ i; η denotes the special case when V_Q = V_D and η_Q = η_D.
r: LTR model mapping ⟨i(q), i(d)⟩ or (η_Q(q), η_D(d)) ↦ ℝ; ⟨·, ·⟩ [20]; mlp_3 ∘ Σ_1 ∘ ⊙ [12]; mlp_1 ∘ φ [26].
Σ_1: summation along the 1-axis, Σ_1 : ℝ^{m×n} → ℝ^m, Σ_1(A)_i = Σ_{j=1}^{n} A_{i,j}.
⊙: Hadamard (elementwise) product, A, B ∈ ℝ^{m×n} ⇒ A ⊙ B ∈ ℝ^{m×n}; with operator overloading, broadcasting followed by Hadamard: A ∈ ℝ^m, B ∈ ℝ^{m×n} ⇒ A ⊙ B = [A, . . . , A] ⊙ B.
K: kernel-pooling map of [26], K(M_i) = [K_1, . . . , K_K], with RBF kernels K_k of mean µ_k.
φ: kernel-pooling composed with the soft-TF map of [26], M ∈ ℝ^{N_Q×N_D} ⇒ φ(M) = Σ_{i=1}^{N_Q} log K(M_i).</formula><p>Table <ref type="table" target="#tab_1">2</ref> specifies the distributed models by their choices of w, ν_{Q|D}, dim V_Q, dim V_D and r:</p><formula xml:id="formula_16">DSSM [5]: "word hashing" of [5], 30,000; ν_Q = ν_D = mlp_3; 128, 128; r = ⟨·, ·⟩.
CLSM [20]: "word hashing" of [5], 30,000; ν_Q = ν_D = mlp_1 ∘ cnn_1; 128, 128; r = ⟨·, ·⟩.
Duet [12] (distributed side): "word hashing" of [5], 2000; ν_Q = mlp_1 ∘ cnn_1, ν_D = cnn_2; 300, 300 × 899; r = mlp_3 ∘ Σ_1 ∘ ⊙.
"Siamese" (ours): distributed word embeddings, 300; ν_Q = ν_D = mlp_1 ∘ cnn_1; 32-512, 32-512; r = ⟨·, ·⟩.</formula><p>The assumption in question introduces a satisfaction variable S_u for each SKU and posits the following Click Hypothesis:</p><formula xml:id="formula_18">(CH) C_u = 0 ⇒ S_u = 0.<label>(8)</label></formula><p>The CH is formally analogous to the EH, (5): just as the EH says that only u for which E_u = 1 are eligible to have C_u = 1, regardless of their inherent attractiveness α_{u,q}, the CH says that only u for which C_u = 1 are eligible to have S_u = 1, regardless of their inherent relevance σ_{u,q}. The justification of the CH is that the user cannot know if the document is truly relevant, and thus cannot have her information need satisfied by the document, unless she clicks on the document's link displayed on the SERP. According to the CH, in order to completely describe S_u, we need only specify its conditional distribution when C_u = 1. 
We will follow the above click models by specifying (in continuation of ( <ref type="formula" target="#formula_1">1</ref>)-( <ref type="formula" target="#formula_2">2</ref>) above):</p><p>(3) P(S_u = 1 | C_u = 1) =: σ_{u,q} ∈ [0, 1], where there is an independent Bernoulli parameter σ_{u,q} for each pair of query and SKU.</p><p>Since S_u is latent, to allow estimation of the parameters σ_{u,q}, we need to connect S_u to the observed variables C_u, and the connection is generally made through the examination variables {E_{u'}}, by an assumption such as the following:</p><formula xml:id="formula_20">P(E_{u'} = 1 | S_u = 1) ≠ P(E_{u'} = 1 | S_u = 0),</formula><p>where u' ≠ u is a SKU encountered after u in the process of browsing the SERP/PLV.</p><p>Our reason for highlighting the CH is that we have found that the CH limits the applicability of these models in practice. Consider the following Implication Assumption:</p><formula xml:id="formula_21">IA: σ_{u,q} is high ⇒ α_{u,q} is high. (<label>9</label></formula><formula xml:id="formula_22">)</formula><p>Clearly the IA is not always true, even in the SERP case, because, for example, the search engine may have done a poor job of producing the snippet for a relevant document. But in order for parameter tuning to converge in a data-efficient manner, the IA must be predominantly true. The reason for this is that, according to the CH (8), the variable C_u acts as a "censor" deleting random samples for the variable S_u. For a fixed number N of observations of a SKU with fixed α_{u,q}, the effective sample size for the empirical estimate σ̂_{u,q} is approximately N · α_{u,q}, so that as α_{u,q} → 0⁺, the variance of σ̂_{u,q} is scaled up by a factor of 1/α_{u,q} → ∞. See Chapter 19 of <ref type="bibr" target="#b6">[7]</ref> for a more comprehensive discussion of values missing at random and associated phenomena.
We have already discussed in §2 that relevance of a SKU is a necessary but not sufficient condition for a high CTR, and consequently the IA, (9), is frequently violated in practice for SKUs. Empirically, we have observed poor performance of most click models (other than the PBM and UBM) on eCommerce site click-through logs, and we attribute this to the failure of the IA. Thus a suitable modification of the usual click model framework is needed to extract relevance judgments from click data. This is the subject of §4.3 below.</p></div>
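The variance blow-up caused by click censoring can be illustrated with a small simulation. This is a sketch with illustrative parameter values (not taken from the paper): satisfaction is drawn only for clicked impressions, so the effective sample size for estimating σ is roughly N · α rather than N.

```python
import random

def estimate_sigma(alpha, sigma, n_impressions, rng):
    """Estimate sigma = P(S=1 | C=1) when satisfaction S is observable only
    after a click C. Under the Click Hypothesis, every unclicked impression
    is a censored (discarded) sample of S."""
    satisfied = clicks = 0
    for _ in range(n_impressions):
        if alpha > rng.random():       # click occurs with probability alpha
            clicks += 1
            if sigma > rng.random():   # satisfaction observed only post-click
                satisfied += 1
    return satisfied / max(clicks, 1), clicks

rng = random.Random(0)
est, effective_n = estimate_sigma(alpha=0.05, sigma=0.7, n_impressions=10_000, rng=rng)
# The effective sample size is roughly N * alpha = 500, not N = 10,000, so the
# variance of the estimate is inflated by a factor of about 1 / alpha.
```

As α shrinks, `effective_n` shrinks proportionally, which is exactly the 1/α variance inflation described above.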
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Relevance versus Attractiveness</head><p>At this point, we take a step back from our development of the task model to address a question that the reader may already be asking: given the complications of extracting relevance from click logs in the eCommerce setting, why not completely forgo modeling relevance and instead model the attractiveness of SKUs directly? After all, it would seem that the goal of search ranking in eCommerce is to present the most engaging PLV, which will result in the most clicks, ATCs, and purchases. If a model can be trained to predict SKU attractiveness directly from query and SKU features in an end-to-end manner, that would seem to be sufficient, and it decreases the motivation to model relevance separately. There are at least two practical reasons for wanting to model relevance separately from attractiveness. The first is that relevance is the most stable among the factors that affect attractiveness, namely price, customer taste, etc., all of which vary significantly over time. Armed with both a model of relevance and a separate model of how relevance interacts with the more transient factors to impact attractiveness, the site manager can estimate the effects on attractiveness that can be achieved by pulling the various levers available to her, i.e., by modifying the transient factors (changing the price, or attempting to alter customer taste through different marketing). The second is that the textual (word, query, and SKU-document) representations produced by modeling relevance, for example by applying any of the distributed models from §3, have potential applications in related areas such as recommendation, synonym identification, and automated ontology construction. Allowing transient factors correlated with attractiveness, such as the price and changing customer taste, to influence these representations would skew them in unpredictable and undesirable ways, limiting their utility.
We will return to the point of non-search applications of the representations in §7.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Task Models</head><p>Instead of using a click model that considers each query-PLV independently, we will use a form of behavioral analysis that groups search requests into tasks. The point of view we are taking is similar to the one adopted in §4 of <ref type="bibr" target="#b27">[28]</ref> and subsequent works on the Task-centric Click Model (TCM). Similar to the TCM of <ref type="bibr" target="#b27">[28]</ref>, we assume that when a user searches on several semantically related queries in the same session, the user goes through a process of successively querying, examining results, possibly with clicks, and refining the query until it matches her intent. We sum up the process in the flow chart, Figure <ref type="figure" target="#fig_1">2</ref>, which corresponds to both Figure <ref type="figure" target="#fig_1">2</ref>, the "Macro Model", and Figure <ref type="figure" target="#fig_2">3</ref>, the "Micro Model", in <ref type="bibr" target="#b27">[28]</ref>.</p><p>There are several important ways in which we have simplified the analysis and model as compared with the TCM of <ref type="bibr" target="#b27">[28]</ref>. First, we do not consider the so-called "duplicate bias" or the associated freshness variable of a SKU in our analysis, but we do indicate explicitly that the click variable depends on both relevance and other factors of SKU attractiveness. Second, we do not consider the last query in the query chain, or the clicks on the last PLV, as being in any way special. Third, <ref type="bibr" target="#b27">[28]</ref> perform inference and parameter tuning on the model, whereas in this work, at least, we use the model for a different purpose (see below). As a result, <ref type="bibr" target="#b27">[28]</ref> need to adopt a particular click model inside the TCM (called the "micro model", or PLV/SERP interaction model) to fully specify the TCM.
For our purposes the PLV interaction model could remain unspecified, but for the sake of concreteness, we specify a particular model, similar to the PBM, to govern the user's interaction with the PLV inside the TCM in Figure <ref type="figure" target="#fig_2">3</ref>, which is comparable to Figure <ref type="figure">4</ref> of <ref type="bibr" target="#b27">[28]</ref>. Note that in comparison to their TCM model we have added the A (attractiveness) factor, eliminated the "freshness" and "previous examination" factors, and otherwise just made some changes of notation, namely using S instead of R to denote relevance/satisfaction, in agreement with the notation of §4.1, and the more standard r instead of j for "rank". The important new variables present in the TCM are the following two relating to the session flow, rather than the internal search request flow (continuing the numbering (1)-(3) from §4.1):</p><p>(4) The user's intent being matched by query i, denoted by M_i;</p><p>(5) The user submitting another search request after the ith query session, denoted by N_i.</p><p>A complete specification of our TCM consists of the template-based DAG representation of Figure 3 together with the following parameterizations (compare ( <ref type="formula">16</ref>)-( <ref type="formula">24</ref>) of <ref type="bibr" target="#b27">[28]</ref>):</p><formula xml:id="formula_23">P(M_i = 1) = τ_1 ∈ [0, 1]; P(N_i = 1 | M_i = 1) = τ_2 ∈ [0, 1]; P(E_{i,r} = 1) = γ_r; P(A_{i,r} = 1) = α_{u,q}; P(S_{i,r} = 1) = σ_{u,q}; M_i = 0 ⇒ N_i = 1; C_{i,r} = 1 ⇔ M_i = 1, E_{i,r} = 1, S_{i,r} = 1, A_{i,r} = 1.</formula><p>Resuming our discussion from §3, we are seeking a way to extract programmatically from the click logs a collection of triples (q, d_rel, d_irrel) ∈ Q × D × D where σ_{d_rel,q} &gt; σ_{d_irrel,q}, which is sufficiently "adversarial".
The notion of "adversarial" which we adopt is that, first, q belongs to a task (multi-query session) in the sense of the TCM: q = q_{i'} was preceded by a "similar" query q_i, and d_irrel, while irrelevant to q_{i'}, is relevant to q_i. Note that this method of identifying the triples, at least heuristically, has a much higher chance of producing adversarial examples, because the similarity between q_i and q_{i'} implies with high probability that σ_{d_rel,q_{i'}} − σ_{d_irrel,q_{i'}} is much smaller than would be expected if we chose d_irrel from the SKU collection at random. Leaving aside the question, for the moment, of how to define search sessions (a point we will return to in Section 5), we can begin our approach to the construction of the triples by defining criteria which make it likely that the user's true search intent is expressed by q_{i'}, but not by q_i (recall again that i &lt; i', meaning q_i is the earlier of the two queries):</p><p>(1) The PLV for q_i had no clicks: C_{q_i,u_r} = 0, r = 1, . . . , n.</p><p>(2) The PLV for q_{i'} had (at least) one click, on rank r': C_{q_{i'},u_{r'}} = 1. It turns out that relying on these criteria alone is too naïve. The problem is not the relevance of u_{q_{i'},r'} to q_{i'}, but the supposed irrelevance of the u_{q_i,r} to q_{i'}. In the terms of the TCM, this is because criterion (1), absence of any click on the PLV for q_i, does not in general imply that M_i = 0. In <ref type="bibr" target="#b27">[28]</ref>, the authors address this issue by performing parameter estimation using the click logs holistically. We may adopt a version of this approach in future work. In the present work, we take a different approach, which is to examine the content of q_i, q_{i'} and add a filter (condition) that makes it much more likely that M_i = 0, namely (3) The tokens of q_{i'} properly contain the tokens of q_i.
An example of a (q_i, q_{i'}) which satisfies (3) is (bookshelf, bookshelf with doors), whereas an example which does not satisfy (3) is (wooden bookshelf, bookshelf with doors): see the last line of Table <ref type="table" target="#tab_3">4</ref>. The idea behind (3) is that in this situation, the user is refining her query to better match her true search intent, so we apply the term "refinement" either to q_{i'} or to the pair (q_i, q_{i'}) as a whole. This addresses the issue that we need M_i = 0 to conclude that the u_{q_i,r} are likely irrelevant to the query q_{i'}.</p><p>However, there are still a couple of further issues with using the triple (q_{i'}, u_{q_{i'},r'}, u_{q_i,r}) as a training example. The first stems from the observation that the lack of a click on u_{q_i,r} provides evidence for its irrelevance (to the user's true intent q_{i'}) only if it was examined, E_{u_{q_i,r}} = 1. This is the same observation as (7), but in the context of TCMs. We address this by adding a filter:</p><p>(4) The rank r of u_{q_i,r} satisfies r ≤ ρ, a small integer parameter, implying that r is relatively close to 1. The second is an "exceptional" type of situation where M_i = 0, but certain u_{q_i,r} on the PLV for q_i are relevant to q_{i'}. Consider as an example: the user issues the query q_i "rubber band", then the query q_{i'} "Acme rubber band", then clicks on an SKU u_{q_{i'},r'} = u_{q_i,r} she has previously examined on the PLV for q_i. This may indicate that the PLV for q_i actually contained some results relevant to q_{i'}, and examining such results reminded the user of her true search intent. In order to filter out from the dataset the noise which would result from such cases, we add this condition:</p><p>(5) u_{q_{i'},r'} does not appear anywhere in the PLV for q_i.</p></div>
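Conditions (1)-(5) above can be combined into a single log-processing routine. The sketch below is our own simplified, in-memory rendition; the record layout and helper names are illustrative (the paper's actual implementation runs over full click logs in Apache Spark):

```python
def is_refinement(q_earlier, q_later):
    """Condition (3): the later query's tokens properly contain the earlier's."""
    a, b = set(q_earlier.split()), set(q_later.split())
    return b > a  # proper superset

def extract_triples(session, rank_cutoff=3):
    """Yield (query, relevant_sku, irrelevant_sku) triples from one session.

    `session` is a time-ordered list of search requests, each a dict with keys
    "query", "plv" (ranked list of SKU ids), and "clicks" (set of clicked ids).
    `rank_cutoff` is the small integer bound of condition (4).
    """
    for i, earlier in enumerate(session):
        if earlier["clicks"]:        # condition (1): earlier PLV must have no clicks
            continue
        for later in session[i + 1:]:
            if not is_refinement(earlier["query"], later["query"]):  # condition (3)
                continue
            for d_rel in later["clicks"]:      # condition (2): a click on the later PLV
                if d_rel in earlier["plv"]:    # condition (5): clicked SKU absent earlier
                    continue
                # condition (4): take only the top-ranked unclicked SKUs as d_irrel
                for d_irrel in earlier["plv"][:rank_cutoff]:
                    yield (later["query"], d_rel, d_irrel)

session = [
    {"query": "bookshelf", "plv": ["sku_a", "sku_b", "sku_c", "sku_d"], "clicks": set()},
    {"query": "bookshelf with doors", "plv": ["sku_e", "sku_f"], "clicks": {"sku_e"}},
]
triples = list(extract_triples(session))
# Pairs "bookshelf with doors"/"sku_e" against each of sku_a, sku_b, sku_c.
```

Each eligible click thus yields up to `rank_cutoff` training triples, matching the construction in §5.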
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">DATASET CONSTRUCTION</head><p>We group consecutive search requests by the same user into one session. Within each session, we extract semantically related query pairs (q_i, q_{i'}), with q_i submitted before q_{i'}. The query q_{i'} is considered semantically related to the previously submitted q_i if they satisfy condition (3) above in §4.3. We keep only the (q_i, q_{i'}) satisfying ( <ref type="formula" target="#formula_1">1</ref>)-(3) in §4.3. As explained in §4.3, if in addition the clicked SKU u_{q_{i'},r'} and the unclicked SKUs u_{q_i,r} satisfy (4)-( <ref type="formula" target="#formula_6">5</ref>), we have high confidence that M_i = 0 and u_{q_i,r} does not satisfy the user's information need, whereas M_{i'} = 1 and u_{q_{i'},r'} does. Therefore, based on our heuristic, we can construct up to ρ training examples for each eligible click, of the form (q, d_rel, d_irrel) := (q_{i'}, u_{q_{i'},r'}, u_{q_i,r}), r = 1, . . . , ρ.</p><p>For our experiments, we processed logged search data on Jet.com from April to November 2017. We filtered to only search requests related to the electronics and furniture categories, so as to enable fast experimentation. We implemented our construction method in Apache Spark <ref type="bibr" target="#b26">[27]</ref>. Our final dataset consists of around 3.6 million examples, with 130k unique q, 131k unique d_rel, and 275k unique d_irrel, with 68k SKUs appearing as both d_rel and d_irrel in different examples.
Table <ref type="table" target="#tab_3">4</ref> shows some examples extracted by our method.</p><p>In order to compensate for the relatively small number of unique q in the dataset and the existence of concept drift in eCommerce, we formed the train-validate-test split in the following manner, rather than following the usual practice in ML of using random splits: we reserved the first six months of data for training, the seventh month for validation, and the eighth (final) month for testing. Further, we filtered out from the validation set all examples with a q seen in training, and from the test set all examples with a q seen in validation or training. This turned out to result in a (training : validation : test) ratio of (75 : 2.5 : 1). We now give an overview of the principles used to form the SKU (document) text as actually encoded by the models. The query text is just whatever the user types into the search field. The SKU text is formed by concatenating the SKU title with the text extracted from some other fields associated with the SKU in a database, some of which are free-text description, and others of which are more structured in nature. Finally, both the query and SKU texts are lowercased and normalized to clean up certain extraneous elements, such as HTML tags and non-ASCII characters, and to expand some abbreviations commonly found in the SKU text. For example, an apostrophe immediately following numbers was turned into the token "feet". No models and no stemming or NLP analyzers, only regular expressions, are used in the text normalization.</p></div>
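The regex-only normalization described above can be sketched as follows. These particular rules are our own illustrative reconstruction (the production rule set, including HTML-tag stripping and abbreviation expansion, is not published); only the apostrophe-to-"feet" rule is stated in the text.

```python
import re

def normalize(text):
    """Regex-only text cleanup in the spirit described above."""
    text = text.lower()
    text = re.sub(r"(\d+)\s*'", r"\1 feet ", text)   # e.g. 6' becomes "6 feet"
    text = re.sub(r"[^\x00-\x7f]", " ", text)        # drop non-ASCII characters
    return re.sub(r"\s+", " ", text).strip()         # collapse whitespace

normalize("Caf\u00e9 Ladder 6'")  # -> "caf ladder 6 feet"
```

Because the pipeline is pure regular expressions, it is cheap enough to apply identically to queries at request time and to SKU texts at indexing time.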
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">EXPERIMENTAL RESULTS</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1">Details of Model Training</head><p>All of the supervised IR models were trained on one NVIDIA Tesla K80 GPU using PyTorch <ref type="bibr" target="#b14">[15]</ref>, with the margin ranking loss:</p><formula xml:id="formula_24">L(f(q, d_rel), f(q, d_irrel)) = max(0, f(q, d_irrel) − f(q, d_rel) + 1).</formula><p>We used the Adam optimizer <ref type="bibr" target="#b5">[6]</ref>, with an initial learning rate of 1×10⁻⁴, using PyTorch's built-in learning rate scheduler to decrease the learning rate in response to a plateau in validation loss. For the kernel-pooling model, it takes about 8 epochs, with a run time of about 2.5 hours assuming a batch size of 512, for the learning rate to reach 1×10⁻⁶, after which further decreases in the learning rate do not result in significant validation accuracy improvements. Also, for the kernel-pooling model, we explored the effect of truncating the SKU text at various lengths. In particular, we tried keeping the first 32, 64, and 128 tokens of the text, and we report the results below. For the CLSM and Siamese models, we tried changing both the dimension of V, the distributed query/document representations, and the number of channels of the cnn layer (the dimension of the input to mlp_1), using values evenly spaced in logarithmic space between 32 and 512. We also tried 3 different values of the dropout rate in these models. For the kernel-pooling model, we used word vectors from an unsupervised pre-training step, using the training/validation texts as the corpus. As our word2vec algorithm, we used the CBOW and Skip-gram algorithms as implemented in Gensim's <ref type="bibr" target="#b16">[17]</ref> FastText wrapper. We observed no improvement in the performance of the relevance model from changing the choice of word2vec algorithm or altering the word2vec hyperparameters from their default values.
For the tf-idf baseline, we also used the Gensim library, without changing any of the default settings, to compile the idf (inverse document frequency) statistics.</p></div>
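The margin ranking loss above can be restated in a few lines of plain Python (training itself used PyTorch, whose `nn.MarginRankingLoss` implements the batched, differentiable version; this scalar sketch is just the formula):

```python
def margin_ranking_loss(score_rel, score_irrel, margin=1.0):
    """Pairwise hinge loss: zero once the relevant SKU outscores the
    irrelevant one by at least `margin`, linear in the violation otherwise."""
    return max(0.0, score_irrel - score_rel + margin)

margin_ranking_loss(2.5, 0.5)   # relevant wins by 2.0 ≥ margin, so loss is 0.0
margin_ranking_loss(0.2, 0.6)   # relevant loses, so loss is about 1.4
```

A pairwise error, as reported in Table 5, corresponds exactly to the case where the argument of `max` is above the margin, i.e. the model scores the irrelevant SKU higher.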
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2">Error Rate Comparison</head><p>We report our main results, the error rates of the trained relevance models, in Table <ref type="table" target="#tab_5">5</ref>. The most notable finding is that the kernel-pooling model, our main representative of the local-interaction class, showed a sizable improvement over the baseline, while in our experiments thus far, none of the distributed representation models even matched the baseline. We found that the distributed models had adequate capacity to overfit the training data, but they are not generalizing well to the validation/test data. We will discuss ongoing efforts to correct this in §7 below. Another notable finding is that there is an ideal truncation length of the SKU texts for the kernel-pooling model, in our case around 2⁶ = 64 tokens, which allows the model enough information without introducing too much extraneous noise. Finally, following <ref type="bibr" target="#b25">[26]</ref>, we also evaluated a variant of the kernel-pooling model where the word embeddings were "frozen", i.e., fixed at their initial word2vec values. Interestingly, unlike what was observed in <ref type="bibr" target="#b25">[26]</ref>, we observed only a modest degradation in performance, as measured by overall error rate, for the model using the frozen word2vec embeddings as compared with the full model. Based on the qualitative analysis of the learned embeddings in §6.3, we believe it is still worthwhile to train the full model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.3">Pre-trained versus fine-tuned embeddings</head><p>Similarly to <ref type="bibr" target="#b25">[26]</ref>, we found that the main effect of the supervised retraining of the word embeddings was to decouple certain word pairs. Corresponding to Table <ref type="table">8</ref> of their paper, we have listed some examples of the moved word pairs in Table <ref type="table">7</ref>. The training decouples roughly twice as many word pairs as it moves closer together. In spite of the relatively modest gains to overall accuracy from the fine-tuning of embeddings, we believe this demonstrates the potential value of the fine-tuned embeddings for other search-related tasks.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Template-based representation of the PBM click model.</figDesc><graphic coords="6,114.53,83.68,118.79,100.80" type="bitmap" /></figure>
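The decoupling analysis of §6.3 amounts to comparing pairwise cosine similarities before and after fine-tuning. The sketch below uses toy two-dimensional vectors for readability; real inputs would be the 300-dimensional pre-trained and fine-tuned embedding matrices, and the threshold is illustrative.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def moved_pairs(before, after, threshold=0.3):
    """Word pairs whose cosine similarity changed by more than `threshold`
    between the pre-trained ("before") and fine-tuned ("after") embeddings.
    Negative deltas are decoupled pairs; positive deltas moved together."""
    words = sorted(before)
    moved = []
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            delta = cosine(after[w1], after[w2]) - cosine(before[w1], before[w2])
            if abs(delta) > threshold:
                moved.append((w1, w2, round(delta, 3)))
    return moved

before = {"king": [1.0, 0.0], "monarch": [1.0, 0.1], "size": [0.0, 1.0]}
after = {"king": [1.0, 0.0], "monarch": [0.0, 1.0], "size": [0.0, 1.0]}
pairs = moved_pairs(before, after, threshold=0.3)
# "king"/"monarch" decoupled (negative delta); "monarch"/"size" pulled together.
```

Counting negative versus positive deltas over the full vocabulary gives the roughly two-to-one decoupling ratio reported above.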
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Task-centric Click Model with Macro Bernoulli variables M, N labelled.</figDesc><graphic coords="7,58.72,83.69,230.40,129.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Template-based DAG representation of the TCM</figDesc><graphic coords="8,65.92,266.00,216.00,144.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="1,0.00,159.54,612.00,472.91" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Notation for Building Blocks of NIR models</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Comparison of distributed NIR models in literature</figDesc><table><row><cell>Model</cell><cell>w</cell><cell>dim(w)</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3 :</head><label>3</label><figDesc>Comparison of local-interaction NIR models in literature</figDesc><table><row><cell>Model</cell><cell>w</cell><cell>dim(w)</cell><cell>r</cell></row><row><cell>tf-idf</cell><cell>one-hot encoding, weighted by √idf</cell><cell>|Vocabulary|</cell><cell>Σ_1 : ℝ^{|Vocabulary|} → ℝ₊</cell></row><row><cell>Duet [12] (local side)</cell><cell>one-hot encoding</cell><cell>|Vocabulary|</cell><cell>mlp_3 ∘ cnn_1</cell></row><row><cell>Kernel-pooling model [26]</cell><cell>distributed word embeddings</cell><cell>300</cell><cell>mlp_1 ∘ φ</cell></row></table><note>that an item (web page or SKU) is attractive if and only if it is relevant, but this assumption is obviously implausible even in the web search case, and all but the simplest click models reject it. A more sophisticated approach, which is the option taken by click models such as DCM, CCD, DBN, is to define another Bernoulli random variable (4) S_u, the user having her information need satisfied by SKU u, satisfying the following Click Hypothesis:</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 4 :</head><label>4</label><figDesc>Sample of Dataset, showing only original titles for SKU text</figDesc><table><row><cell>Query</cell><cell>Relevant SKU</cell><cell>Irrelevant SKU</cell></row><row><cell cols="2">epson ink cartridges Epson 252XL High-capacity Black Ink Cartridge</cell><cell>Canon CL-241XL Color Ink Cartridge</cell></row><row><cell>batteries aa</cell><cell>Sony S-am3b24a Stamina Plus Alkaline Batteries (aa; 24 Pk)</cell><cell>Durcell Quantum Alkaline Batteries -AAA, 12 Count</cell></row><row><cell>microsd 128gb</cell><cell>Sandisk Sdsqxvf-128g-an6ma Extreme Microsd Uhs-i Card With Adapter (128gb)</cell><cell>PNY 32GB MicroSDHC Card</cell></row><row><cell>accent chair</cell><cell>Acme Furniture Ollano Accent Chair -Fish Pattern -Dark Blue</cell><cell>ProLounger Wall Hugger Microfiber Recliner</cell></row><row><cell>bar stool red 2</cell><cell>Belleze© Leather Hydraulic Lift Adjustable Counter Bar Stool Dining Chair Red -Pack of 2</cell><cell>Flash Furniture Modern Vinyl 23.25 In. -32 In. Adjustable Swivel Barstool</cell></row><row><cell>bookshelf with doors</cell><cell>Better Homes and Gardens Crossmill Bookcase with Doors, Multiple Finishes</cell><cell>Way Basics Eco-Friendly 4 Cubby Bookcase</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head></head><label></label><figDesc>Although the split is lopsided towards training examples, it still results in 46k test examples. We believe this drastic deletion of validation/test examples is well worth it to detect overfitting and distinguish true model learning from mere memorization.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 5 :</head><label>5</label><figDesc>Error rates of models, reported as percent of error rate of tf-idf baseline. Validation column reports error rate for the best (lowest) training epoch. Note that, in all experiments to date on our dataset, none of the distributed models (DSSM, CLSM, Siamese) have outperformed the baseline.</figDesc><table><row><cell></cell><cell cols="2">Validation Test</cell></row><row><cell>Kernel-pooling, trainable embeddings</cell><cell></cell><cell></cell></row><row><cell>truncation length 32</cell><cell>68.63</cell><cell>65.62</cell></row><row><cell>truncation length 64</cell><cell>63.13</cell><cell>62.92</cell></row><row><cell>truncation length 128</cell><cell>70.16</cell><cell>73.97</cell></row><row><cell>Kernel-pooling, frozen embeddings</cell><cell></cell><cell></cell></row><row><cell>truncation length 64</cell><cell>66.13</cell><cell>65.40</cell></row><row><cell>tf-idf baseline</cell><cell>N/A</cell><cell>100.0</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">For the sake of this example, we are assuming that "Monarch" is a brand of bed, only some of which are "king" (size) beds.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Although the dimension of word2vec is a hyperparameter, we make the conventional choice that word-vectors have dimension 300 in the examples to keep the number of symbols to a minimum.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">CONCLUSIONS</head><p>We showed how to construct a rich, adversarial dataset for eCommerce relevance. We demonstrated that one of the current state-of-the-art NIR models, namely the kernel-pooling model, is able to reduce pairwise ranking errors on this dataset, as compared to the tf-idf baseline, by over a third. We observed that distributed NIR models such as DSSM and CLSM overfit and do not learn to generalize well on this dataset. Because of the inherent advantages of distributed over local-interaction models, our first priority for ongoing work is to diagnose and overcome this overfitting so that the distributed models at least outperform the baseline. The work is proceeding along two parallel tracks. One is to explore further architectures in the space of all possible NIR models to find ones which are easier to regularize. The other is to perform various forms of data augmentation, both to increase the sheer quantity of data available for the models to train on and to overcome any biases that the current data generation process may introduce.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENTS</head><p>The authors would like to thank Ke Shen for his assistance setting up the data collection pipelines.</p></div>
			</div>


			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Eliot P. Brenner, Raymond Zhao, Aliasgar Kutiyanawala, and John Yan. 2018. End-to-End Neural Ranking for eCommerce Product Search: An application of task models and textual embeddings. In Proceedings of ACM SIGIR Workshop on eCommerce (SIGIR 2018 eCom). ACM, New York, NY, USA, 10 pages.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0" />			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A neural click model for web search</title>
		<author>
			<persName><forename type="first">Alexey</forename><surname>Borisov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ilya</forename><surname>Markov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maarten</forename><surname>De Rijke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pavel</forename><surname>Serdyukov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee</title>
				<meeting>the 25th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="531" to="541" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Click models for web search</title>
		<author>
			<persName><forename type="first">Aleksandr</forename><surname>Chuklin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ilya</forename><surname>Markov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maarten</forename><surname>De Rijke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Synthesis Lectures on Information Concepts, Retrieval, and Services</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="1" to="115" />
			<date type="published" when="2015">2015. 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Neural Network Methods for Natural Language Processing</title>
		<author>
			<persName><forename type="first">Yoav</forename><surname>Goldberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Synthesis Lectures on Human Language Technologies</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="1" to="287" />
			<date type="published" when="2017">2017. 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">Clinton</forename><surname>Gormley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zachary</forename><surname>Tong</surname></persName>
		</author>
		<title level="m">Elasticsearch: The Denitive Guide: A Distributed Real-Time Search and Analytics Engine</title>
				<imprint>
			<publisher>O&apos;Reilly Media, Inc</publisher>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Learning deep structured semantic models for web search using clickthrough data</title>
		<author>
			<persName><forename type="first">Po-Sen</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaodong</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jianfeng</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Li</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alex</forename><surname>Acero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Larry</forename><surname>Heck</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM international conference on Conference on information &amp; knowledge management</title>
				<meeting>the 22nd ACM international conference on Conference on information &amp; knowledge management</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="2333" to="2338" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Adam: A method for stochastic optimization</title>
		<author>
			<persName><forename type="first">Diederik</forename><forename type="middle">P</forename><surname>Kingma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jimmy</forename><surname>Ba</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1412.6980</idno>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Probabilistic graphical models: principles and techniques</title>
		<author>
			<persName><forename type="first">Daphne</forename><surname>Koller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nir</forename><surname>Friedman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>MIT press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Towards a simplified ontology for better e-commerce search</title>
		<author>
			<persName><forename type="first">Aliasgar</forename><surname>Kutiyanawala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Prateek</forename><surname>Verma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zheng</forename><surname>Yan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGIR 2018 Workshop on eCommerce (ECOM 18)</title>
				<meeting>the SIGIR 2018 Workshop on eCommerce (ECOM 18)</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Distributed representations of sentences and documents</title>
		<author>
			<persName><forename type="first">Quoc</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tomas</forename><surname>Mikolov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 31st International Conference on Machine Learning (ICML-14)</title>
				<meeting>the 31st International Conference on Machine Learning (ICML-14)</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1188" to="1196" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Learning to rank for information retrieval and natural language processing</title>
		<author>
			<persName><forename type="first">Hang</forename><surname>Li</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Synthesis Lectures on Human Language Technologies</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="1" to="121" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Neural Models for Information Retrieval</title>
		<author>
			<persName><forename type="first">Bhaskar</forename><surname>Mitra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nick</forename><surname>Craswell</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1705.01509</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Learning to match using local and distributed representations of text for web search</title>
		<author>
			<persName><forename type="first">Bhaskar</forename><surname>Mitra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fernando</forename><surname>Diaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nick</forename><surname>Craswell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee</title>
				<meeting>the 26th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1291" to="1299" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Modeling and Simultaneously Removing Bias via Adversarial Neural Networks</title>
		<author>
			<persName><forename type="first">John</forename><surname>Moore</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joel</forename><surname>Pfeiffer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kai</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rishabh</forename><surname>Iyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Denis</forename><surname>Charles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ran</forename><surname>Gilad-Bachrach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Levi</forename><surname>Boyles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eren</forename><surname>Manavoglu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1804.06909</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Towards Practical Visual Search Engine Within Elasticsearch</title>
		<author>
			<persName><forename type="first">Cun</forename><surname>Mu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jun</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Guang</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jing</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zheng</forename><surname>Yan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGIR 2018 Workshop on eCommerce (ECOM 18)</title>
				<meeting>the SIGIR 2018 Workshop on eCommerce (ECOM 18)</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Automatic differentiation in PyTorch</title>
		<author>
			<persName><forename type="first">Adam</forename><surname>Paszke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sam</forename><surname>Gross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Soumith</forename><surname>Chintala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gregory</forename><surname>Chanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Edward</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zachary</forename><surname>Devito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zeming</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alban</forename><surname>Desmaison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luca</forename><surname>Antiga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Adam</forename><surname>Lerer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>NIPS-W</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Glove: Global vectors for word representation</title>
		<author>
			<persName><forename type="first">Jeffrey</forename><surname>Pennington</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Richard</forename><surname>Socher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christopher</forename><surname>Manning</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)</title>
				<meeting>the 2014 conference on empirical methods in natural language processing (EMNLP)</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1532" to="1543" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Software Framework for Topic Modelling with Large Corpora</title>
		<author>
			<persName><forename type="first">Radim</forename><surname>Rehurek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Petr</forename><surname>Sojka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks</title>
				<meeting>the LREC 2010 Workshop on New Challenges for NLP Frameworks<address><addrLine>Valletta, Malta</addrLine></address></meeting>
		<imprint>
			<publisher>ELRA</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="45" to="50" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">The probabilistic relevance framework: BM25 and beyond</title>
		<author>
			<persName><forename type="first">Stephen</forename><surname>Robertson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hugo</forename><surname>Zaragoza</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Foundations and Trends® in Information Retrieval</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="333" to="389" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Semantic Vector Encoding and Similarity Search Using Fulltext Search Engines</title>
		<author>
			<persName><forename type="first">Jan</forename><surname>Rygl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jan</forename><surname>Pomikalek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Radim</forename><surname>Rehurek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michal</forename><surname>Ruzicka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vit</forename><surname>Novotny</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Petr</forename><surname>Sojka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Workshop on Representation Learning for NLP</title>
				<meeting>the 2nd Workshop on Representation Learning for NLP</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="81" to="90" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">A latent semantic model with convolutional-pooling structure for information retrieval</title>
		<author>
			<persName><forename type="first">Yelong</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaodong</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jianfeng</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Li</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Grégoire</forename><surname>Mesnil</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management</title>
				<meeting>the 23rd ACM International Conference on Conference on Information and Knowledge Management</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="101" to="110" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Apache Solr enterprise search server</title>
		<author>
			<persName><forename type="first">David</forename><surname>Smiley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eric</forename><surname>Pugh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kranti</forename><surname>Parisa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matt</forename><surname>Mitchell</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<publisher>Packt Publishing Ltd</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">Year in Search 2017</title>
		<ptr target="https://trends.google.com/trends/yis/2017/GLOBAL/" />
		<imprint>
			<date type="published" when="2018">2018, accessed April 29, 2018</date>
		</imprint>
	</monogr>
	<note>Google Trends</note>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">An Exploration of Approaches to Integrating Neural Reranking Models in Multi-Stage Ranking Architectures</title>
		<author>
			<persName><forename type="first">Zhucheng</forename><surname>Tu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matt</forename><surname>Crane</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Royal</forename><surname>Sequiera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Junchen</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jimmy</forename><surname>Lin</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1707.08275</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">How Google Searches 30 Trillion Web Pages 100 Billion Times a month</title>
		<author>
			<persName><surname>VentureBeat</surname></persName>
		</author>
		<ptr target="https://venturebeat.com/2013/03/01/how-google-searches-30-trillion-web-pages-100-billion-times-a-month/" />
		<imprint>
			<date type="published" when="2013">2013, accessed April 29, 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">Tf-idf</title>
		<ptr target="https://en.wikipedia.org/wiki/Tf%E2%80%93idf" />
		<imprint>
			<date type="published" when="2018">2018, accessed April 30, 2018</date>
		</imprint>
		<respStmt>
			<orgName>Wikipedia</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">End-to-end neural ad-hoc ranking with kernel pooling</title>
		<author>
			<persName><forename type="first">Chenyan</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhuyun</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jamie</forename><surname>Callan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhiyuan</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Russell</forename><surname>Power</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval</title>
				<meeting>the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="55" to="64" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Apache spark: a unified engine for big data processing</title>
		<author>
			<persName><forename type="first">Matei</forename><surname>Zaharia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Reynold</forename><forename type="middle">S</forename><surname>Xin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Patrick</forename><surname>Wendell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tathagata</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Armbrust</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ankur</forename><surname>Dave</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiangrui</forename><surname>Meng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Josh</forename><surname>Rosen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shivaram</forename><surname>Venkataraman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">J</forename><surname>Franklin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="56" to="65" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">User-click modeling for understanding and predicting search-behavior</title>
		<author>
			<persName><forename type="first">Yuchen</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Weizhu</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dong</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Qiang</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining</title>
				<meeting>the 17th ACM SIGKDD international conference on Knowledge discovery and data mining</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="1388" to="1396" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
