An Analysis of Fusion Functions for Hybrid Retrieval
SEBASTIAN BRUCH, Pinecone, USA
SIYU GAI∗, University of California, Berkeley, USA
AMIR INGBER, Pinecone, Israel

We study hybrid search in text retrieval, where lexical and semantic search are fused together with the intuition that the two are complementary in how th...
2210.11934.pdf
federated search. Additional Key Words and Phrases: Hybrid Retrieval, Lexical and Semantic Search, Fusion Functions

1 INTRODUCTION

Retrieval is the first stage in a multi-stage ranking system [1, 2, 43], where the objective is to find the top-$k$ set of documents that are the most relevant to a given query $q$ from a la...

Words (BoW) and compute the similarity of two pieces of text using a statistical measure such as the term frequency-inverse document frequency (TF-IDF) family, with BM25 [32, 33] being its most prominent member. We refer to retrieval with a BoW model as lexical search and the similarity scores computed by such a syst...

vector similarity or distance. We refer to this method as semantic search and the similarity scores computed by such a system as semantic scores. Hypothesizing that lexical and semantic search are complementary in how they model relevance, recent works [5, 12, 13, 18, 19, 41] began exploring methods to fuse together lexica...

University of California, Berkeley, Berkeley, CA, USA; Amir Ingber, Pinecone, Tel Aviv, Israel, ingber@pinecone.io. arXiv:2210.11934v1 [cs.IR] 21 Oct 2022
It is becoming increasingly clear that hybrid search does indeed lead to meaningful gains in retrieval quality, especially when applied to out-of-domain datasets [5, 39]: settings in which the semantic retrieval component uses a model that was not trained or fine-tuned on...

(such as dot product) may be unbounded, often they are normalized with min-max scaling [15, 39] prior to fusion. A recent study [5] argues that convex combination is sensitive to its parameter $\alpha$ and the choice of score normalization.^1 They claim and show empirically, instead, that Reciprocal Rank Fusion (RRF) [6] may ...

a reasonable choice and study its sensitivity to the normalization protocol. We show that, while normalization is essential to create a bounded function and thereby bestow consistency on the fusion across domains, the specific choice of normalization is a rather small detail: there always exist convex combinations of s...

information. Observe that the distance between raw scores plays no role in determining their hybrid score, a behavior we find counter-intuitive in a metric space where distance does matter. Examining this property constitutes our third and final research question (RQ3). Finally, we empirically demonstrate an unsurprisin...

research in this field. Our analysis leads us to believe that the convex combination formulation is theoretically sound, empirically effective, sample-efficient, and robust to domain shift. Moreover,

^1 cf. Section 3.1 in [5]: "This fusion method is sensitive to the score scales ... which needs careful score normalizati...
unlike the parameters in RRF, the parameter(s) of a convex function are highly interpretable and, if no training samples are available, can be adjusted to incorporate domain knowledge. We organize the remainder of this article as follows. In Section 2, we revi...

38]. Conventional wisdom has it that retrieval must be recall-oriented while improving ranking quality may be left to the re-ranking stages, which are typically Learning to Rank (LtR) models [16, 23, 28, 38, 40]. There is indeed much research on the trade-offs between recall and precision in such multi-stage cascades [7, 2...

inversely proportional to the fraction of documents that contain the term, and adds the scores of query terms to arrive at the final similarity or relevance score. Because BM25, like other lexical scoring functions, insists on an exact match of terms, even a slight typo can throw the function off. This vocabulary misma...

cient for first-stage retrieval, however, making them more suitable for re-ranking stages. But by cleverly disentangling the query and document transformations into the so-called dual-encoder architecture, where, in the resulting design, the "embedding" of a document can be computed independently of queries, we can pre...

obtain the vector representation of the query during inference. At a high level, these models project queries and documents onto a low-dimensional vector space where semantically-similar points stay closer to each other. By doing so we transform the retrieval problem to one of similarity search or Approximate Nearest N...
packages or through a managed service such as Pinecone^2, creating an opportunity to use deep models and vector representations for first-stage retrieval [12, 42], a setup that we refer to as semantic search. Semantic search, however, has its own limitations. Previous stud...

many existing fusion functions in experiments, but none compares the main ideas comprehensively. We review the popular fusion functions from these works in the subsequent sections and, through a comparative study, elaborate on what about their behavior may or may not be problematic.

3 SETUP

In the sections that follow, we...

vectors in $\mathbb{R}^{|V|}$, with $|V|$ denoting the size of the vocabulary, and $f_{\text{Lex}}$ is typically BM25. A retrieval system $o$ is the space $\mathcal{Q} \times \mathcal{D}$ equipped with a metric $f_o(\cdot, \cdot)$, which need not be a proper metric. We denote the set of top-$k$ documents retrieved for query $q$ by retrieval system $o$ by $R^k_o(q)$. We write $\pi_o(q,d)$ to denote the...

where $\mathbb{1}_p$ is $1$ when the predicate $p$ holds and $0$ otherwise. In words, and ignoring the subtleties introduced by the presence of score ties, the rank of document $d$ is the count of documents whose score is larger than the score of $d$. Hybrid retrieval operates on the product space of $\prod o_i$ with metric $f_{\text{Fusion}} : \prod f_{o_i} \to \mathbb{R}$. Without...
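The rank definition above, the count of documents whose score strictly exceeds that of $d$, can be sketched in a few lines. The scores array is a toy input, and the +1 offset (so that the best document has rank 1, matching the $1/(\eta+1)$ poles of reciprocal-rank fusion discussed later in the paper) is an assumption of this sketch, not the authors' code:

```python
import numpy as np

def rank(scores: np.ndarray, i: int) -> int:
    """Rank of document i: one plus the number of documents whose score is
    strictly larger (score ties ignored), so the best document has rank 1."""
    return 1 + int(np.sum(scores > scores[i]))

scores = np.array([0.2, 0.9, 0.5])  # toy retrieval scores for three documents
```

Dropping the offset recovers the literal indicator-sum count in the text.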
one of the top-$k$ sets (i.e., $d \in U^k(q)$ but $d \notin R^k_{o_i}(q)$ for some $o_i$), we compute its missing score (i.e., $f_{o_i}(q,d)$) prior to fusion.

^2 http://pinecone.io

3.2 Empirical Setup

Datasets: We evaluate our methods on a variety of publicly available benchmark datasets, in both in-domain and out-of-domain, zero-shot settings. One of the datasets is the MS MARCO Passage Retrieval v1 dataset [25], a publicly available r...

and FiQA (financial). For details of and statistics for each dataset, we refer the reader to [36].

Lexical search: We use PISA [22] for keyword-based lexical retrieval. We tokenize queries and documents by space and apply the stemming available in PISA; we do not employ any other preprocessing steps such as stopword remova...

exceeding a total of 1 billion pairs of text, including NQ, MS MARCO Passage, and Quora. As such, we consider all experiments on these three datasets as in-domain, and the rest as out-of-domain. We use the exact search for inner product algorithm (IndexFlatIP) from FAISS [11] to retrieve the top 1000 approximate neares...
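The exhaustive inner-product search that IndexFlatIP performs can be emulated in a few lines of NumPy for illustration. The corpus and query below are made-up toy vectors, and this brute-force sketch only shows the computation; it does not mirror the FAISS API:

```python
import numpy as np

def exact_ip_search(corpus: np.ndarray, query: np.ndarray, k: int):
    """Exhaustive inner-product search: score every document against the
    query and return the ids and scores of the top-k documents."""
    scores = corpus @ query            # inner product with each document vector
    top = np.argsort(-scores)[:k]      # indices of the k largest scores
    return top, scores[top]

corpus = np.array([[1.0, 0.0],         # toy 2-d document embeddings
                   [0.0, 1.0],
                   [0.7, 0.7]])
query = np.array([1.0, 0.2])
ids, vals = exact_ip_search(corpus, query, k=2)
```

An approximate index trades this exhaustive scan for speed; on these three vectors the scan returns documents 0 and 2.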
cut-off) rather than shallow metrics, per the discussion in [39], to understand the performance of each system more completely.

4 ANALYSIS OF CONVEX COMBINATION OF RETRIEVAL SCORES

We are interested in understanding the behavior and properties of fusion functions. In the remainder of this work, we study through that len...
An interesting property of this fusion is that it takes into account the distribution of scores. In other words, the distance between the lexical (or semantic) scores of two documents plays a significant role in determining their final hybrid score. One disadvantage, however...

below:

$$\phi_{\text{mm}}(f_o(q,d)) = \frac{f_o(q,d) - m_q}{M_q - m_q}, \quad (2)$$

where $m_q = \min_{d \in U^k(q)} f_o(q,d)$ and $M_q = \max_{d \in U^k(q)} f_o(q,d)$. We note that min-max scaling is the de facto method in the literature, but other choices of $\phi_o(\cdot)$ in the more general expression below:

$$f_{\text{Convex}}(q,d) = \alpha \, \phi_{\text{Sem}}(f_{\text{Sem}}(q,d)) + (1-\alpha) \, \phi_{\text{Lex}}(f_{\text{Lex}}(q,d)) ...$$

are valid as well, so long as $\phi_{\text{Sem}}, \phi_{\text{Lex}} : \mathbb{R} \to \mathbb{R}$ are monotone in their argument. For example, for reasons that will become clearer later, we can redefine the normalization by replacing the minimum of the set with the theoretical minimum of the function (i.e., the maximum value that is always less than or equal to all values...

Another popular choice is the standard score (z-score) normalization, which is defined as follows:

$$\phi_z(f_o(q,d)) = \frac{f_o(q,d) - \mu}{\sigma}, \quad (5)$$

where $\mu$ and $\sigma$ denote the mean and standard deviation of the set of scores $f_o(q, \cdot)$ for query $q$. We will return to normalization shortly, but we make note of one small but important fact: ...
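The normalizations of Equations (2) and (5), and the convex combination they feed into, can be sketched as follows. The score arrays are toy values and $\alpha = 0.8$ is just a sample setting; this is an illustration of the formulas, not the authors' implementation:

```python
import numpy as np

def minmax(s: np.ndarray) -> np.ndarray:
    """Min-max scaling of Eq. (2): maps the candidate set's scores to [0, 1]."""
    return (s - s.min()) / (s.max() - s.min())

def zscore(s: np.ndarray) -> np.ndarray:
    """z-score normalization of Eq. (5)."""
    return (s - s.mean()) / s.std()

def convex(lex: np.ndarray, sem: np.ndarray, alpha: float, norm) -> np.ndarray:
    """f_Convex = alpha * phi(sem) + (1 - alpha) * phi(lex), for monotone phi."""
    return alpha * norm(sem) + (1 - alpha) * norm(lex)

lex = np.array([12.1, 7.3, 0.5])    # toy unbounded BM25-like scores
sem = np.array([0.82, 0.75, 0.40])  # toy bounded semantic similarities
hybrid = convex(lex, sem, alpha=0.8, norm=minmax)
```

Swapping `minmax` for `zscore` changes only the monotone transformation, which is exactly the degree of freedom the analysis below examines.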
4.1 Suitability of Convex Combination

A convex combination of scores is a natural choice for creating a mixture of two retrieval systems, but is it a reasonable choice? It has been established in many past empirical studies that $f_{\text{Convex}}$ with min-max normalization often serves as a strong baseline. So the answer to our...

Fig. 1. Visualization of the normalized lexical ($\phi_{\text{tmm}}(f_{\text{Lex}})$) and semantic ($\phi_{\text{tmm}}(f_{\text{Sem}})$) scores of query-document pairs sampled from the validation split of each dataset (panels: (a) MS MARCO, (b) Quora, (c) NQ, (d) FiQA, (e) HotpotQA, (f) Fever). Shown in red are up to 20...

Fig. 2. Effect of $f_{\text{Convex}}$ on pairs of lexical and semantic scores (panels: (a) $\alpha = 0.6$, (b) $\alpha = 0.8$). From these figures, it is clear that positive and negative samples form clusters that are, with some error, separable by a linear function. What is different between datasets is the ...

of negative samples where lexical scores vanish. This empirical evidence suggests that lexical and semantic scores may indeed be complementary, an observation that is in agreement with prior work [5], and that a line may be a reasonable choice for distinguishing between positive and negative samples. But while these figures...

positives and negatives that is, unsurprisingly, more in tune with the $\alpha = 0.8$ setting of $f_{\text{Convex}}$ than $\alpha = 0.6$.

4.2 Role of Normalization

We have thus far used min-max normalization to be consistent with the literature. In this section, we ask the question first raised by Chen et al. [5] of whether and to what extent the c...
• $\phi_{\text{mm}}$: min-max scaling of Equation (2);
• $\phi_{\text{tmm}}$: theoretical min-max scaling of Equation (4);
• $\phi_z$: z-score normalization of Equation (5);
• $\phi_{\text{mm-Lex}}$: min-max scaling of lexical scores, unnormalized semantic scores;
• $\phi_{\text{tmm-Lex}}$: theoretical min-max normalized lexi...

For example, when $\phi_{\text{Sem}}(x) = ax + b$ and $\phi_{\text{Lex}}(x) = cx + d$ are linear transformations of scores, for some positive coefficients $a, c$ and real intercepts $b, d$, they can be reduced to the following rank-equivalent form: $f_{\text{Convex}}(q,d) \overset{\pi}{=} (a\alpha) f_{\text{Sem}}(q,d) + c(1-\alpha) f_{\text{Lex}}(q,d)$. In fact, letting $\alpha' = a\alpha / [a\alpha + c(1-\alpha)]$ t...
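The rank-equivalence of convex combinations under monotone linear transformations is easy to check numerically. The coefficients and random scores below are arbitrary picks for illustration:

```python
import numpy as np

def ranking(scores: np.ndarray) -> tuple:
    """Permutation that sorts documents by descending fused score."""
    return tuple(int(j) for j in np.argsort(-scores))

rng = np.random.default_rng(0)
f_sem, f_lex = rng.random(5), rng.random(5)   # toy raw scores for 5 documents

alpha, a, b, c, d = 0.7, 2.0, -1.0, 0.5, 3.0  # phi(x) = a*x + b, omega(x) = c*x + d
fused = alpha * (a * f_sem + b) + (1 - alpha) * (c * f_lex + d)

# The same ordering arises from the reduced form with the reweighted parameter,
# since the two differ only by a positive scale factor and an additive constant.
alpha_prime = (a * alpha) / (a * alpha + c * (1 - alpha))
reduced = alpha_prime * f_sem + (1 - alpha_prime) * f_lex
```

Because `fused` and `reduced` are related by rank-preserving operations, the induced orderings coincide for any positive `a`, `c`.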
a09d9440f156-2
normalization protocol. More formally: Lemma 4.2. For every query, given an arbitrary ๐›ผ, there exists a ๐›ผโ€ฒsuch that the convex combination of min-max normalized scores with parameter ๐›ผis rank-equivalent to a convex combination of z-score normalized scores with ๐›ผโ€ฒ, and vice versa. Proof. Write๐‘šoand๐‘€ofor the minimu...
2210.11934.pdf
a09d9440f156-3
๐‘…Lex๐‘“Lex(๐‘ž,๐‘‘) ๐œ‹=1 ๐œŽSem๐œŽLex๐›ผ ๐‘…Sem๐‘“Sem(๐‘ž,๐‘‘)+1โˆ’๐›ผ ๐‘…Lex๐‘“Lex(๐‘ž,๐‘‘)โˆ’๐›ผ ๐‘…Sem๐œ‡Semโˆ’1โˆ’๐›ผ ๐‘…Lex๐œ‡Lex ๐œ‹=๐›ผ ๐‘…Sem๐œŽLex๐‘“Sem(๐‘ž,๐‘‘)โˆ’๐œ‡Sem ๐œŽSem+1โˆ’๐›ผ ๐‘…Lex๐œŽSem๐‘“Lex(๐‘ž,๐‘‘)โˆ’๐œ‡Lex ๐œŽLex, where in every step we either added a constant or multiplied the expression by a positive constant, both rank-preserving oper...
2210.11934.pdf
a09d9440f156-4
๐‘…Sem๐œŽLex/(๐›ผ ๐‘…Sem๐œŽLex+1โˆ’๐›ผ ๐‘…Lex๐œŽSem) completes the proof. The other direction is similar. โ–ก The fact above implies that the problem of tuning ๐›ผfor a query in a min-max normalization regime is equivalent to learning ๐›ผโ€ฒin a z-score normalized setting. In other words, there is a one-to-one relationship between the...
2210.11934.pdf
The question we wish to answer is as follows: under what conditions is $f_{\text{Convex}}$ with parameter $\alpha$ and a pair of normalization functions $(\phi_{\text{Sem}}, \phi_{\text{Lex}})$ rank-equivalent to an $f'_{\text{Convex}}$ with a new pair of normalization functions $(\phi'_{\text{Sem}}, \phi'_{\text{Lex}})$ and weight $\alpha'$? That is, for a ...

in the domains of $f$ and $g$ we have that $|f(y) - f(x)| \geq \delta |g(y) - g(x)|$ for some $\delta \geq 1$. For example, $\phi_{\text{mm}}(\cdot)$ is an expansion with respect to $\phi_{\text{tmm}}(\cdot)$ with a factor $\delta$ that depends on the range of the scores. As another example, $\phi_z(\cdot)$ is an expansion with respect to $\phi_{\text{mm}}(\cdot)$.

Definition 4.4. For two pairs of functions $f, g : \mathbb{R} \to \mathbb{R}$ ...

$$\frac{|f'(y) - f'(x)|}{|f(y) - f(x)|} = \lambda \, \frac{|g'(y) - g'(x)|}{|g(y) - g(x)|}.$$

When $\lambda$ is independent of the points $x$ and $y$, we call this relative expansion uniform:

$$\frac{|\Delta f'| / |\Delta f|}{|\Delta g'| / |\Delta g|} = \lambda, \quad \forall x, y.$$

As an example, if $f$ and $g$ are min-max scaling and $f'$ and $g'$ are z-score normalization, then their respective rate of expansion i...

rank-equivalent on a collection of queries $Q$:

$$f_{\text{Convex}} = \alpha \, \phi(f_{\text{Sem}}(q,d)) + (1-\alpha) \, \omega(f_{\text{Lex}}(q,d)), \quad \text{and}$$
$$f'_{\text{Convex}} = \alpha' \, \phi'(f_{\text{Sem}}(q,d)) + (1-\alpha') \, \omega'(f_{\text{Lex}}(q,d)),$$

if, for the monotone functions $\phi, \omega, \phi', \omega' : \mathbb{R} \to \mathbb{R}$, $\phi'$ expands with respect to $\phi$ more rapidly than $\omega'$ expands with respect to $\omega$, with a uniform rate $\lambda$.

Proof. ...

ranked above $d_j$ according to $f_{\text{Convex}}$. Shortening $f_o(q, d_k)$ to $f_o^{(k)}$ for brevity, we have that:

$$f_{\text{Convex}}^{(i)} > f_{\text{Convex}}^{(j)} \implies \alpha \Big( \underbrace{\phi(f_{\text{Sem}}^{(i)}) - \phi(f_{\text{Sem}}^{(j)})}_{\Delta\phi_{ij}} + \underbrace{\omega(f_{\text{Lex}}^{(j)}) - \omega(f_{\text{Lex}}^{(i)})}_{\Delta\omega_{ji}} \Big) > \omega(f_{\text{Lex}}^{(j)}) - \omega ...$$

This holds if and only if we have the following:

$$\begin{cases} \alpha > 1 \big/ \big(1 + \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}\big), & \text{if } \Delta\phi_{ij} + \Delta\omega_{ji} > 0, \\ \alpha < 1 \big/ \big(1 + \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}\big), & \text{otherwise.} \end{cases} \quad (6)$$

Observe that, because of the monotonicity of a convex combination and the monotonicity of the normalization functions, the case $\Delta\phi_{ij} < 0$ and $\Delta\omega_{ji} > 0$ (which implies that the se...
the opposite case $\Delta\phi_{ij} > 0$ and $\Delta\omega_{ji} < 0$ always leads to the correct order regardless of the weight in the convex combination. We consider the other two cases separately below.

Case 1: $\Delta\phi_{ij} > 0$ and $\Delta\omega_{ji} > 0$. Because of the monotonicity property, we can deduce ...

By assumption, using Definition 4.4, we observe that:

$$\frac{\Delta\phi'_{ij}}{\Delta\phi_{ij}} \geq \frac{\Delta\omega'_{ji}}{\Delta\omega_{ji}} \implies \frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}} \geq \frac{\Delta\phi_{ij}}{\Delta\omega_{ji}}.$$

As such, the lower-bound on $\alpha'$ imposed by documents $d_i$ and $d_j$ of query $q$, $L'_{ij}(q)$, is smaller than the lower-bound on $\alpha$, $L_{ij}(q)$. Like $\alpha$, this case does not additionally constrai...

the upper-bound does not change: $U'_{ij}(q) = U_{ij}(q) = 1$).

Case 2: $\Delta\phi_{ij} < 0$, $\Delta\omega_{ji} < 0$. Once again, due to monotonicity, it is easy to see that $\Delta\phi'_{ij} < 0$ and $\Delta\omega'_{ji} < 0$. Equation (6) tells us that, for the order to be preserved under $f'_{\text{Convex}}$, we must similarly have that: $\alpha' < 1 \big/ \big(1 + \frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}}\big)$. Once ...

For $f'_{\text{Convex}}$ to induce the same order as $f_{\text{Convex}}$ among all pairs of documents for all queries in $Q$, the intersection of the intervals produced by the constraints on $\alpha'$ has to be non-empty:

$$I' \triangleq \bigcap_q \Big[ \max_{ij} L'_{ij}(q), \; \min_{ij} U'_{ij}(q) \Big] = \Big[ \max_{q,ij} L'_{ij}(q), \; \min_{q,ij} U'_{ij}(q) \Big] \neq \emptyset.$$

We next prove that $I'$ is always non-empty to conclude the proof of the theorem. By Equation (6) and the existence of $\alpha$, we know that $\max_{q,ij} L_{ij}(q) \leq \min_{q,ij} U_{ij}(q)$. Suppose that documents $d_i$ and $d_j$ of query $q_1$ maximize the lower-bound, and that documents $d_m$ and $d_n$ of query $q_2$ minimize the upper-bound. ...

Because of the uniformity of the relative expansion rate, we can deduce that:

$$\frac{\Delta\phi'_{ij}}{\Delta\omega'_{ji}} \geq \frac{\Delta\phi'_{mn}}{\Delta\omega'_{nm}} \implies \max_{q,ij} L'_{ij}(q) \leq \min_{q,ij} U'_{ij}(q). \qquad \square$$

It is easy to show that the theorem above also holds when the condition is updated to reflect a shift of lower- and upper-bounds to the ri...

malization or any other linear transformation that is bounded and does not severely distort the distribution of scores, especially among the top-ranking documents, results in a rank-equivalent function. At most, for any given value of the ranking metric of interest, such as NDCG, we should observe a shift of the weight ...
Fig. 3. Effect of normalization on the performance of $f_{\text{Convex}}$ as a function of $\alpha$ on the validation set (panels: (a) MS MARCO, (b) Quora, (c) HotpotQA, (d) FiQA).

effect empirically on select datasets. As anticipated, the peak performance in terms of NDCG shifts to the left or right ...

$$\frac{|\Delta\omega'_{ji}|}{|\Delta\omega_{ji}|} = \lambda, \quad \forall (d_i, d_j) \text{ s.t. } f_{\text{Convex}}(d_i) > f_{\text{Convex}}(d_j).$$

Second, it turns out that closeness to uniformity (i.e., when $\lambda$ is concentrated around one value) is often sufficient for the effect to materialize in practice. We observe this phenomenon empirically by fixing the parameter $\alpha$ in $f_{\text{Convex}}$ with one transformatio...

example, if $f_{\text{Convex}}$ uses $\phi_{\text{tmm}}$ and $f'_{\text{Convex}}$ uses $\phi_{\text{mm}}$ (denoted by $\phi_{\text{tmm}} \to \phi_{\text{mm}}$), then for every choice of $\alpha$, ...

Fig. 4. Relative expansion rate of semantic scores with respect to lexical scores, $\lambda$, when changing from one transformation to another, with 95% confidence intervals (panels: (a) MS MARCO, (b) Quora, (c) HotpotQA, (d) FiQA). Prior to visualization, we normalize values of ...

distorts the expansion rates. This goes some way toward explaining why normalization and boundedness are important properties. In the last two sections, we have answered RQ1: convex combination is an appropriate fusion function, and its performance is not sensitive to the choice of normalization so long as the transformation...
Table 1. Recall@1000 and NDCG@1000 (except SciFact and NFCorpus, where the cutoff is 100) on the test split of various datasets for lexical and semantic search as well as hybrid retrieval using RRF [5] ($\eta = 60$) and TM2C2 ($\alpha = 0.8$). The symbols ‡ and ∗ indicate statistical signific...

Recall | NDCG
in-domain:
NQ       0.886‡∗ 0.978‡∗ 0.985 0.984 | 0.382‡∗ 0.505‡  0.542 0.514‡ 0.637
Quora    0.992‡∗ 0.999  0.999 0.999 | 0.800‡∗ 0.889‡∗ 0.901 0.877‡ 0.936
zero-shot:
NFCorpus 0.283‡∗ 0.314‡∗ 0.348 0.344 | 0.298‡∗ 0.309‡∗ 0.343 0.326‡ 0.371
HotpotQA 0.878‡∗ 0.756‡∗ 0.884 0.888 | 0.682‡∗ 0.520‡∗ 0.699 0.675‡ 0.767
FEVER    0.969‡∗ 0.931‡∗ 0.972 0.972 | 0.689‡∗ 0.558‡∗ 0.744 0.721‡ 0.814
SciFact  0.900‡∗ 0.932‡∗ 0.958 0.955 | 0.698‡∗ 0.681‡∗ 0.753 0.730‡ 0.796
DBPedia  0.540‡∗ 0.408‡∗ 0.564 0.567 | 0.415‡∗ 0.425‡∗ 0.512 0.489‡ 0.553
FiQA     0.720‡∗ 0.908  0.907 0.904 | 0.315‡∗ 0.467‡  0.496 0.464‡ 0.561

replaced with the minimum feasible value regardless of the candidat...
brevity.

5 ANALYSIS OF RECIPROCAL RANK FUSION

Chen et al. [5] show that RRF performs better and more reliably than a convex combination of normalized scores. RRF is computed as follows:

$$f_{\text{RRF}}(q,d) = \frac{1}{\eta + \pi_{\text{Lex}}(q,d)} + \frac{1}{\eta + \pi_{\text{Sem}}(q,d)}, \quad (7)$$

where $\eta$ is a free parameter. The authors of [5] take a non-parametric view of R...

together, a quantity that is always larger than the number of parameters in a convex combination. Let us begin by comparing the performance of RRF and TM2C2 empirically to get a sense of their relative efficacy. We first verify whether hybrid retrieval leads to significant gains in in-domain and out-of-domain experimen...

any given query. Our results show that hybrid retrieval using RRF outperforms pure-lexical and pure-semantic retrieval on most datasets. This fusion method is particularly effective on out-of-domain datasets,
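Equation (7) is a one-liner. The sketch below is an illustration rather than the authors' implementation; the optional per-system $\eta$ arguments reflect the paper's observation that RRF in effect carries one parameter per retrieval system being fused:

```python
def rrf(rank_lex: int, rank_sem: int,
        eta_lex: float = 60.0, eta_sem: float = 60.0) -> float:
    """Reciprocal Rank Fusion of Eq. (7): sum of reciprocal ranks, each
    dampened by its own eta (equal etas recover the single-parameter form)."""
    return 1.0 / (eta_lex + rank_lex) + 1.0 / (eta_sem + rank_sem)

# A document ranked first by both systems attains the maximum score 2/61
# with the default eta = 60, matching the (1/61, 1/61) pole discussed below.
top_score = rrf(1, 1)
```

Note that only the ranks enter the computation; the raw scores that produced them are discarded, which is the behavior scrutinized in the rest of this section.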
Fig. 5. Difference in NDCG@1000 of TM2C2 and RRF (positive indicates better ranking quality by TM2C2) as a function of $\alpha$, for (a) in-domain and (b) out-of-domain datasets. When $\alpha = 0$, the model is rank-equivalent to lexical search, while $\alpha = 1$ is rank-equivalent to semantic search....

out-of-domain datasets in Figure 5(b). These figures also compare the performance of TM2C2 with RRF by reporting the difference between the NDCG of the two methods. These plots show that there always exists an interval of $\alpha$ for which $f_{\text{TM2C2}} \succ f_{\text{RRF}}$, with $\succ$ indicating better rank quality.

5.1 Effect of Parameters

Chen et al. righ...

approach when computing ranks. The second issue is that, unlike TM2C2, RRF ignores the raw scores and discards information about their distribution. In this regime, whether a document has a low or high semantic score does not matter so long as its rank in $R^k_{\text{Sem}}$ stays the same. It is arguable in this case wheth...

Fig. 6. Visualization of the reciprocal rank determined by lexical ($rr(\pi_{\text{Lex}}) = 1/(60 + \pi_{\text{Lex}})$) and semantic ($rr(\pi_{\text{Sem}}) = 1/(60 + \pi_{\text{Sem}})$) retrieval for query-document pairs sampled from the validation split of eac... (panels: (a) MS MARCO, (b) Quora, (c) NQ, (d) FiQA, (e) HotpotQA, (f) Fever)

document pairs as before. From the figure, we can see that samples are pulled towards one of the poles at $(0,0)$ and $(1/61, 1/61)$. The former attracts a higher concentration of negative samples, while the latter attracts positive samples. While this separation is somewhat consistent across datasets,
Fig. 7. Difference in NDCG@1000 of $f_{\text{RRF}}$ with distinct values $\eta_{\text{Lex}}$ and $\eta_{\text{Sem}}$, and $f_{\text{RRF}}$ with $\eta_{\text{Lex}} = \eta_{\text{Sem}} = 60$ (positive indicates better ranking quality by the former), for (a) MS MARCO and (b) HotpotQA. On MS MARCO, an in-domain dataset, NDCG improves when $\eta_{\text{Lex}} > \eta_{\text{Sem}}$, while...

consistently across domains. We argued earlier that RRF is parametric and that it, in fact, has as many parameters as there are retrieval functions to fuse. To see this more clearly, let us rewrite Equation (7) as follows:

$$f_{\text{RRF}}(q,d) = \frac{1}{\eta_{\text{Lex}} + \pi_{\text{Lex}}(q,d)} + \frac{1}{\eta_{\text{Sem}} + \pi_{\text{Sem}}(q,d)}. \quad (8)$$

We study the effect of parameters on ...

Crucially, performance improves off-diagonal, where the parameter takes on different values for the semantic and lexical components. On MS MARCO, shown in Figure 7(a), NDCG improves when $\eta_{\text{Lex}} > \eta_{\text{Sem}}$, while the opposite effect can be seen for HotpotQA, an out-of-domain dataset. This can be easily explained by the fact ...
Fig. 8. Effect of $f_{\text{RRF}}$ with select configurations of $\eta_{\text{Lex}}$ and $\eta_{\text{Sem}}$ on pairs of ranks from lexical and semantic systems (panels: (a) $\eta_{\text{Lex}} = 60, \eta_{\text{Sem}} = 60$; (b) $\eta_{\text{Lex}} = 10, \eta_{\text{Sem}} = 4$; (c) $\eta_{\text{Lex}} = 3, \eta_{\text{Sem}} = 5$). When $\eta_{\text{Lex}} > \eta_{\text{Sem}}$, the fusion function discounts the lexical system's contribution.

Table ...

paired two-tailed $t$-test.

NDCG
Dataset  | TM2C2 | RRF(60,60) | RRF(5,5) | RRF(10,4)
in-domain:
MS MARCO   0.454   0.425‡   0.435‡∗   0.451∗
NQ         0.542   0.514‡   0.521‡∗   0.528‡∗
Quora      0.901   0.877‡   0.885‡∗   0.896∗
zero-shot:
NFCorpus   0.343   0.326‡   0.335‡∗   0.327‡
HotpotQA   0.699   0.675‡   0.693∗    0.621‡∗
FEVER      0.744   0.721‡   0.727‡∗   0.649‡∗
SciFact    0.753   0.730‡   0.738‡    0.71...
DBPedia    0.512   0.489‡   0.489‡    0.480‡∗
FiQA       0.496   0.464‡   0.470‡∗   0.482‡∗

indeed lead to gains in NDCG on in-domain datasets, the tuned function does not generalize well to out-of-domain datasets. The poor generalization can be explained by the reversal of patterns observed in Figure 7, where $\eta_{\text{Lex}} > \eta_{\text{Sem}}$ suits in-domain datasets...

of raw scores, it loses valuable information in the process of fusing retrieval systems. In our final research question, RQ3, we investigate whether this indeed matters in practice.
Fig. 9. The difference in NDCG@1000 of $f_{\text{SRRF}}$ and $f_{\text{RRF}}$ with $\eta = 60$ (positive indicates better ranking quality by SRRF) as a function of $\beta$, for (a) in-domain and (b) out-of-domain datasets.

The notion of "preserving" information is well captured by the concept of Lipschitz continu...

thus trivial to approximate this quantity using a generalized sigmoid with parameter $\beta$: $\sigma_\beta(x) = 1/(1 + \exp(-\beta x))$. As $\beta$ approaches 1, the sigmoid takes its usual S shape, while $\beta \to \infty$ produces a very close approximation of the indicator. Interestingly, the Lipschitz constant of $\sigma_\beta(\cdot)$ is, in fact, $\beta$. As $\beta$ increases, the a...

$$f_{\text{SRRF}}(q,d) = \frac{1}{\eta + \tilde{\pi}_{\text{Lex}}(q,d)} + \frac{1}{\eta + \tilde{\pi}_{\text{Sem}}(q,d)}, \quad (9)$$

where $\tilde{\pi}_o(q, d_i) = 0.5 + \sum_{d_j \in R^k_o(q)} \sigma_\beta(f_o(q, d_j) - f_o(q, d_i))$. By increasing $\beta$ we increase the Lipschitz constant of $f_{\text{SRRF}}$. This is the lever we need to test the idea that Lipschitz continuity matters and that functions that do not distort the distributional properties of raw scores lead to better ranking quality.

^6 A function $f$ is Lipschitz continuous with constant $L$ if $\|f(y) - f(x)\|_o \leq L \|y - x\|_i$ for some norms $\|\cdot\|_o$ and $\|\cdot\|_i$ on the output and input spaces of $f$.
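The smoothed rank $\tilde{\pi}_o$ and $f_{\text{SRRF}}$ of Equation (9) can be sketched as follows. The score array is a toy input, the whole array stands in for the candidate set $R^k_o(q)$, and the default parameter values are arbitrary:

```python
import numpy as np

def sigmoid(x: np.ndarray, beta: float) -> np.ndarray:
    """Generalized sigmoid sigma_beta(x) = 1/(1 + exp(-beta * x))."""
    return 1.0 / (1.0 + np.exp(-beta * x))

def smooth_rank(scores: np.ndarray, i: int, beta: float) -> float:
    """pi_tilde(q, d_i) = 0.5 + sum_j sigma_beta(f(d_j) - f(d_i)); the self term
    contributes sigma_beta(0) = 0.5, so the top document's smooth rank tends to 1."""
    return 0.5 + float(np.sum(sigmoid(scores - scores[i], beta)))

def srrf(lex_scores, sem_scores, i, eta: float = 60.0, beta: float = 10.0) -> float:
    """Smooth RRF of Eq. (9): reciprocal of eta plus the smoothed ranks."""
    return (1.0 / (eta + smooth_rank(lex_scores, i, beta))
            + 1.0 / (eta + smooth_rank(sem_scores, i, beta)))

scores = np.array([3.0, 1.0, 2.0])  # toy scores; document 0 is the best
```

As `beta` grows, `smooth_rank` approaches the exact rank, so SRRF recovers RRF in the limit while remaining sensitive to score gaps at finite `beta`.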
Fig. 10. The difference in NDCG@1000 of $f_{\text{SRRF}}$ and $f_{\text{RRF}}$ with $\eta = 5$ (positive indicates better ranking quality by SRRF) as a function of $\beta$, for (a) in-domain and (b) out-of-domain datasets.

Figures 9 and 10 visualize the difference between SRRF and RRF for two settings of $\eta$, selected based o...

While we acknowledge the possibility that the approximation in Equation (9) may cause a change in ranking quality, we expected that change to be a degradation, not an improvement. However, given that we do observe gains by smoothing the function, and that the only other difference between SRRF and RRF is their Lipschitz con...

property in our analysis of the convex combination fusion function. It is trivial to show why this property is crucial.

Homogeneity: The order induced by a fusion function must be unaffected by a positive re-scaling of query and document vectors. That is: $f_{\text{Hybrid}}(q,d) \overset{\pi}{=} f_{\text{Hybrid}}(q, \gamma d) \overset{\pi}{=} f_{\text{Hybrid}}(\gamma q, d)$, where $\gamma$ ...
Table 3. Mean NDCG@1000 (NDCG@100 for SciFact and NFCorpus) on the test split of various datasets for hybrid retrieval using TM2C2 ($\alpha = 0.8$), RRF ($\eta$), and SRRF ($\eta, \beta$). The parameters $\beta$ are fixed to values that maximize NDCG on the validation split of in-domai...

NQ       0.542 0.514‡ 0.516‡  0.521‡ 0.517‡
Quora    0.901 0.877‡ 0.889‡∗ 0.885‡ 0.889‡∗
zero-shot:
NFCorpus 0.343 0.326‡ 0.338‡∗ 0.335‡ 0.339‡
HotpotQA 0.699 0.675‡ 0.695∗  0.693‡ 0.705‡∗
FEVER    0.744 0.721‡ 0.725‡  0.727‡ 0.735‡∗
SciFact  0.753 0.730‡ 0.740‡  0.738‡ 0.740‡
DBPedia  0.512 0.489‡ 0.501‡∗ 0.489‡ 0.492‡
FiQA     0.496 0.464‡ 0.468‡  0.470‡ 0.469‡

Boundedness: Recall that a convex combination without score normalization is often ineffective and inconsistent, because BM25 is unbounded and lexical and semantic scores are on different scales. To see this effect we turn to Figure 11. We observe in Figure 11(a) that, for in-domain datasets, adding the unnormali...

produced by the lexical retriever. To avoid that pitfall, we require that $f_{\text{Hybrid}}$ be bounded: $|f_{\text{Hybrid}}| \leq M$ for some $M > 0$. As we have seen before, normalizing the raw scores addresses this issue.

Lipschitz Continuity: We argued that because RRF does not take into consideration the raw scores, it distorts their distribut...

resource-constrained research labs to innovate. Given the strong evidence supporting the idea that hybrid retrieval is most valuable when applied to out-of-domain datasets [5], we believe that $f_{\text{Hybrid}}$ should be robust to distributional shifts and should not need training or fine-tuning on target
94824fc6dcf4-0
111:22 Sebastian Bruch, Siyu Gai, and Amir Ingber (a) in-domain (b) out-of-domain Fig. 11. The difference in NDCG of convex combination of unnormalized scores and a pure semantic search (positive indicates better ranking quality by a convex combination) as a function of ๐›ผ. datasets. This implies that either the funct...
2210.11934.pdf
94824fc6dcf4-1
๐›ผ=0.8from Table 1. Additionally, we take the train split of each dataset and sample from it progressively larger subsets (with a step size of 5%), and use it to tune the parameters of each function. We then measure NDCG of the tuned functions on the test split. For the depicted datasets as well as all other datasets i...
by RRF-CC and defined as:

    𝑓RRF(𝑞,𝑑) = (1 − 𝛼) · 1 / (𝜂Lex + 𝜋Lex(𝑞,𝑑)) + 𝛼 · 1 / (𝜂Sem + 𝜋Sem(𝑞,𝑑)),    (10)

where 𝛼, 𝜂Lex, and 𝜂Sem are tunable parameters. The question this particular formulation tries to answer is whether adding an additional weight to the combination of the RRF terms affects retrieval quality. From the figure, it...
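Equation (10) can be sketched over two ranked candidate lists as follows; here 𝜋(𝑞,𝑑) is the 1-based rank of 𝑑 in each list. The handling of documents absent from one list (assigning them a rank just past that list's end) is our own assumption, not something the formulation specifies.

```python
def rrf_cc(lex_ranking, sem_ranking, alpha=0.5, eta_lex=60.0, eta_sem=60.0):
    """Parameterized reciprocal rank fusion (RRF-CC, Eq. 10).

    `lex_ranking` and `sem_ranking` are lists of document ids, best first;
    alpha, eta_lex, and eta_sem are the tunable parameters.
    """
    lex_rank = {d: r for r, d in enumerate(lex_ranking, start=1)}
    sem_rank = {d: r for r, d in enumerate(sem_ranking, start=1)}
    docs = lex_rank.keys() | sem_rank.keys()
    # Documents missing from a list get a rank just past that list's end.
    miss_lex, miss_sem = len(lex_ranking) + 1, len(sem_ranking) + 1
    scores = {
        d: (1 - alpha) / (eta_lex + lex_rank.get(d, miss_lex))
           + alpha / (eta_sem + sem_rank.get(d, miss_sem))
        for d in docs
    }
    return sorted(docs, key=lambda d: scores[d], reverse=True)
```

With 𝛼 = 0.5 and 𝜂Lex = 𝜂Sem = 60, the two terms collapse (up to a constant factor) to standard RRF with its conventional constant of 60.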
Fig. 12. Sample efficiency of TM2C2 and the parameterized variants of RRF (single parameter where 𝜂Sem = 𝜂Lex; two parameters where we allow different values of 𝜂Sem and 𝜂Lex; and a third variation that is a co... Panels: (a) MS MARCO, (b) Quora, (c) HotpotQA, (d) Fever.
7 CONCLUSION
We studied the behavior of two popular functions that fuse lexical and semantic retrieval into hybrid retrieval, and identified their advantages and pitfalls. Importantly, we investigated several questions and claims in prior work. We established theoretically that the choice of normalizat...
We believe that a convex combination with theoretical minimum-maximum normalization (TM2C2) indeed enjoys properties that are important in a fusion function. Its parameter, too, can be tuned sample-efficiently or set to a reasonable value based on domain knowledge. In o...
input, and indeed can be extended, both theoretically and empirically, to a setting where we have more than just lexical and semantic scores, it is nonetheless important to conduct experiments and validate that our findings generalize. We believe, however, that our current assumptions are practical and are reflective o...
Retrieval (Dublin, Ireland). 997–1000. [3] Sebastian Bruch, Claudio Lucchese, and Franco Maria Nardini. 2022. ReNeuIR: Reaching Efficiency in Neural Information Retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 3462–3465. [4] Seb...
Research, ECIR 2022, Stavanger, Norway, April 10–14, 2022, Proceedings, Part I (Stavanger, Norway). 95–110. [6] Gordon V. Cormack, Charles L. A. Clarke, and Stefan Buettcher. 2009. Reciprocal Rank Fusion Outperforms Condorcet and Individual Rank Learning Methods. 758–759. [7] Van Dang, Michael Bendersky, and W. Bruce Croft....
for Computational Linguistics, Minneapolis, Minnesota, 4171–4186. [9] Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2022. From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2353–2359. [10] Kalervo Järvelin and Jaana Kekäläinen. 2000. IR evaluation methods for retrieving highly relevant document...
Empirical Methods in Natural Language Processing (EMNLP). [13] Saar Kuzi, Mingyang Zhang, Cheng Li, Michael Bendersky, and Marc Najork. 2020. Leveraging Semantic and Lexical Matching to Improve the Recall of Document Retrieval Systems: A Hybrid Approach. ArXi...
arXiv:2010.06467 [cs.IR] [16] Tie-Yan Liu. 2009. Learning to Rank for Information Retrieval. Found. Trends Inf. Retr. 3, 3 (2009), 225โ€“331. [17] Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, Dense, and Attentional Representations for Text Retrieval. Transactions of the Association fo...
Information Retrieval 16, 5 (2013), 584โ€“628. [21] Yu. A. Malkov and D. A. Yashunin. 2016. Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs. [22] Antonio Mallia, Michal Siedlaczek, Joel Mackenzie, and Torsten Suel. 2019. PISA: Performant Indexes and Search for Acad...
2210.11934.pdf