An Analysis of Fusion Functions for Hybrid Retrieval
SEBASTIAN BRUCH, Pinecone, USA
SIYU GAI, University of California, Berkeley, USA
AMIR INGBER, Pinecone, Israel
We study hybrid search in text retrieval, where lexical and semantic search are fused together with the intuition that the two are complementary in how they model relevance. ...
federated search.
Additional Key Words and Phrases: Hybrid Retrieval, Lexical and Semantic Search, Fusion Functions
1 INTRODUCTION
Retrieval is the first stage in a multi-stage ranking system [1, 2, 43], where the objective is to find the top-k set of documents that are the most relevant to a given query q, from a large ...
Words (BoW) and compute the similarity of two pieces of text using a statistical measure such as the term frequency-inverse document frequency (TF-IDF) family, with BM25 [32, 33] being its most prominent member. We refer to retrieval with a BoW model as lexical search and the similarity scores computed by such a system as lexical scores. ...
vector similarity or distance. We refer to this method as semantic search and the similarity scores computed by such a system as semantic scores.
Hypothesizing that lexical and semantic search are complementary in how they model relevance, recent works [5, 12, 13, 18, 19, 41] began exploring methods to fuse together lexical ...
University of California, Berkeley, Berkeley, CA, USA; Amir Ingber, Pinecone, Tel Aviv, Israel, ingber@pinecone.io. arXiv:2210.11934v1 [cs.IR] 21 Oct 2022
It is becoming increasingly clear that hybrid search does indeed lead to meaningful gains in retrieval quality, especially when applied to out-of-domain datasets [5, 39]: settings in which the semantic retrieval component uses a model that was not trained or fine-tuned on ...
d2f12e5b35df-1 | (such as dot product) may be unbounded, often they are normalized with min-max scaling [ 15,39]
prior to fusion.
A recent study [ 5] argues that convex combination is sensitive to its parameter ๐ผand the
choice of score normalization.1They claim and show empirically, instead, that Reciprocal Rank
Fusion ( RRF) [6] may ... | 2210.11934.pdf |
d2f12e5b35df-2 | a reasonable choice and study its sensitivity to the normalization protocol. We show that, while
normalization is essential to create a bounded function and thereby bestow consistency to the
fusion across domains, the specific choice of normalization is a rather small detail: There always
exist convex combinations of s... | 2210.11934.pdf |
d2f12e5b35df-3 | information. Observe that the distance between raw scores plays no role in determining their
hybrid scoreโa behavior we find counter-intuitive in a metric space where distance does matter.
Examining this property constitutes our third and final research question (RQ3).
Finally, we empirically demonstrate an unsurprisin... | 2210.11934.pdf |
a1a34c5d3489-0 | research in this field. Our analysis leads us to believe that the convex combination formulation is
theoretically sound, empirically effective, sample-efficient, and robust to domain shift. Moreover,
1c.f. Section 3.1 in [ 5]: โThis fusion method is sensitive to the score scales ...which needs careful score normalizati... | 2210.11934.pdf |
unlike the parameters in RRF, the parameter(s) of a convex function are highly interpretable and, if no training samples are available, can be adjusted to incorporate domain knowledge.
We organize the remainder of this article as follows. In Section 2, we review ...
f957f16ac078-1 | 38]. Conventional wisdom has that retrieval must be recall-oriented while improving ranking quality
may be left to the re-ranking stages, which are typically Learning to Rank ( LtR) models [ 16,23,
28,38,40]. There is indeed much research on the trade-offs between recall and precision in such
multi-stage cascades [ 7,2... | 2210.11934.pdf |
f957f16ac078-2 | inversely proportionate to the fraction of documents that contain the termโand adds the scores of
query terms to arrive at the final similarity or relevance score. Because BM25, like other lexical
scoring functions, insists on an exact match of terms, even a slight typo can throw the function off.
This vocabulary misma... | 2210.11934.pdf |
f957f16ac078-3 | cient for first-stage retrieval, however, making them more suitable for re-ranking stages. But by
cleverly disentangling the query and document transformations into the so-called dual-encoder
architecture, where, in the resulting design, the โembeddingโ of a document can be computed
independently of queries, we can pre... | 2210.11934.pdf |
357f2ffd45bc-0 | obtain the vector representation of the query during inference. At a high level, these models project
queries and documents onto a low-dimensional vector space where semantically-similar points
stay closer to each other. By doing so we transform the retrieval problem to one of similarity search
or Approximate Nearest N... | 2210.11934.pdf |
packages or through a managed service such as Pinecone², creating an opportunity to use deep models and vector representations for first-stage retrieval [12, 42], a setup that we refer to as semantic search.
Semantic search, however, has its own limitations. Previous studies ...
f16a465ce965-1 | many existing fusion functions in experiments, but none compares the main ideas comprehensively.
We review the popular fusion functions from these works in the subsequent sections and, through
a comparative study, elaborate what about their behavior may or may not be problematic.
3 SETUP
In the sections that follow, we... | 2210.11934.pdf |
f16a465ce965-2 | vectors in R|๐|, with|๐|denoting the size of the vocabulary, and ๐Lexis typically BM25. A retrieval
system ois the spaceQรD equipped with a metric ๐o(ยท,ยท)โwhich need not be a proper metric.
We denote the set of top- ๐documents retrieved for query ๐by retrieval system oby๐
๐
o(๐). We
write๐o(๐,๐)to denote the... | 2210.11934.pdf |
f16a465ce965-3 | where 1๐is1when the predicate ๐holds and 0otherwise. In words, and ignoring the subtleties
introduced by the presence of score ties, the rank of document ๐is the count of documents whose
score is larger than the score of ๐.
Hybrid retrieval operates on the product space of the systems o_i, equipped with a metric f_Fusion : ∏_i (Q × D) → R. Without ...
f16a465ce965-4 | one of the top- ๐sets (i.e.,๐โU๐(๐)but๐โ๐
๐
o๐(๐)for some o๐), we compute its missing score
(i.e.,๐o๐(๐,๐)) prior to fusion.
2http://pinecone.io | 2210.11934.pdf |
2509e69bc750-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:5
3.2 Empirical Setup
Datasets: We evaluate our methods on a variety of publicly available benchmark datasets, in both in-domain and out-of-domain, zero-shot settings. One of the datasets is the MS MARCO Passage Retrieval v1 dataset [25], a publicly available r...
2509e69bc750-1 | andFiQA (financial). For details of and statistics for each dataset, we refer the reader to [36].
Lexical search : We use PISA [ 22] for keyword-based lexical retrieval. We tokenize queries
and documents by space and apply stemming available in PISAโwe do not employ any other
preprocessing steps such as stopword remova... | 2210.11934.pdf |
2509e69bc750-2 | exceeding a total of 1 billion pairs of text, including NQ, MS MARCO Passage, and Quora . As such,
we consider all experiments on these three datasets as in-domain, and the rest as out-of-domain.
We use the exact search for inner product algorithm ( IndexFlatIP ) from FAISS [ 11] to retrieve top
1000 approximate neares... | 2210.11934.pdf |
2509e69bc750-3 | cut-off) rather than shallow metrics per the discussion in [ 39] to understand the performance of
each system more completely.
4 ANALYSIS OF CONVEX COMBINATION OF RETRIEVAL SCORES
We are interested in understanding the behavior and properties of fusion functions. In the remainder
of this work, we study through that len... | 2210.11934.pdf |
daf800e846e5-0 | 111:6 Sebastian Bruch, Siyu Gai, and Amir Ingber
An interesting property of this fusion is that it takes into account the distribution of scores. In other words, the distance between the lexical (or semantic) scores of two documents plays a significant role in determining their final hybrid score. One disadvantage, however ...
daf800e846e5-1 | below:
๐mm(๐o(๐,๐))=๐o(๐,๐)โ๐๐
๐๐โ๐๐, (2)
where๐๐=min๐โU๐(๐)๐o(๐,๐)and๐๐=max๐โU๐(๐)๐o(๐,๐). We note that, min-max scaling is
thede facto method in the literature, but other choices of ๐o(ยท)in the more general expression
below:
๐Convex(๐,๐)=๐ผ๐Sem(๐Sem(๐,๐))+( 1โ๐ผ)๐Lex(๐Lex(๐,๐)... | 2210.11934.pdf |
daf800e846e5-2 | are valid as well so long as ๐Sem,๐Lex:RโRare monotone in their argument. For example, for
reasons that will become clearer later, we can redefine the normalization by replacing the minimum
of the set with the theoretical minimum of the function (i.e., the maximum value that is always less
than or equal to all values... | 2210.11934.pdf |
daf800e846e5-3 | Another popular choice is the standard score (z-score) normalization which is defined as follows:
๐z(๐o(๐,๐))=๐o(๐,๐)โ๐
๐, (5)
where๐and๐denote the mean and standard deviation of the set of scores ๐o(๐,ยท)for query๐.
We will return to normalization shortly, but we make note of one small but important fact:... | 2210.11934.pdf |
daf800e846e5-4 | 4.1 Suitability of Convex Combination
A convex combination of scores is a natural choice for creating a mixture of two retrieval systems,
but is it a reasonable choice? It has been established in many past empirical studies that ๐Convex with
min-max normalization often serves as a strong baseline. So the answer to our... | 2210.11934.pdf |
6598f6af8980-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:7
Fig. 1. Visualization of the normalized lexical (φ_tmm(f_Lex)) and semantic (φ_tmm(f_Sem)) scores of query-document pairs sampled from the validation split of each dataset: (a) MS MARCO, (b) Quora, (c) NQ, (d) FiQA, (e) HotpotQA, (f) Fever. Shown in red are up to 20 ...
8eacecb53293-0 | 111:8 Sebastian Bruch, Siyu Gai, and Amir Ingber
Fig. 2. Effect of f_Convex on pairs of lexical and semantic scores for (a) α = 0.6 and (b) α = 0.8.
From these figures, it is clear that positive and negative samples form clusters that are, with some error, separable by a linear function. What is different between datasets is the ...
8eacecb53293-1 | of negative samples where lexical scores vanish.
This empirical evidence suggests that lexical and semantic scores may indeed be complementaryโ
an observation that is in agreement with prior work [ 5]โand a line may be a reasonable choice for
distinguishing between positive and negative samples. But while these figures... | 2210.11934.pdf |
8eacecb53293-2 | positives and negatives that is unsurprisingly more in tune with the ๐ผ=0.8setting of๐Convex than
๐ผ=0.6.
4.2 Role of Normalization
We have thus far used min-max normalization to be consistent with the literature. In this section,
we ask the question first raised by Chen et al. [ 5] on whether and to what extent the c... | 2210.11934.pdf |
a09d9440f156-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:9
• φ_mm: min-max scaling of Equation (2);
• φ_tmm: theoretical min-max scaling of Equation (4);
• φ_z: z-score normalization of Equation (5);
• φ_mm–Lex: min-max scaling of lexical scores, unnormalized semantic scores;
• φ_tmm–Lex: theoretical min-max normalized lexical ...
a09d9440f156-1 | For example, when ๐Sem(๐ฅ)=๐๐ฅ+๐and๐Lex(๐ฅ)=๐๐ฅ+๐are linear transformations of scores
for some positive coefficients ๐,๐and real intercepts ๐,๐, then they can be reduced to the following
rank-equivalent form:
๐Convex(๐,๐)๐=(๐๐ผ)๐Sem(๐,๐)+๐(1โ๐ผ)๐Lex(๐,๐).
In fact, letting ๐ผโฒ=๐๐ผ/[๐๐ผ+๐(1โ๐ผ)]t... | 2210.11934.pdf |
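A quick numeric check of this reduction, with made-up scores and coefficients: fusing linearly transformed scores with weight alpha must order documents exactly as fusing the raw scores with alpha' = a·alpha / [a·alpha + c·(1 − alpha)].

```python
def convex(alpha, sem, lex):
    return [alpha * s + (1 - alpha) * l for s, l in zip(sem, lex)]

def ranking(scores):
    # Document indices sorted by descending fused score.
    return sorted(range(len(scores)), key=lambda i: -scores[i])

sem = [0.9, 0.2, 0.5]             # raw semantic scores (made up)
lex = [3.0, 7.0, 5.0]             # raw lexical scores (made up)
a, b, c, d = 2.0, 1.0, 0.5, -3.0  # positive slopes, arbitrary intercepts
alpha = 0.7
alpha_prime = a * alpha / (a * alpha + c * (1 - alpha))

# Fusing linearly transformed scores with weight alpha ...
transformed = ranking(convex(alpha,
                             [a * s + b for s in sem],
                             [c * l + d for l in lex]))
# ... orders documents exactly like fusing raw scores with alpha'.
raw = ranking(convex(alpha_prime, sem, lex))
```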
a09d9440f156-2 | normalization protocol. More formally:
Lemma 4.2. For every query, given an arbitrary ๐ผ, there exists a ๐ผโฒsuch that the convex combination
of min-max normalized scores with parameter ๐ผis rank-equivalent to a convex combination of z-score
normalized scores with ๐ผโฒ, and vice versa.
Proof. Write m_o and M_o for the minimum and maximum, and μ_o and σ_o for the mean and standard deviation, of the scores of retrieval system o for a query. Starting from the z-score-normalized combination, and writing ≅ for rank equivalence:

α (f_Sem(q,d) − μ_Sem)/σ_Sem + (1 − α)(f_Lex(q,d) − μ_Lex)/σ_Lex
  ≅ (α/σ_Sem) f_Sem(q,d) + ((1 − α)/σ_Lex) f_Lex(q,d)
  ≅ (α/σ_Sem)(M_Sem − m_Sem) · (f_Sem(q,d) − m_Sem)/(M_Sem − m_Sem)
    + ((1 − α)/σ_Lex)(M_Lex − m_Lex) · (f_Lex(q,d) − m_Lex)/(M_Lex − m_Lex),

where in every step we either added a constant or multiplied the expression by a positive constant, both rank-preserving operations. The last expression is a positively weighted combination of min-max normalized scores, so letting

α′ = [α (M_Sem − m_Sem)/σ_Sem] / [α (M_Sem − m_Sem)/σ_Sem + (1 − α)(M_Lex − m_Lex)/σ_Lex]

completes the proof. The other direction is similar. □
The fact above implies that the problem of tuning α for a query in a min-max normalization regime is equivalent to learning α′ in a z-score normalized setting. In other words, there is a one-to-one relationship between the ...
79a058f189f6-0 | 111:10 Sebastian Bruch, Siyu Gai, and Amir Ingber
The question we wish to answer is as follows: under what conditions is f_Convex with parameter α and a pair of normalization functions (φ_Sem, φ_Lex) rank-equivalent to an f′_Convex with a new pair of normalization functions (φ′_Sem, φ′_Lex) and weight α′? That is, for a ...
79a058f189f6-1 | in the domains of ๐and๐we have that|๐(๐ฆ)โ๐(๐ฅ)|โฅ๐ฟ|๐(๐ฆ)โ๐(๐ฅ)|for some๐ฟโฅ1.
For example, ๐mm(ยท)is an expansion with respect to ๐tmm(ยท)with a factor ๐ฟthat depends on the
range of the scores. As another example, ๐z(ยท)is an expansion with respect to ๐mm(ยท).
Definition 4.4. For two pairs of functions φ, ψ : R → R ...

|φ′(y) − φ′(x)| / |φ(y) − φ(x)| = C · |ψ′(y) − ψ′(x)| / |ψ(y) − ψ(x)|.

When C is independent of the points x and y, we call this relative expansion uniform:

(|Δφ′| / |Δφ|) / (|Δψ′| / |Δψ|) = C,  ∀ x, y.

As an example, if φ and ψ are min-max scaling and φ′ and ψ′ are z-score normalization, then their respective rates of expansion ...
79a058f189f6-3 | rank-equivalent on a collection of queries ๐:
๐Convex =๐ผ๐(๐Sem(๐,๐))+( 1โ๐ผ)๐(๐Lex(๐,๐)),
and
๐โฒ
Convex =๐ผโฒ๐โฒ(๐Sem(๐,๐))+( 1โ๐ผโฒ)๐โฒ(๐Lex(๐,๐)),
if for the monotone functions ๐,๐,๐โฒ,๐โฒ:RโR,๐โฒexpands with respect to ๐more rapidly than ๐โฒ
expands with respect to ๐with a uniform rate ๐.
Proof.... | 2210.11934.pdf |
79a058f189f6-4 | ranked above ๐๐according to ๐Convex . Shortening ๐o(๐,๐๐)to๐(๐)
ofor brevity, we have that:
๐(๐)
Convex>๐(๐)
Convex=โ๐ผ
(๐(๐(๐)
Sem)โ๐(๐(๐)
Sem))
| {z }
ฮ๐๐ ๐+(๐(๐(๐)
Lex)โ๐(๐(๐)
Lex))
| {z }
ฮ๐๐๐
>๐(๐(๐)
Lex)โ๐... | 2210.11934.pdf |
79a058f189f6-5 | Lex)
This holds if and only if we have the following:
(
๐ผ>1/(1+ฮ๐๐ ๐
ฮ๐๐๐),ifฮ๐๐ ๐+ฮ๐๐๐>0,
๐ผ<1/(1+ฮ๐๐ ๐
ฮ๐๐๐),otherwise.(6)
Observe that, because of the monotonicity of a convex combination and the monotonicity of
the normalization functions, the case ฮ๐๐ ๐<0andฮ๐๐๐>0(which implies that the se... | 2210.11934.pdf |
525a513a60ab-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:11
the opposite case, Δφ_ij > 0 and Δψ_ij < 0, always leads to the correct order regardless of the weight in the convex combination. We consider the other two cases separately below.
Case 1: Δφ_ij > 0 and Δψ_ij > 0. Because of the monotonicity property, we can deduce that ...
525a513a60ab-1 | By assumption, using Definition 4.4, we observe that:
ฮ๐โฒ
๐ ๐
ฮ๐๐ ๐โฅฮ๐โฒ
๐๐
ฮ๐๐๐=โฮ๐โฒ
๐ ๐
ฮ๐โฒ
๐๐โฅฮ๐๐ ๐
ฮ๐๐๐.
As such, the lower-bound on ๐ผโฒimposed by documents ๐๐and๐๐of query๐,๐ฟโฒ
๐ ๐(๐), is smaller than
the lower-bound on ๐ผ,๐ฟ๐ ๐(๐). Like๐ผ, this case does not additionally constrai... | 2210.11934.pdf |
525a513a60ab-2 | the upper-bound does not change: ๐โฒ
๐ ๐(๐)=๐๐ ๐(๐)=1).
Case 2: ฮ๐๐ ๐<0,ฮ๐๐๐<0. Once again, due to monotonicity, it is easy to see that ฮ๐โฒ
๐ ๐<0and
ฮ๐โฒ
๐๐<0. Equation (6) tells us that, for the order to be preserved under ๐โฒ
Convex, we must similarly
have that:
๐ผโฒ<1/(1+ฮ๐โฒ
๐ ๐
ฮ๐โฒ
๐๐).
Once ... | 2210.11934.pdf |
525a513a60ab-3 | For๐โฒ
Convexto induce the same order as ๐Convex among all pairs of documents for all queries in ๐,
the intersection of the intervals produced by the constraints on ๐ผโฒhas to be non-empty:
๐ผโฒโร
๐[max
๐ ๐๐ฟโฒ
๐ ๐(๐),min
๐ ๐๐โฒ
๐ ๐(๐)]=[max
๐,๐ ๐๐ฟโฒ
๐ ๐(๐),min
๐,๐ ๐๐โฒ
๐ ๐(๐)]โ โ
.
We next prove th... | 2210.11934.pdf |
525a513a60ab-4 | We next prove that ๐ผโฒis always non-empty to conclude the proof of the theorem.
By Equation (6) and the existence of ๐ผ, we know that max ๐,๐ ๐๐ฟ๐ ๐(๐)โคmin ๐,๐ ๐๐๐ ๐(๐). Suppose
that documents ๐๐and๐๐of query๐1maximize the lower-bound, and that documents ๐๐and๐๐of
query๐2minimize the upper-bound.... | 2210.11934.pdf |
525a513a60ab-5 | ฮ๐๐๐
Because of the uniformity of the relative expansion rate, we can deduce that:
ฮ๐โฒ
๐ ๐
ฮ๐โฒ
๐๐โฅฮ๐โฒ
๐๐
ฮ๐โฒ๐๐=โmax
๐,๐ ๐๐ฟโฒ
๐ ๐(๐)โคmin
๐,๐ ๐๐โฒ
๐ ๐(๐).
โก
It is easy to show that the theorem above also holds when the condition is updated to reflect a shift of lower- and upper-bounds to the right ...
525a513a60ab-6 | malization or any other linear transformation that is bounded and does not severely distort the
distribution of scores, especially among the top-ranking documents, results in a rank-equivalent
function. At most, for any given value of the ranking metric of interest such as NDCG, we should
observe a shift of the weight ... | 2210.11934.pdf |
dc53db166775-0 | 111:12 Sebastian Bruch, Siyu Gai, and Amir Ingber
Fig. 3. Effect of normalization on the performance of f_Convex as a function of α on the validation set, for (a) MS MARCO, (b) Quora, (c) HotpotQA, (d) FiQA.
effect empirically on select datasets. As anticipated, the peak performance in terms of NDCG shifts to the left or right ...
dc53db166775-1 | |ฮ๐โฒ
๐๐|/|ฮ๐๐๐|=๐,โ(๐๐,๐๐)st๐Convex(๐๐)>๐Convex(๐๐).
Second, it turns out, close to uniformity (i.e., when ๐isconcentrated around one value) is often
sufficient for the effect to materialize in practice. We observe this phenomenon empirically by fixing
the parameter ๐ผin๐Convex with one transformatio... | 2210.11934.pdf |
dc53db166775-2 | example, if๐Convex uses๐tmmand๐โฒ
Convexuses๐mmโdenoted by ๐tmmโ๐mmโfor every choice of ๐ผ, | 2210.11934.pdf |
415eb4f03f65-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:13
Fig. 4. Relative expansion rate C of semantic scores with respect to lexical scores, when changing from one transformation to another, with 95% confidence intervals, on (a) MS MARCO, (b) Quora, (c) HotpotQA, (d) FiQA. Prior to visualization, we normalize values of ...
415eb4f03f65-1 | distorts the expansion rates. This goes some way to explain why normalization and boundedness
are important properties.
In the last two sections, we have answered RQ1: Convex combination is an appropriate fusion
function and its performance is not sensitive to the choice of normalization so long as the transfor-
mation... | 2210.11934.pdf |
fe3c91102a70-0 | 111:14 Sebastian Bruch, Siyu Gai, and Amir Ingber
Table 1. Recall@1000 and NDCG@1000 (except SciFact and NFCorpus, where the cutoff is 100) on the test split of various datasets for lexical and semantic search, as well as hybrid retrieval using RRF [5] (η = 60) and TM2C2 (α = 0.8). The symbols ‡ and ∗ indicate statistical significance ...
fe3c91102a70-1 | NQ 0.886โกโ0.978โกโ0.985 0.984 0.382โกโ0.505โก0.542 0.514โก0.637
Quora 0.992โกโ0.999 0.999 0.999 0.800โกโ0.889โกโ0.901 0.877โก0.936zero-shotNFCorpus 0.283โกโ0.314โกโ0.348 0.344 0.298โกโ0.309โกโ0.343 0.326โก0.371
HotpotQA 0.878โกโ0.756โกโ0.884 0.888 0.682โกโ0.520โกโ0.699 0.675โก0.767 | 2210.11934.pdf |
fe3c91102a70-2 | FEVER 0.969โกโ0.931โกโ0.972 0.972 0.689โกโ0.558โกโ0.744 0.721โก0.814
SciFact 0.900โกโ0.932โกโ0.958 0.955 0.698โกโ0.681โกโ0.753 0.730โก0.796
DBPedia 0.540โกโ0.408โกโ0.564 0.567 0.415โกโ0.425โกโ0.512 0.489โก0.553
FiQA 0.720โกโ0.908 0.907 0.904 0.315โกโ0.467โก0.496 0.464โก0.561
replaced with minimum feasible value regardless of the candidat... | 2210.11934.pdf |
fe3c91102a70-3 | brevity.
5 ANALYSIS OF RECIPROCAL RANK FUSION
Chen et al. [ 5] show that RRF performs better and more reliably than a convex combination of
normalized scores. RRF is computed as follows:
๐RRF(๐,๐)=1
๐+๐Lex(๐,๐)+1
๐+๐Sem(๐,๐), (7)
where๐is a free parameter. The authors of [ 5] take a non-parametric view of R... | 2210.11934.pdf |
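Equation (7) is a two-liner; note that only the two ranks enter the function, while the raw lexical and semantic scores are discarded entirely:

```python
def rrf(lex_rank, sem_rank, eta=60):
    """Reciprocal Rank Fusion, Eq. (7).  The hybrid score depends only
    on the ranks produced by each system; any monotone rescaling of the
    underlying retrieval scores leaves it unchanged."""
    return 1.0 / (eta + lex_rank) + 1.0 / (eta + sem_rank)
```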
fe3c91102a70-4 | together, a quantity that is always larger than the number of parameters in a convex combination.
Let us begin by comparing the performance of RRF and TM2C2 empirically to get a sense of their
relative efficacy. We first verify whether hybrid retrieval leads to significant gains in in-domain and
out-of-domain experimen... | 2210.11934.pdf |
fe3c91102a70-5 | any given query.
Our results show that hybrid retrieval using RRF outperforms pure-lexical and pure-semantic
retrieval on most datasets. This fusion method is particularly effective on out-of-domain datasets, | 2210.11934.pdf |
82bb45ff1fd9-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:15
Fig. 5. Difference in NDCG@1000 of TM2C2 and RRF (positive indicates better ranking quality by TM2C2) as a function of α, for (a) in-domain and (b) out-of-domain datasets. When α = 0 the model is rank-equivalent to lexical search, while α = 1 is rank-equivalent to semantic search. ...
82bb45ff1fd9-1 | out-of-domain datasets in Figure 5(b). These figures also compare the performance of TM2C2 with
RRF by reporting the difference between NDCG of the two methods. These plots show that there
always exists an interval of ๐ผfor which๐TM2C2โป๐RRFwithโปindicating better rank quality.
5.1 Effect of Parameters
Chen et al. righ... | 2210.11934.pdf |
82bb45ff1fd9-2 | approach when computing ranks.
The second issue is that, unlike TM2C2, RRF ignores the raw scores and discards information
about their distribution. In this regime, whether or not a document has a low or high semantic score
does not matter so long as its rank in ๐
๐
Semstays the same. It is arguable in this case wheth... | 2210.11934.pdf |
4ea204941815-0 | 111:16 Sebastian Bruch, Siyu Gai, and Amir Ingber
Fig. 6. Visualization of the reciprocal rank determined by lexical (1/(60 + π_Lex)) and semantic (1/(60 + π_Sem)) retrieval for query-document pairs sampled from the validation split of each dataset: (a) MS MARCO, (b) Quora, (c) NQ, (d) FiQA, (e) HotpotQA, (f) Fever. ...
4ea204941815-1 | document pairs as before. From the figure, we can see that samples are pulled towards one of the
poles at(0,0)and(1/61,1/61). The former attracts a higher concentration of negative samples
while the latter positive samples. While this separation is somewhat consistent across datasets, | 2210.11934.pdf |
e88b15fd1cbc-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:17
Fig. 7. Difference in NDCG@1000 of f_RRF with distinct values of η_Lex and η_Sem, and f_RRF with η_Lex = η_Sem = 60 (positive indicates better ranking quality by the former), on (a) MS MARCO and (b) HotpotQA. On MS MARCO, an in-domain dataset, NDCG improves when η_Lex > η_Sem, while ...
e88b15fd1cbc-1 | consistently across domains.
We argued earlier that RRF is parametric and that it, in fact, has as many parameters as there are
retrieval functions to fuse. To see this more clearly, let us rewrite Equation (7) as follows:
๐RRF(๐,๐)=1
๐Lex+๐Lex(๐,๐)+1
๐Sem+๐Sem(๐,๐). (8)
We study the effect of parameters on ... | 2210.11934.pdf |
e88b15fd1cbc-2 | Crucially, performance improves off-diagonal, where the parameter takes on different values for
the semantic and lexical components. On MS MARCO, shown in Figure 7(a), NDCG improves when
๐Lex>๐Sem, while the opposite effect can be seen for HotpotQA , an out-of-domain dataset. This
can be easily explained by the fact ... | 2210.11934.pdf |
4346c658ecf5-0 | 111:18 Sebastian Bruch, Siyu Gai, and Amir Ingber
Fig. 8. Effect of f_RRF with select configurations of η_Lex and η_Sem on pairs of ranks from lexical and semantic systems: (a) η_Lex = 60, η_Sem = 60; (b) η_Lex = 10, η_Sem = 4; (c) η_Lex = 3, η_Sem = 5. When η_Lex > η_Sem, the fusion function discounts the lexical system's contribution.
Table ...
4346c658ecf5-1 | paired two-tailed ๐ก-test.
NDCG
Dataset TM2C2 RRF(60,60)RRF(5,5)RRF(10,4)in-domainMS MARCO 0.454 0.425โก0.435โกโ0.451โ
NQ 0.542 0.514โก0.521โกโ0.528โกโ
Quora 0.901 0.877โก0.885โกโ0.896โzero-shotNFCorpus 0.343 0.326โก0.335โกโ0.327โก
HotpotQA 0.699 0.675โก0.693โ0.621โกโ
FEVER 0.744 0.721โก0.727โกโ0.649โกโ
SciFact 0.753 0.730โก0.738โก0.71... | 2210.11934.pdf |
4346c658ecf5-2 | DBPedia 0.512 0.489โก0.489โก0.480โกโ
FiQA 0.496 0.464โก0.470โกโ0.482โกโ
indeed lead to gains in NDCG on in-domain datasets, the tuned function does not generalize well to out-of-domain datasets.
The poor generalization can be explained by the reversal of patterns observed in Figure 7, where η_Lex > η_Sem suits in-domain datasets ...
... of raw scores, it loses valuable information in the process of fusing retrieval systems. In our final research question, RQ3, we investigate if this indeed matters in practice.
6c143f58ddf2-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:19
Fig. 9. The difference in NDCG@1000 of f_SRRF and f_RRF with η = 60 (positive indicates better ranking quality by SRRF) as a function of β, for (a) in-domain and (b) out-of-domain datasets.
The notion of "preserving" information is well captured by the concept of Lipschitz continuity ...
6c143f58ddf2-1 | thus trivial to approximate this quantity using a generalized sigmoid with parameter ๐ฝ:๐๐ฝ(๐ฅ)=
1/(1+exp(โ๐ฝ๐ฅ)). As๐ฝapproaches 1, the sigmoid takes its usual Sshape, while ๐ฝโโ produces
a very close approximation of the indicator. Interestingly, the Lipschitz constant of ๐๐ฝ(ยท)is, in fact,
๐ฝ. As๐ฝincreases, the a... | 2210.11934.pdf |
6c143f58ddf2-2 | ๐+ห๐Lex(๐,๐)+1
๐+ห๐Sem(๐,๐), (9)
where ห๐o(๐,๐๐)=0.5+ร
๐๐โ๐
๐o(๐)๐๐ฝ(๐o(๐,๐๐)โ๐o(๐,๐๐)). By increasing ๐ฝwe increase the Lipschitz
constant of ๐SRRF. This is the lever we need to test the idea that Lipschitz continuity matters and
that functions that do not distort the distributional propertie... | 2210.11934.pdf |
6c143f58ddf2-3 | that functions that do not distort the distributional properties of raw scores lead to better ranking
quality.
⁶ A function f is Lipschitz continuous with constant L if ||f(y) − f(x)||_D ≤ L ||y − x||_E for some norms ||·||_D and ||·||_E on the output and input spaces of f.
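Equation (9) can be sketched directly; here we read the sum as running over every retrieved score, including the document's own (σ_β(0) = 0.5), so as β grows the smoothed rank approaches the 1-based hard rank (scores below are made up):

```python
import math

def sigmoid(x, beta):
    # Generalized sigmoid; approaches the indicator 1[x > 0] as beta grows.
    return 1.0 / (1.0 + math.exp(-beta * x))

def smooth_rank(scores, i, beta):
    """Smoothed rank of Eq. (9): 0.5 plus a soft count, over the retrieved
    list, of documents scoring above document i."""
    return 0.5 + sum(sigmoid(s - scores[i], beta) for s in scores)

def srrf(lex_scores, sem_scores, i, eta=60.0, beta=10.0):
    """SRRF: RRF computed on smoothed ranks instead of hard ranks."""
    return (1.0 / (eta + smooth_rank(lex_scores, i, beta))
            + 1.0 / (eta + smooth_rank(sem_scores, i, beta)))
```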
7f308cf6cc65-0 | 111:20 Sebastian Bruch, Siyu Gai, and Amir Ingber
Fig. 10. The difference in NDCG@1000 of f_SRRF and f_RRF with η = 5 (positive indicates better ranking quality by SRRF) as a function of β, for (a) in-domain and (b) out-of-domain datasets.
Figures 9 and 10 visualize the difference between SRRF and RRF for two settings of η selected based on ...
7f308cf6cc65-1 | While we acknowledge the possibility that the approximation in Equation (9) may cause a change
in ranking quality, we expected that change to be a degradation, not an improvement. However,
given we do observe gains by smoothing the function, and that the only other difference between
SRRF and RRF is their Lipschitz con... | 2210.11934.pdf |
7f308cf6cc65-2 | property in our analysis of the convex combination fusion function. It is trivial to show why this
property is crucial.
Homogeneity : The order induced by a fusion function must be unaffected by a positive re-
scaling of query and document vectors. That is: ๐Hybrid(๐,๐)๐=๐Hybrid(๐,๐พ๐)๐=๐Hybrid(๐พ๐,๐)where
๏ฟฝ... | 2210.11934.pdf |
b436cf251fee-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:21
Table 3. Mean NDCG@1000 (NDCG@100 for SciFact and NFCorpus) on the test split of various datasets for hybrid retrieval using TM2C2 (α = 0.8), RRF(η), and SRRF(η, β). The parameters β are fixed to values that maximize NDCG on the validation split of in-domain ...
b436cf251fee-1 | NQ 0.542 0.514โก0.516โก0.521โก0.517โก
Quora 0.901 0.877โก0.889โกโ0.885โก0.889โกโzero-shotNFCorpus 0.343 0.326โก0.338โกโ0.335โก0.339โก
HotpotQA 0.699 0.675โก0.695โ0.693โก0.705โกโ
FEVER 0.744 0.721โก0.725โก0.727โก0.735โกโ
SciFact 0.753 0.730โก0.740โก0.738โก0.740โก
DBPedia 0.512 0.489โก0.501โกโ0.489โก0.492โก
FiQA 0.496 0.464โก0.468โก0.470โก0.469โก | 2210.11934.pdf |
b436cf251fee-2 | Boundedness : Recall that, a convex combination without score normalization is often ineffective
and inconsistent because BM25 is unbounded and that lexical and semantic scores are on different
scales. To see this effect we turn to Figure 11.
We observe in Figure 11(a) that, for in-domain datasets, adding the unnormali... | 2210.11934.pdf |
b436cf251fee-3 | produced by the lexical retriever.
To avoid that pitfall, we require that ๐Hybrid be bounded:|๐Hybrid|โค๐for some๐>0. As we
have seen before, normalizing the raw scores addresses this issue.
Lipschitz Continuity : We argued that because RRF does not take into consideration the raw
scores, it distorts their distribut... | 2210.11934.pdf |
b436cf251fee-4 | resource-constrained research labs to innovate. Given the strong evidence supporting the idea that
hybrid retrieval is most valuable when applied to out-of-domain datasets [ 5], we believe that ๐Hybrid
should be robust to distributional shifts and should not need training or fine-tuning on target | 2210.11934.pdf |
94824fc6dcf4-0 | 111:22 Sebastian Bruch, Siyu Gai, and Amir Ingber
Fig. 11. The difference in NDCG of a convex combination of unnormalized scores and a pure semantic search (positive indicates better ranking quality by the convex combination) as a function of α, for (a) in-domain and (b) out-of-domain datasets.
datasets. This implies that either the function ...
94824fc6dcf4-1 | ๐ผ=0.8from Table 1. Additionally, we take the train split of each dataset and sample from it
progressively larger subsets (with a step size of 5%), and use it to tune the parameters of each
function. We then measure NDCG of the tuned functions on the test split. For the depicted datasets
as well as all other datasets i... | 2210.11934.pdf |
94824fc6dcf4-2 | by RRF-CC and defined as:
๐RRF(๐,๐)=(1โ๐ผ)1
๐Lex+๐Lex(๐,๐)+๐ผ1
๐Sem+๐Sem(๐,๐), (10)
where๐ผ,๐Lex, and๐Semare tunable parameters. The question this particular formulation tries to
answer is whether adding an additional weight to the combination of the RRF terms affects retrieval
quality. From the figure, it... | 2210.11934.pdf |
0e96be142e14-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:23
Fig. 12. Sample efficiency of TM2C2 and the parameterized variants of RRF on (a) MS MARCO, (b) Quora, (c) HotpotQA, (d) Fever: a single parameter where η_Sem = η_Lex; two parameters where we allow different values of η_Sem and η_Lex; and a third variation that is a co...
0e96be142e14-1 | 7 CONCLUSION
We studied the behavior of two popular functions that fuse together lexical and semantic retrieval
to produce hybrid retrieval, and identified their advantages and pitfalls. Importantly, we inves-
tigated several questions and claims in prior work. We established theoretically that the choice
of normalizat... | 2210.11934.pdf |
8fb80edd1315-0 | 111:24 Sebastian Bruch, Siyu Gai, and Amir Ingber
We believe that a convex combination with theoretical minimum-maximum normalization (TM2C2) indeed enjoys properties that are important in a fusion function. Its parameter, too, can be tuned sample-efficiently or set to a reasonable value based on domain knowledge. In o...
8fb80edd1315-1 | input, and indeed can be extended, both theoretically and empirically, to a setting where we have
more than just lexical and semantic scores, it is nonetheless important to conduct experiments
and validate that our findings generalize. We believe, however, that our current assumptions are
practical and are reflective o... | 2210.11934.pdf |
8fb80edd1315-2 | Retrieval (Dublin, Ireland). 997โ1000.
[3]Sebastian Bruch, Claudio Lucchese, and Franco Maria Nardini. 2022. ReNeuIR: Reaching Efficiency in Neural Information
Retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information
Retrieval (Madrid, Spain). 3462โ3465.
[4]Seb... | 2210.11934.pdf |
8fb80edd1315-3 | Research, ECIR 2022, Stavanger, Norway, April 10โ14, 2022, Proceedings, Part I (Stavanger, Norway). 95โ110.
[6]Gordon V. Cormack, Charles L A Clarke, and Stefan Buettcher. 2009. Reciprocal Rank Fusion Outperforms Condorcet
and Individual Rank Learning Methods. 758โ759.
[7]Van Dang, Michael Bendersky, and W Bruce Croft.... | 2210.11934.pdf |
8fb80edd1315-4 | for Computational Linguistics, Minneapolis, Minnesota, 4171โ4186.
[9]Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stรฉphane Clinchant. 2022. From Distillation to Hard
Negative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM | 2210.11934.pdf |
3ca2c422a8e4-0 | Negative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM
SIGIR Conference on Research and Development in Information Retrieval (Madrid, Spain). 2353โ2359.
[10] Kalervo Jรคrvelin and Jaana Kekรคlรคinen. 2000. IR evaluation methods for retrieving highly relevant document... | 2210.11934.pdf |
c98cf58f7990-0 | An Analysis of Fusion Functions for Hybrid Retrieval 111:25
Empirical Methods in Natural Language Processing (EMNLP).
[13] Saar Kuzi, Mingyang Zhang, Cheng Li, Michael Bendersky, and Marc Najork. 2020. Leveraging Semantic and Lexical Matching to Improve the Recall of Document Retrieval Systems: A Hybrid Approach. ArXiv ...
arXiv:2010.06467 [cs.IR]
[16] Tie-Yan Liu. 2009. Learning to Rank for Information Retrieval. Found. Trends Inf. Retr. 3, 3 (2009), 225–331.
[17] Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2021. Sparse, Dense, and Attentional Representations for Text Retrieval. Transactions of the Association for ...
Information Retrieval 16, 5 (2013), 584–628.
[21] Yu. A. Malkov and D. A. Yashunin. 2016. Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs.
[22] Antonio Mallia, Michal Siedlaczek, Joel Mackenzie, and Torsten Suel. 2019. PISA: Performant Indexes and Search for Acad...