diff --git "a/2406/2406.16745.md" "b/2406/2406.16745.md"
new file mode 100644
--- /dev/null
+++ "b/2406/2406.16745.md"
@@ -0,0 +1,13960 @@
Title: Bandits with Preference Feedback: A Stackelberg Game Perspective

URL Source: https://arxiv.org/html/2406.16745

Markdown Content:

License: CC BY-NC-SA 4.0
arXiv:2406.16745v3 [cs.LG] 18 Dec 2025

# Bandits with Preference Feedback: A Stackelberg Game Perspective

Barna Pásztor⋆,1,2, Parnian Kassraie⋆,1, Andreas Krause1,2
1 ETH Zurich, 2 ETH AI Center
{bpasztor, pkassraie, krausea}@ethz.ch

## Abstract

Bandits with preference feedback present a powerful tool for optimizing unknown target functions when only pairwise comparisons are allowed instead of direct value queries. This model allows for incorporating human feedback into online inference and optimization, and has been employed in systems for fine-tuning large language models. The problem is well understood in simplified settings with linear target functions or over small finite domains, which limits practical interest. Taking the next step, we consider infinite domains and nonlinear (kernelized) rewards. In this setting, selecting a pair of actions is quite challenging and requires balancing exploration and exploitation at two levels: within the pair, and along the iterations of the algorithm. We propose MaxMinLCB, which emulates this trade-off as a zero-sum Stackelberg game and chooses action pairs that are informative and yield favorable rewards. MaxMinLCB consistently outperforms existing algorithms and satisfies an anytime-valid, rate-optimal regret guarantee. This is due to our novel preference-based confidence sequences for kernelized logistic estimators.

## 1 Introduction

In standard bandit optimization, a learner repeatedly interacts with an unknown environment that gives numerical feedback on the chosen actions according to a utility function $f$. However, in applications such as fine-tuning large language models, drug testing, or search engine optimization, the quantitative value of design choices or test outcomes is either not directly observable, known to be inaccurate, or systematically biased, e.g., if provided by human feedback (casper2023open). A solution is to optimize the target based on comparative feedback provided for a pair of queries, which is proven to be more robust to certain biases and uncertainties in the queries (ji2023provable).

Bandits with preference feedback, or dueling bandits, address this problem and propose strategies for choosing query/action pairs that yield high utility over the horizon of interactions. At the core of such algorithms is uncertainty quantification and inference for $f$ in regions of interest, which is closely tied to the exploration-exploitation dilemma over the course of queries. Observing only comparative feedback poses an additional challenge, as we now need to balance this trade-off jointly over two actions.
This challenge is further exacerbated when optimizing over vast or infinite action domains. As a remedy, prior work often grounds one of the actions by choosing it either randomly or greedily, and balances exploration-exploitation only for the second action, as a reaction to the first (ailon2014reducing; zoghi2014relative; kirschner2021bias; mehta2023kernelized). This approach works well for simple utility functions over low-dimensional domains; however, it does not scale to more complex problems.

Aiming to solve this problem, we focus on continuous domains in the Euclidean vector space and complex utility functions that belong to the Reproducing Kernel Hilbert Space (RKHS) of a potentially non-smooth kernel. We propose MaxMinLCB, a sample-efficient algorithm that at every step chooses the two actions jointly by playing a zero-sum Stackelberg (a.k.a. Leader-Follower) game. We choose the Lower Confidence Bound (LCB) of $f$ as the objective of this game, which the Leader aims to maximize and the Follower to minimize. The equilibrium of this game yields an action pair in which the first action is a favorable candidate for maximizing $f$, and the second action is the strongest competitor against the first. Our choice of the LCB as the objective leads to robustness against uncertainty when selecting the first action. Moreover, it makes the second action an optimistic choice of competitor, from its own perspective. We observe empirically that this approach creates a natural exploration scheme and, in turn, yields a more sample-efficient algorithm than standard baselines.

Our game-theoretic strategy leads to an efficient bandit solver if the LCB is a valid and tight lower bound on the utility function. To this end, we construct a confidence sequence for $f$ given pairwise preference feedback, modeling the noisy comparative observations with a logistic-type likelihood function. Our confidence sequence is anytime valid and holds uniformly over the domain, under the assumption that $f$ resides in an RKHS. We improve on prior work by removing or relaxing assumptions on the utility while maintaining the same rate of convergence. This result on preference-based confidence sequences may be of independent interest, as it targets the loss function typically used for Reinforcement Learning with Human Feedback.

**Contributions.** Our main contributions are:

- We propose a novel game-theoretic acquisition function for pairwise action selection with preference feedback.
- We construct preference-based confidence sequences for kernelized utility functions that are tight and anytime valid.
- Together, this yields MaxMinLCB, an algorithm for bandit optimization with preference feedback over continuous domains. MaxMinLCB satisfies $\mathcal{O}(\gamma_T\sqrt{T})$ regret, where $T$ is the horizon and $\gamma_T$ is the information gain of the kernel.
- We benchmark MaxMinLCB on a set of standard optimization problems, where it consistently outperforms common baselines from the literature.

## 2 Related Work

Learning with indirect feedback was first studied in the supervised preference learning setting (aiolli2004learning; chu2005preference). Subsequently, online and sequential settings were considered, motivated by applications in which the feedback is provided in an online manner, e.g., by a human (yue2012k; yue2009interactively; houlsby2011bayesian). bengs2021preference surveys this field comprehensively; here we include a brief background.
Referred to as dueling bandits, a rich body of work considers (finite) multi-armed domains and learns a preference matrix specifying the relations among the arms. Such work often relies on efficient sorting or tournament systems based on the frequency of wins for each action (jamieson2011active; zoghi2014relative2; komiyama2015optimal; wu2016double; falahatgar2017maximum). Rather than jointly selecting the arms, such strategies often simplify the problem by selecting one at random (zoghi2014relative; zimmert2018factored), greedily (chen2017dueling), or from the set of previously selected arms (ailon2014reducing). In contrast, we jointly optimize both actions by choosing them as the equilibrium of a two-player zero-sum Stackelberg game, enabling a more efficient exploration/exploitation trade-off.

The multi-armed dueling setting, which is reducible to multi-armed bandits (ailon2014reducing), naturally fails to scale to infinite compact domains, since regularity among "similar" arms is not exploited. To go beyond finite domains, utility-based dueling bandits consider an unknown latent function that captures the underlying preference, instead of relying on a preference matrix. The preference feedback is then modeled as the difference in the utilities of the two chosen actions, passed through a link function. Early work is limited to convex domains and imposes strong regularity assumptions (yue2009interactively; kumagai2017regret). These assumptions are relaxed to general compact domains if the utility function is linear (dudik2015contextual; saha2021optimal; saha2022efficient). Constructing valid confidence sets from comparative feedback is a challenging task. However, it is strongly related to uncertainty quantification with direct logistic feedback, which is extensively analyzed in the literature on logistic and generalized linear bandits (filippi2010parametric; faury2020improved; foster2018contextual; beygelzimer2019bandit; faury2022jointly; lee2024improved).

Preference-based bandit optimization with linear utility functions is well understood and has even been extended to reinforcement learning with preference feedback on trajectories (pacchiano2021dueling; zhan2023provable; zhu2024principled; ji2023provable; munos2023nash). However, such approaches have limited practical interest, since they cannot capture real-world problems with complex nonlinear utility functions. Alternatively, Reproducing Kernel Hilbert Spaces (RKHS) provide a rich model class for the utility, e.g., if the chosen kernel is universal. Many have proposed heuristic algorithms for bandits and Bayesian optimization in kernelized settings, albeit without theoretical guarantees (brochu2010tutorial; gonzalez2017preferential; sui2017multi; tucker2020preference; mikkola2020projective; takeno2023towards).

There have been attempts to prove convergence of kernelized algorithms for preference-based bandits (xu2020zeroth; kirschner2021bias; mehta2023kernelized; mehta2023sample). Such works employ a regression likelihood model, which requires assuming that both the utility and the probability of preference, as functions of the actions, lie in an RKHS. In doing so, they use a regression model to solve a problem that is inherently a classification problem. While the model is valid, it does not result in a sample-efficient algorithm. In contrast, we use a kernelized logistic negative log-likelihood loss to infer the utility function and provide confidence sets for its minimizer.
In a concurrent work, xu2024principled also consider the kernelized logistic likelihood model and propose a variant of the MultiSBM algorithm (ailon2014reducing) which uses likelihood-ratio confidence sets. The theoretical approach and the resulting algorithm bear significant differences, and the regret guarantee has a strictly worse dependency on the time horizon $T$, by a factor of $T^{1/4}$. This is discussed in more detail in Section 5.

## 3 Problem Setting

Consider an agent which repeatedly interacts with an environment: at step $t$ the agent selects two actions $\mathbf{x}_t, \mathbf{x}_t' \in \mathcal{X}$ and only observes stochastic binary feedback $y_t \in \{0,1\}$ indicating whether $\mathbf{x}_t \succ \mathbf{x}_t'$, that is, whether action $\mathbf{x}_t$ is preferred over action $\mathbf{x}_t'$. Formally, $\mathbb{P}(y_t = 1 \mid \mathbf{x}_t, \mathbf{x}_t') = \mathbb{P}(\mathbf{x}_t \succ \mathbf{x}_t')$, and $y_t = 0$ with probability $1 - \mathbb{P}(\mathbf{x}_t \succ \mathbf{x}_t')$. Based on the preference history $H_t = \{(\mathbf{x}_1, \mathbf{x}_1', y_1), \dots, (\mathbf{x}_t, \mathbf{x}_t', y_t)\}$, the agent aims to sequentially select favorable action pairs. Over a horizon of $T$ steps, the success of the agent is measured through the cumulative dueling regret

$$R_D(T) = \sum_{t=1}^{T}\frac{\mathbb{P}(\mathbf{x}^\star \succ \mathbf{x}_t) + \mathbb{P}(\mathbf{x}^\star \succ \mathbf{x}_t') - 1}{2}, \tag{1}$$

which is the average sub-optimality gap between the chosen pair and a globally preferred action $\mathbf{x}^\star$. To better understand this notion of regret, consider the scenario where both $\mathbf{x}_t$ and $\mathbf{x}_t'$ are optimal. Then both probabilities are equal to $0.5$, the regret incurred at step $t$ is zero, and the dueling regret does not grow further. This formulation of $R_D(T)$ is commonly used in the literature on dueling bandits and RL with preference feedback (urvoy2013generic; pacchiano2021dueling; zhu2024principled) and is adapted from yue2012k. Our goal is to design an algorithm that satisfies sublinear dueling regret, where $R_D(T)/T \to 0$ as $T\to\infty$. This implies that, given enough evidence, the algorithm converges to a globally preferred action. To this end, we take a utility-based approach and consider an unknown utility function $f: \mathcal{X}\to\mathbb{R}$, which reflects the preferences via

$$\mathbb{P}(\mathbf{x}_t \succ \mathbf{x}_t') \coloneqq s\big(f(\mathbf{x}_t) - f(\mathbf{x}_t')\big), \tag{2}$$

where $s: \mathbb{R}\to[0,1]$ is the sigmoid function, i.e., $s(a) = (1 + e^{-a})^{-1}$. Referred to as the Bradley-Terry model (bradley1952rank), this probabilistic model for binary feedback is widely used in the literature on logistic and generalized linear bandits (filippi2010parametric; faury2020improved). Under the utility-based model, $\mathbf{x}^\star = \arg\max_{\mathbf{x}\in\mathcal{X}} f(\mathbf{x})$, and we can draw connections to a classic bandit problem with direct feedback over the utility $f$. In particular, saha2021optimal shows that the dueling regret of (1) is equivalent, up to constant factors, to the average utility regret of the two actions, that is, $\sum_{t=1}^{T} f(\mathbf{x}^\star) - [f(\mathbf{x}_t) + f(\mathbf{x}_t')]/2$.

Throughout this paper, we make two key assumptions about the environment. We assume that the domain $\mathcal{X}\subset\mathbb{R}^{d_0}$ is compact and that the utility function lies in $\mathcal{H}_k$, the Reproducing Kernel Hilbert Space corresponding to some kernel function $k: \mathcal{X}\times\mathcal{X}\to\mathbb{R}$, with a bounded RKHS norm $\|f\|_k \le B$. Without loss of generality, we further suppose that the kernel function is normalized, with $k(\mathbf{x},\mathbf{x}) \le 1$ everywhere in the domain. Our set of assumptions extends the prior literature on logistic bandits and dueling bandits from linear rewards or finite action spaces to continuous domains with non-parametric rewards.
While our theoretical framework targets Euclidean domains, our methodology may be used on general domains of text or images, given vector embeddings obtained via unsupervised learning. Solving a bandit problem on top of embeddings from a pretrained language model is common practice in further fine-tuning of such models (e.g., nguyen2024predicting; mehta2023sample) and is demonstrated in our Yelp experiment (cf. Section 6.3). Lastly, we highlight that our results extend smoothly to contextual bandits with stochastic context by simply modifying the signature of the kernel function to $k'(\mathbf{x},\mathbf{x}',\mathbf{z}): \mathcal{X}\times\mathcal{X}\times\mathcal{Z}\to\mathbb{R}$, where $\mathbf{z}\in\mathcal{Z}$ denotes the context. This setting accommodates applications in active learning for fine-tuning of large language models, where the context is the prompt and the two actions are two alternative responses.

## 4 Kernelized Confidence Sequences with Direct Logistic Feedback

As a warm-up, we consider a hypothetical scenario where $\mathbf{x}_t' = \mathbf{x}_{\mathrm{null}}$ for all $t\ge1$, such that $f(\mathbf{x}_{\mathrm{null}}) = 0$. Therefore, at every step we suggest an action $\mathbf{x}_t$ and receive noisy binary feedback $y_t$, which is equal to one with probability $s(f(\mathbf{x}_t))$. This example reduces our problem to logistic bandits, which have been rigorously analyzed for linear rewards (filippi2010parametric; faury2020improved). We extend prior work to the non-parametric setting by proposing a tractable loss function for estimating the utility function, a.k.a. the reward. We present novel confidence intervals that quantify the uncertainty of the logistic predictions uniformly over the action domain. In doing so, we propose confidence sequences for the kernelized logistic likelihood model that are of independent interest for developing sample-efficient solvers for online and active classification.

The feedback $y_t$ is a Bernoulli random variable, and its likelihood depends on the utility function as $s(f(\mathbf{x}_t))^{y_t}[1 - s(f(\mathbf{x}_t))]^{1-y_t}$. Given the history $H_t$, we can then estimate $f$ by $f_t$, the minimizer of the regularized negative log-likelihood loss

$$\mathcal{L}_k^L(f; H_t) \coloneqq \sum_{\tau=1}^{t} -y_\tau\log\big[s(f(\mathbf{x}_\tau))\big] - (1-y_\tau)\log\big[1 - s(f(\mathbf{x}_\tau))\big] + \frac{\lambda}{2}\|f\|_k^2, \tag{3}$$

where $\lambda>0$ is the regularization coefficient. The regularization term ensures that $\|f_t\|_k$ is finite and bounded. For simplicity, we assume throughout the main text that $\|f_t\|_k \le B$. However, our theoretical guarantees do not rely on this assumption: in the appendix, we present a more rigorous analysis that projects $f_t$ back into the RKHS ball of radius $B$, ensuring that the $B$-boundedness condition is met instead of assuming it. We do not perform this projection in our experiments.
Solving for $f_t$ may seem intractable at first glance, since the loss is defined over functions in the large space $\mathcal{H}_k$. However, it is common knowledge that the solution has a parametric form and may be computed using gradient descent. This is a direct application of the Representer Theorem (scholkopf2001generalized) and is detailed in Proposition 1.

**Proposition 1** (Logistic Representer Theorem). *The regularized negative log-likelihood loss of (3) has a unique minimizer $f_t$, which takes the form $f_t(\cdot) = \sum_{\tau=1}^{t}\alpha_\tau k(\cdot, \mathbf{x}_\tau)$, where $(\alpha_1,\dots,\alpha_t) \eqqcolon \boldsymbol{\alpha}_t \in \mathbb{R}^t$ is the minimizer of the strictly convex loss*

$$\mathcal{L}_k^L(\boldsymbol{\alpha}; H_t) = \sum_{\tau=1}^{t} -y_\tau\log\big[s(\boldsymbol{\alpha}^\top\mathbf{k}_t(\mathbf{x}_\tau))\big] - (1-y_\tau)\log\big[1 - s(\boldsymbol{\alpha}^\top\mathbf{k}_t(\mathbf{x}_\tau))\big] + \frac{\lambda}{2}\|\boldsymbol{\alpha}\|_2^2,$$

*with $\mathbf{k}_t(\mathbf{x}) = (k(\mathbf{x}_1,\mathbf{x}),\dots,k(\mathbf{x}_t,\mathbf{x})) \in \mathbb{R}^t$.*
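As an illustration of Proposition 1, the sketch below fits the dual coefficients $\boldsymbol{\alpha}_t$ by plain gradient descent on the parametric loss. The RBF base kernel, step size, and iteration count are arbitrary placeholders, not the configuration used in our experiments.

```python
import jax
import jax.numpy as jnp

def rbf_kernel(X1, X2, lengthscale=0.5):
    """k(x, x') = exp(-||x - x'||^2 / (2 l^2)); stands in for any bounded kernel."""
    sq = jnp.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-0.5 * sq / lengthscale ** 2)

def dual_loss(alpha, K, y, lam):
    """Loss of Proposition 1, using f_t(x_tau) = (K alpha)_tau and
    log s(z) = log_sigmoid(z), log(1 - s(z)) = log_sigmoid(-z)."""
    logits = K @ alpha
    nll = -jnp.sum(y * jax.nn.log_sigmoid(logits)
                   + (1.0 - y) * jax.nn.log_sigmoid(-logits))
    return nll + 0.5 * lam * jnp.sum(alpha ** 2)

def fit_alpha(K, y, lam=1.0, lr=1e-2, steps=2000):
    """Minimize the strictly convex dual loss with gradient descent."""
    alpha = jnp.zeros(K.shape[0])
    grad_fn = jax.jit(jax.grad(dual_loss))
    for _ in range(steps):
        alpha = alpha - lr * grad_fn(alpha, K, y, lam)
    return alpha

# Predict f_t on query points via: rbf_kernel(X_query, X_train) @ alpha.
```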
Given $f_t$, we may predict the expected feedback at a point $\mathbf{x}$ as $s(f_t(\mathbf{x}))$. Centered around this prediction, we construct confidence sets of the form $[s(f_t(\mathbf{x})) \pm \beta_t(\delta)\sigma_t(\mathbf{x})]$ and show their uniform anytime validity. The width of the sets is characterized by $\sigma_t(\mathbf{x})$, defined as

$$\sigma_t^2(\mathbf{x}) \coloneqq k(\mathbf{x},\mathbf{x}) - \mathbf{k}_t^\top(\mathbf{x})\,(K_t + \lambda\kappa I_t)^{-1}\,\mathbf{k}_t(\mathbf{x}), \tag{4}$$

where $\kappa = \sup_{|a|\le B} 1/\dot{s}(a)$, with $\dot{s}(a) = s(a)(1-s(a))$ denoting the derivative of the sigmoid function, and $K_t \in \mathbb{R}^{t\times t}$ is the kernel matrix satisfying $[K_t]_{i,j} = k(\mathbf{x}_i, \mathbf{x}_j)$. Our first main result shows that, for a careful choice of $\beta_t(\delta)$, these sets contain $s(f(\mathbf{x}))$ simultaneously for all $\mathbf{x}\in\mathcal{X}$ and $t\ge1$ with probability greater than $1-\delta$.

**Theorem 2** (Kernelized Logistic Confidence Sequences). *Assume $f \in \mathcal{H}_k$ and $\|f\|_k \le B$. Let $0 < \delta < 1$ and set*

$$\beta_t(\delta) \coloneqq 4LB + 2L\sqrt{\frac{2\kappa}{\lambda}\big(\gamma_t + \log(1/\delta)\big)}, \tag{5}$$

*where $\gamma_t \coloneqq \max_{\mathbf{x}_1,\dots,\mathbf{x}_t}\frac{1}{2}\log\det\big(I_t + (\lambda\kappa)^{-1}K_t\big)$ and $L \coloneqq \sup_{|a|\le B}\dot{s}(a)$. Then*

$$\mathbb{P}\big(\forall t\ge1,\ \mathbf{x}\in\mathcal{X}: |s(f_t(\mathbf{x})) - s(f(\mathbf{x}))| \le \beta_t(\delta)\sigma_t(\mathbf{x})\big) \ge 1-\delta.$$

Function-valued confidence sets around the kernelized ridge estimator are analyzed and used extensively to design bandit algorithms with noisy feedback on the true reward values (valko2013finite; srinivas2009gaussian; chowdhury2017kernelized; whitehouse2023improved). However, under noisy logistic feedback, this literature falls short, as the proposed confidence sets are no longer valid for the kernelized logistic estimator $f_t$. One could still estimate $f$ using a kernelized ridge estimator and benefit from this line of work; however, as empirically demonstrated in Figure 1(a), this is not a sample-efficient approach.

**Proof Sketch.** When minimizing the kernelized logistic loss, we do not have a closed-form solution for $f_t$ and can only characterize it through the fact that the gradient of the loss evaluated at $f_t$ is the null operator, i.e., $\nabla\mathcal{L}(f_t; H_t): \mathcal{H}\to\mathcal{H} = \mathbf{0}$. The key idea of our proof is to construct confidence intervals as $\mathcal{H}$-valued ellipsoids in the gradient space and show that the gradient operator evaluated at $f$ belongs to them with high probability (cf. Lemma 8). We then translate this back into intervals around the point estimates $s(f_t(\mathbf{x}))$, uniformly for all $\mathbf{x}\in\mathcal{X}$. The complete proof is deferred to Appendix A and builds on the results of faury2020improved and whitehouse2023improved.

**Logistic Bandits.** Such confidence sets are an integral tool for action selection under uncertainty, and bandit algorithms often rely on them to balance exploration against exploitation. To demonstrate how Theorem 2 may be used for bandit optimization with direct logistic feedback, we consider LGP-UCB, the kernelized Logistic GP-UCB algorithm. Presented in Algorithm 2, it extends the optimistic algorithm of faury2020improved from the linear to the kernelized setting by using the confidence bound of Theorem 2 to compute an optimistic estimate of the reward. We proceed to show that LGP-UCB attains sublinear logistic regret, commonly defined as

$$R_L(T) = \sum_{t=1}^{T} s(f(\mathbf{x}^\star)) - s(f(\mathbf{x}_t)).$$

To the best of our knowledge, the following corollary presents the first regret bound for logistic bandits in the kernelized setting and may be of independent interest.

**Corollary 3.** *Let $\delta\in(0,1]$ and choose the exploration coefficients $\beta_t(\delta)$ as described in Theorem 2 for all $t\ge0$. Then LGP-UCB satisfies the anytime cumulative regret guarantee*

$$\mathbb{P}\Big(\forall T\ge0: R_L(T) \le \beta_T(\delta)\sqrt{C_L\, T\,\gamma_T}\Big) \ge 1-\delta,$$

*where $C_L \coloneqq 8/\log(1+(\lambda\kappa)^{-1})$.*
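A minimal sketch of how Theorem 2 translates into the action selection of LGP-UCB over a finite candidate grid is given below. The width follows Equation (4), while `beta`, `lam`, and `kappa` are fixed placeholders instead of the schedule of Theorem 2, and `f_t` is assumed to evaluate the estimator of Proposition 1 on the grid.

```python
import jax
import jax.numpy as jnp

def sigma_t(k_fn, X_hist, X_query, lam, kappa):
    """Eq. (4): sigma_t^2(x) = k(x, x) - k_t(x)^T (K_t + lam*kappa*I)^{-1} k_t(x)."""
    K = k_fn(X_hist, X_hist)
    Kq = k_fn(X_hist, X_query)                     # columns are k_t(x), per query point
    A = K + lam * kappa * jnp.eye(K.shape[0])
    var = k_fn(X_query, X_query).diagonal() - jnp.sum(Kq * jnp.linalg.solve(A, Kq), axis=0)
    return jnp.sqrt(jnp.clip(var, 0.0))

def lgp_ucb_action(f_t, k_fn, X_hist, X_grid, beta, lam=1.0, kappa=4.0):
    """Optimistic rule of Algorithm 2: argmax_x s(f_t(x)) + beta * sigma_t(x).
    kappa should be sup_{|a| <= B} 1/s'(a); 4.0 is just its value at B = 0."""
    ucb = jax.nn.sigmoid(f_t(X_grid)) + beta * sigma_t(k_fn, X_hist, X_grid, lam, kappa)
    return X_grid[jnp.argmax(ucb)]
```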
## 5 Main Results: Bandits with Preference Feedback

We return to our main problem setting, in which a pair of actions $\mathbf{x}_t$ and $\mathbf{x}_t'$ is chosen and the observed response is a noisy binary indicator of $\mathbf{x}_t$ yielding a higher utility than $\mathbf{x}_t'$. While this type of feedback is more consistent in practice, it creates a considerably more challenging problem than the logistic problem of Section 4. The search space for action pairs, $\mathcal{X}\times\mathcal{X}$, is significantly larger than $\mathcal{X}$, and the observed preference feedback $s(f(\mathbf{x}_t) - f(\mathbf{x}_t'))$ conveys only relative information between two actions. We start by presenting a solution for estimating $f$ and obtaining valid confidence sets under preference feedback. Using these confidence sets, we then design the MaxMinLCB algorithm, which chooses action pairs that are not only favorable, i.e., yield high utility, but are also informative for improving the utility confidence sets.

### 5.1 Preference-based Confidence Sets

We consider the probabilistic model of $y_t$ as stated in Section 3 and write the corresponding regularized negative log-likelihood loss as

$$\mathcal{L}_k^D(f; H_t) \coloneqq \sum_{\tau=1}^{t} -y_\tau\log\big[s(f(\mathbf{x}_\tau) - f(\mathbf{x}_\tau'))\big] - (1-y_\tau)\log\big[1 - s(f(\mathbf{x}_\tau) - f(\mathbf{x}_\tau'))\big] + \frac{\lambda}{2}\|f\|_k^2. \tag{6}$$

This loss may be optimized over different function classes; it is commonly used for linear dueling bandits (e.g., saha2021optimal) and has been notably successful in reinforcement learning with human feedback (christiano2017deep). We proceed to show that the preference-based loss $\mathcal{L}_k^D$ is equivalent to $\mathcal{L}_{k_D}^L$, the standard logistic loss (3) invoked with a specific kernel function $k_D$. This allows us to cast the problem of inference with preference feedback as a kernelized logistic regression problem. To this end, we define the dueling kernel as

$$k_D\big((\mathbf{x}_1,\mathbf{x}_1'), (\mathbf{x}_2,\mathbf{x}_2')\big) \coloneqq k(\mathbf{x}_1,\mathbf{x}_2) + k(\mathbf{x}_1',\mathbf{x}_2') - k(\mathbf{x}_1,\mathbf{x}_2') - k(\mathbf{x}_1',\mathbf{x}_2)$$

for all $(\mathbf{x}_1,\mathbf{x}_1'), (\mathbf{x}_2,\mathbf{x}_2') \in \mathcal{X}\times\mathcal{X}$, and let $\mathcal{H}_{k_D}$ be the RKHS corresponding to it. While the two function spaces $\mathcal{H}_{k_D}$ and $\mathcal{H}_k$ are defined over different input domains, we can show that they are isomorphic under simple regularity conditions.
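The dueling kernel is straightforward to implement on top of any base kernel; a minimal sketch (the batched pair layout is an assumption of this snippet, not a requirement of the theory):

```python
import jax.numpy as jnp

def dueling_kernel(k, P1, P2):
    """k_D((x1, x1'), (x2, x2')) = k(x1,x2) + k(x1',x2') - k(x1,x2') - k(x1',x2).
    P1, P2: batches of action pairs with shape (n, 2, d)."""
    x1, x1p = P1[:, 0, :], P1[:, 1, :]
    x2, x2p = P2[:, 0, :], P2[:, 1, :]
    return k(x1, x2) + k(x1p, x2p) - k(x1, x2p) - k(x1p, x2)
```

Running the estimator of Proposition 1 with `dueling_kernel` in place of the base kernel then yields the preference-based estimate $h_t$ introduced below.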
**Proposition 4.** *Let $f: \mathcal{X}\to\mathbb{R}$. Consider a kernel $k$ and the sequence of its eigenfunctions $(\phi_i)_{i=1}^\infty$. Assume the eigenfunctions are zero-mean, i.e., $\int_{\mathbf{x}\in\mathcal{X}}\phi_i(\mathbf{x})\,\mathrm{d}\mathbf{x} = 0$. Then $f\in\mathcal{H}_k$ if and only if there exists $h\in\mathcal{H}_{k_D}$ such that $h(\mathbf{x},\mathbf{x}') = f(\mathbf{x}) - f(\mathbf{x}')$. Moreover, $\|h\|_{k_D} = \|f\|_k$.*

The proof is left to Section C.1. The assumption on the eigenfunctions in Proposition 4 is primarily made to simplify the equivalence class. In particular, the relative preference function $h$ can only capture the utility $f$ up to a bias: if a constant bias $b$ is added to all values of $f$, the corresponding function $h$ does not change. The value of $b$ cannot be recovered by drawing queries from $h$; however, this causes no issues in identifying the $\arg\max$ of $f$ through querying values of $h$. Therefore, we set $b = 0$ by assuming that the eigenfunctions of $k$ are zero-mean. This assumption automatically holds for all kernels that are translation or rotation invariant over symmetric domains, since their eigenfunctions are periodic $L^2(\mathcal{X})$ basis functions, e.g., Matérn kernels and sinusoids.

Proposition 4 allows us to re-write the preference-based loss function of (6) as a logistic-type loss

$$\mathcal{L}_{k_D}^L(h; H_t) = \sum_{\tau=1}^{t} -y_\tau\log\big[s(h(\mathbf{x}_\tau,\mathbf{x}_\tau'))\big] - (1-y_\tau)\log\big[1 - s(h(\mathbf{x}_\tau,\mathbf{x}_\tau'))\big] + \frac{\lambda}{2}\|h\|_{k_D}^2,$$

which is equivalent to (3) up to the choice of kernel. We define the minimizer $h_t \coloneqq \arg\min\mathcal{L}_{k_D}^L(h; H_t)$ and use it to construct anytime-valid confidence sets for the utility $f$ given only preference feedback.

**Corollary 5** (Kernelized Preference-based Confidence Sequences). *Assume $f\in\mathcal{H}_k$ and $\|f\|_k \le B$. Choose $0<\delta<1$ and set $\beta_t^D(\delta)$ and $\sigma_t^D$ as in Equations (4) and (5), with $k_D$ used as the kernel function. Then,*

$$\mathbb{P}\big(\forall t\ge1,\ \mathbf{x},\mathbf{x}'\in\mathcal{X}: |s(h_t(\mathbf{x},\mathbf{x}')) - s(f(\mathbf{x}) - f(\mathbf{x}'))| \le \beta_t^D(\delta)\,\sigma_t^D(\mathbf{x},\mathbf{x}')\big) \ge 1-\delta,$$

*where $h_t = \arg\min\mathcal{L}_{k_D}^L(h; H_t)$.*

Corollary 5 gives valid confidence sets for kernelized utility functions under preference feedback and immediately improves prior results on linear dueling bandits, and on kernelized dueling bandits with a regression-type loss, to the kernelized setting with a logistic-type likelihood. To demonstrate this, in Section C.3 we present kernelized extensions of MaxInP (saha2021optimal, Algorithm 3) and IDS (kirschner2021bias, Algorithm 4) and prove the corresponding regret guarantees (cf. Theorems 15 and 16). Corollary 5 holds almost immediately by invoking Theorem 2 with the dueling kernel $k_D$ and applying Proposition 4; a proof is provided in Section C.1 for completeness.

**Comparison to Prior Work.** A line of previous work assumes that both $f$ and the probability $s(f(\mathbf{x}))$ are $B$-bounded members of $\mathcal{H}_k$. This allows them to directly estimate $s(f(\mathbf{x}))$ via kernelized linear regression (xu2020zeroth; mehta2023kernelized; kirschner2021bias). The resulting confidence intervals are then centered around the least-squares estimator, which does not align with the logistic estimator $f_t$. This model does not encode the fact that $s(f(\mathbf{x}))$ only takes values in $[0,1]$, and it assumes a sub-Gaussian distribution for $y_t$ instead of the Bernoulli formulation when calculating the likelihood. Therefore, the resulting algorithms require more samples to learn an accurate reward estimate. In a concurrent work, xu2024principled also consider the loss function of Equation (6) and present likelihood-ratio confidence sets. The width of their sets at time $T$ scales with $\sqrt{T\log\mathcal{N}(\mathcal{H}_k; 1/T)}$, where the second factor is the metric entropy of the $B$-bounded RKHS at resolution $1/T$, that is, the log-covering number of this function class using balls of radius $1/T$. It is known that $\log\mathcal{N}(\mathcal{H}_k; 1/T) \asymp \gamma_T$ as defined in Theorem 2; this may be verified using wainwright2019high and (vakili2021information, Definition 1). Noting the definition of $\beta_t^D$, we see that the likelihood-ratio sets of xu2024principled are wider than those of Corollary 5. Consequently, their regret guarantee is looser by a factor of $T^{1/4}$ compared to our bound in Theorem 6.

**Algorithm 1** MaxMinLCB
Input: $(\beta_t^D)_{t\ge1}$.
for $t\ge1$ do
  Play the most potent pair $(\mathbf{x}_t, \mathbf{x}_t')$ according to the Stackelberg game
  $$\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{M}_t} s\big(h_t(\mathbf{x}, \mathbf{x}'(\mathbf{x}))\big) - \beta_t^D\,\sigma_t^D(\mathbf{x}, \mathbf{x}'(\mathbf{x}))$$
  $$\text{s.t. }\ \mathbf{x}'(\mathbf{x}) = \arg\min_{\mathbf{x}'\in\mathcal{M}_t} s\big(h_t(\mathbf{x}, \mathbf{x}')\big) - \beta_t^D\,\sigma_t^D(\mathbf{x}, \mathbf{x}') \quad\text{and}\quad \mathbf{x}_t' = \mathbf{x}'(\mathbf{x}_t).$$
  Observe $y_t$ and append history.
  Update $h_{t+1}$ and $\sigma_{t+1}^D$ and the set of plausible maximizers
  $$\mathcal{M}_{t+1} = \big\{\mathbf{x}\in\mathcal{X} \mid \forall\mathbf{x}'\in\mathcal{X}: s(h_{t+1}(\mathbf{x},\mathbf{x}')) + \beta_{t+1}^D\,\sigma_{t+1}^D(\mathbf{x},\mathbf{x}') \ge 0.5\big\}.$$
end for

### 5.2 Action Selection Strategy

Let $\mathbf{x}'(\mathbf{x}) = \arg\min_{\mathbf{x}'\in\mathcal{M}_t}\mathrm{LCB}_t(\mathbf{x},\mathbf{x}')$ denote a response function. We propose MaxMinLCB (Algorithm 1) for the preference-feedback bandit problem, which selects $\mathbf{x}_t$ and $\mathbf{x}_t'$ jointly as

$$\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{M}_t}\mathrm{LCB}_t\big(\mathbf{x},\mathbf{x}'(\mathbf{x})\big) \ \text{(Leader)}, \qquad \mathbf{x}_t' = \mathbf{x}'(\mathbf{x}_t) \ \text{(Follower)}, \tag{7}$$

where the lower confidence bound $\mathrm{LCB}_t(\mathbf{x},\mathbf{x}') = s(h_t(\mathbf{x},\mathbf{x}')) - \beta_t^D\,\sigma_t^D(\mathbf{x},\mathbf{x}')$ is a pessimistic estimate of $h$, and $\mathcal{M}_t = \{\mathbf{x}\in\mathcal{X} \mid \forall\mathbf{x}'\in\mathcal{X}: s(h_t(\mathbf{x},\mathbf{x}')) + \beta_t^D\,\sigma_t^D(\mathbf{x},\mathbf{x}') \ge 0.5\}$ is the set of potentially optimal actions. The second action is chosen as $\mathbf{x}_t' = \mathbf{x}'(\mathbf{x}_t)$.
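Over a finite candidate grid, the Stackelberg equilibrium of Equation (7) can be computed exactly by enumeration. A sketch, assuming precomputed matrices `H[i, j] = h_t(x_i, x_j)` and `S[i, j] = sigma_t^D(x_i, x_j)` (hypothetical names for this snippet):

```python
import jax
import jax.numpy as jnp

def maxminlcb_pair(H, S, beta):
    """Return indices (i, j) of the Leader-Follower pair of Eq. (7)."""
    lcb = jax.nn.sigmoid(H) - beta * S              # LCB_t(x, x') for every pair
    ucb = jax.nn.sigmoid(H) + beta * S
    plausible = jnp.all(ucb >= 0.5, axis=1)         # membership in M_t
    # Follower: best response x'(x), minimizing the LCB over x' in M_t.
    follower = jnp.where(plausible[None, :], lcb, jnp.inf).argmin(axis=1)
    # Leader: maximize LCB_t(x, x'(x)) over x in M_t, anticipating the Follower.
    leader_val = lcb[jnp.arange(lcb.shape[0]), follower]
    i = jnp.where(plausible, leader_val, -jnp.inf).argmax()
    return i, follower[i]
```

Over continuous domains, this enumeration would be replaced by the approximate bilevel solvers referenced below.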
Equation (7) forms a zero-sum Stackelberg (Leader-Follower) game in which the actions $\mathbf{x}_t$ and $\mathbf{x}_t'$ are chosen sequentially (stackelberg1952theory). First, the Leader selects $\mathbf{x}_t$; then the Follower selects $\mathbf{x}_t'$ depending on the choice of $\mathbf{x}_t$. Importantly, due to the sequential nature of action selection, the Leader chooses $\mathbf{x}_t$ such that the Follower's action selection function, $\mathbf{x}'(\cdot)$, is accounted for in the selection of $\mathbf{x}_t$. Sequential optimization problems are known to be computationally NP-hard even for linear objectives (jeroslow1985polynomial). However, due to their importance in practical applications, there are algorithms that can efficiently approximate a solution over large domains (sinha2017review; ghadimi2018approximation; dagreou2022framework; camacho2023metaheuristics).

MaxMinLCB builds on a simple insight: if the utility $f$ were known, both the Leader and the Follower would choose $\mathbf{x}^\star$, yielding an objective value of $0.5$ for both players and zero dueling regret. Since MaxMinLCB has no access to $f$, it leverages the confidence sets of Corollary 5 and takes a pessimistic approach by considering the LCB instead. Two properties of the Follower are crucial in this game. First, the Follower cannot do worse than the Leader with respect to $\mathrm{LCB}_t$: in any scenario, the Follower can match the Leader's action, which results in $\mathrm{LCB}_t(\mathbf{x}_t, \mathbf{x}_t') = 0.5$. Second, for sufficiently tight confidence sets, the Follower will not select sub-optimal actions. In this case, the Leader's best action must be optimal, as it anticipates the Follower's response, and Equation (7) recovers the optimal actions. Therefore, the objective value of the game in Equation (7) is always less than, or equal to, the objective of the game with known utility function $f$, i.e., $\mathrm{LCB}_t(\mathbf{x}_t, \mathbf{x}_t') \le 0.5 = s(f(\mathbf{x}^\star) - f(\mathbf{x}^\star))$, and the gap shrinks with the confidence sets. Overall, the Stackelberg game in Equation (7) can be considered a lower approximation of the game played with the known utility function $f$.

The primary challenge for MaxMinLCB is to sample action pairs that sufficiently shrink the confidence sets around the optimal actions without accumulating too much regret. MaxMinLCB balances this exploration-exploitation trade-off naturally through its game-theoretic formulation. The selection of $\mathbf{x}_t$ is exploitative: it tries to maximize the unknown utility $f(\mathbf{x}_t)$ and minimize regret. On the other hand, $\mathbf{x}_t'$ is chosen to be the most competitive opponent to $\mathbf{x}_t$, i.e., it tests whether the condition $\mathrm{LCB}_t(\mathbf{x}_t, \mathbf{x}_t') \ge 0.5$ holds.
Note that $\mathrm{LCB}_t$ is pessimistic with respect to $\mathbf{x}_t$, making the selection robust against the uncertainty in the confidence set estimation. At the same time, $\mathrm{LCB}_t$ is an optimistic estimate from the perspective of $\mathbf{x}_t'$, encouraging exploration. In our main theoretical result, we prove that, under the assumptions of Corollary 5, MaxMinLCB achieves sublinear regret on the dueling bandit problem.

**Theorem 6.** *Suppose the utility function $f$ lies in $\mathcal{H}_k$ with a norm bounded by $B$, and that the kernel $k$ satisfies the assumption of Proposition 4. Let $\delta\in(0,1]$ and choose the exploration coefficient $\beta_t^D(\delta)$ as in Corollary 5. Then MaxMinLCB satisfies the anytime dueling regret guarantee*

$$\mathbb{P}\Big(\forall T\ge0: R_D(T) \le \beta_T^D(\delta)\sqrt{C_3\,T\,\gamma_T^D} = \mathcal{O}\big(\gamma_T^D\sqrt{T}\big)\Big) \ge 1-\delta,$$

*where $\gamma_T^D$ is the $T$-step information gain of the kernel $k_D$ and $C_3 = (8+2\kappa)/\log(1+4(\lambda\kappa)^{-1})$.*

The proof is left to Section C.2. The information gain $\gamma_T^D$ in Theorem 6 quantifies the structural complexity of the RKHS corresponding to $k_D$, and its dependence on $T$ is fairly well understood for kernels commonly used in applications of bandit optimization. As an example, for a Matérn kernel of smoothness $\nu$ defined over a $d$-dimensional domain, $\gamma_T = \tilde{\mathcal{O}}(T^{d/(2\nu+d)})$ (Remark 2, vakili2021information), and the corresponding regret bound grows sublinearly with $T$.
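As a quick check, assuming the dueling kernel inherits the Matérn rate, plugging the bound into Theorem 6 gives

$$R_D(T) = \mathcal{O}\big(\gamma_T^D\sqrt{T}\big) = \tilde{\mathcal{O}}\Big(T^{\frac{d}{2\nu+d}+\frac{1}{2}}\Big) = \tilde{\mathcal{O}}\Big(T^{\frac{2\nu+3d}{4\nu+2d}}\Big),$$

which is sublinear in $T$ whenever $\nu > d/2$, i.e., for sufficiently smooth kernels.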
Restricting the optimization domain to $\mathcal{M}_t \subset \mathcal{X}$ is common in the literature (zoghi2014relative; saha2021optimal), despite being challenging in applications with large or continuous domains. We conjecture that MaxMinLCB would enjoy similar regret guarantees without restricting the selection domain to $\mathcal{M}_t$ as done in Equation (7). This claim is supported by our experiments in Section 6.2, which are carried out without such a restriction on the optimization domain.

Figure 1: Regret of learning the Ackley function with (a) logistic and (b) preference feedback. (a): the same UCB algorithm, each instance using a different confidence set; LGP-UCB performs best, showcasing the power of Theorem 2. (b): algorithms with different acquisition functions, all using our confidence sets; MaxMinLCB is more sample-efficient.

## 6 Experiments

Our experiments are on finding the maxima of test functions commonly used in the (non-convex) optimization literature (jamil2013literature), given only preference feedback. These functions cover challenging optimization landscapes, including several local optima, plateaus, and valleys, allowing us to test the versatility of MaxMinLCB. We use the Ackley function for illustration in the main text and provide the regret plots for the remaining functions in Appendix E. For all experiments, we set the horizon $T = 2000$ and evaluate all algorithms on a uniform mesh of size $100$ over the input domain. Additionally, we conduct experiments on the Yelp restaurant review dataset to demonstrate the applicability of MaxMinLCB to real-world data and its scaling to larger domains. All experiments are run across $20$ random seeds, and reported values are averages over the seeds, together with standard errors. The environments and algorithms are implemented end-to-end in JAX (jax2018github).

### 6.1 Benchmarking Confidence Sets

The performance of MaxMinLCB relies on the validity and tightness of the LCB. We evaluate the quality of our kernelized confidence bounds using the potentially simpler task of bandit optimization given logistic feedback. To this end, we fix the acquisition function of the logistic bandit algorithms to the Upper Confidence Bound (UCB) and benchmark different methods for calculating the confidence bound. We refer to the algorithm instantiated with the confidence sets of Theorem 2 as LGP-UCB (cf. Algorithm 2). The Ind-UCB approach assumes that actions are uncorrelated and maintains an independent confidence interval for each action, as in lattimore2020bandit; this demonstrates how LGP-UCB utilizes the correlation between actions. We also implement Log-UCB1 (faury2020improved), which assumes that $f$ is a linear function, i.e., $f(\mathbf{x}) = \boldsymbol{\theta}^\top\mathbf{x}$, to highlight the improvements gained by kernelization. Last, we compare LGP-UCB with GP-UCB (srinivas2009gaussian), which estimates the probabilities $s(f(\cdot))$ via kernelized ridge regression. This comparison highlights the benefits of our kernelized logistic estimator (Proposition 1) over regression-based approaches (xu2020zeroth; kirschner2021bias; mehta2023kernelized; mehta2023sample). Figure 1(a) shows that the cumulative regret of LGP-UCB is the lowest among the baselines. GP-UCB performs closest to LGP-UCB; however, it accumulates regret linearly during the initial steps. Note that GP-UCB and LGP-UCB differ in the estimation of the utility function $f_t$ while estimating the width of the confidence bounds similarly. This result suggests that using the logistic-type loss (3) to infer the utility function is advantageous. As expected, Ind-UCB converges at a slower rate than LGP-UCB and GP-UCB, due to ignoring the correlation between arms, while Log-UCB1's regret grows linearly, as the Ackley function is misspecified under the linearity assumption. We defer results on the remaining utility functions to Table 2 in Appendix E and the figures therein.

### 6.2 Benchmarking Acquisition Functions

Table 1: Benchmarking $R_D(T)$ for a variety of test utility functions, $T = 2000$. The top 3 rows show results for smoother functions without steep gradients and local optima, while the bottom 5 rows show results for more challenging problems.

| $f$ | MaxMinLCB | Doubler | MultiSBM | MaxInP | RUCB | IDS |
|---|---|---|---|---|---|---|
| Branin | **104 ± 13** | 114 ± 9 | **89 ± 13** | 340 ± 2 | **101 ± 14** | 163 ± 22 |
| Matyas | 125 ± 5 | 136 ± 4 | **106 ± 7** | 136 ± 6 | **106 ± 6** | 128 ± 5 |
| Rosenbrock | **27 ± 4** | 44 ± 12 | **25 ± 5** | 109 ± 2 | 58 ± 7 | 58 ± 13 |
| Ackley | **56 ± 2** | 72 ± 2 | 65 ± 0.5 | 120 ± 1 | 84 ± 0.7 | 111 ± 9 |
| Eggholder | **113 ± 6** | 154 ± 4 | 134 ± 3 | 230 ± 34 | 213 ± 40 | 141 ± 12 |
| Hoelder | **141 ± 26** | **154 ± 3** | **136 ± 15** | 204 ± 20 | 200 ± 28 | **132 ± 15** |
| Michalewicz | **138 ± 14** | 183 ± 11 | **155 ± 10** | 260 ± 40 | 269 ± 46 | 188 ± 21 |
| Yelp | **175 ± 22** | 263 ± 28 | **199 ± 25** | 409 ± 15 | 214 ± 22 | 255 ± 22 |

In this section, we compare MaxMinLCB with other utility-based bandit algorithms. To isolate the benefits of our acquisition function, we instantiate all algorithms with the same confidence sets, using our improved preference-based bound of Corollary 5. Therefore, our implementation differs from the corresponding references, while we refer to the algorithms by their original names. We consider the following baselines.
Doubler and MultiSBM (ailon2014reducing) choose $\mathbf{x}_t$ as a reference action from the recent history of actions and pair it with the $\mathbf{x}_t'$ that maximizes the joint UCB (cf. Algorithms 5 and 6). RUCB (zoghi2014relative) chooses $\mathbf{x}_t'$ similarly; however, it selects the reference action uniformly at random from $\mathcal{M}_t$ (Algorithm 7). MaxInP (saha2021optimal) also maintains the set of plausible maximizers $\mathcal{M}_t$ and, at each time step, selects the pair of actions that maximizes $\sigma_t^D(\mathbf{x},\mathbf{x}')$ (Algorithm 3). IDS (kirschner2021bias) selects the reference action greedily by maximizing $f_t$ and pairs it with an informative action (Algorithm 4). Notably, all algorithms, with the exception of MaxInP, choose one of the actions independently and use it as a reference point when selecting the other. Figure 3 illustrates the differences in action selection between the UCB, maximum-information, and MaxMinLCB approaches. We note that POP-BO (xu2024principled) and MultiSBM differ only in the estimation of the confidence set. Since we deploy the same confidence set for all acquisition functions, the two algorithms are equivalent here; we use MultiSBM in our results, but the comparisons hold for POP-BO as well.

Figure 1(b) benchmarks the algorithms using the Ackley utility function, where MaxMinLCB outperforms the baselines. All algorithms suffer close-to-linear regret during the initial stages of learning, suggesting an inevitable exploration phase. Notably, MaxMinLCB, IDS, and Doubler are the first to select actions with high utility, while RUCB and MaxInP explore for longer. Table 1 shows the dueling regret for all utility functions. MaxMinLCB consistently outperforms the baselines across the analyzed functions and achieves a low standard error, supporting its efficiency in balancing exploration and exploitation in the preference feedback setting. Only MultiSBM shows performance comparable to MaxMinLCB, and it even outperforms MaxMinLCB on the Matyas function, a relatively smooth function posing a simple optimization problem. However, MaxMinLCB achieves lower regret on the Ackley and Eggholder functions, which have many local optima and steeper gradients, showing that MaxMinLCB is preferable for challenging optimization problems. Other acquisition functions work well only in certain cases; e.g., IDS achieves the smallest regret for optimizing Matyas, while RUCB excels on the Branin function. This reflects the distinct challenges each utility function poses: the performance of the action-selection rule is task-dependent. The consistent performance of MaxMinLCB demonstrates its robustness to the underlying unknown utility function.

### 6.3 Real-world Experiment

To demonstrate the scalability and applicability of MaxMinLCB, we conduct an experiment on the Yelp dataset of restaurant reviews, which consists of $275$ restaurants and $20$ users after the pre-processing stage. For each user, we want to find the restaurants that best fit their preferences by making sequential recommendations and receiving comparative feedback. We define the action space $\mathcal{X}$ by assigning to each restaurant the $32$-dimensional embedding of its reviews, i.e., $\mathcal{X}\subseteq\mathbb{R}^{32}$. The dataset provides utility values for users in the form of ratings on a scale of $1$ to $5$; however, not all users rated every restaurant. We estimate missing ratings using collaborative filtering (schafer2007collaborative).
Further details on the data processing are deferred to Section D.1. Figure 2 shows that the results of this larger problem align with our previous conclusions: MaxMinLCB remains the best-performing algorithm, with MultiSBM following second. The cumulative regret is also reported in Table 1. Note that none of the algorithms is tuned or modified for this experiment. These results are intended to demonstrate that 1) the computations scale easily, and 2) the kernelized approach remains applicable in a text-based domain when using high-quality vector embeddings.

Figure 2: MaxMinLCB is more sample-efficient when making restaurant recommendations based on the Yelp open dataset with preference feedback. All baselines use the confidence sets of Corollary 5.

## 7 Conclusion

We addressed the problem of bandit optimization with preference feedback over large domains and complex targets. We proposed MaxMinLCB, which takes a game-theoretic approach to the problem of action selection under comparative feedback and naturally balances exploration and exploitation by constructing a zero-sum Stackelberg game between the action pairs. MaxMinLCB achieves sublinear regret for kernelized utilities and performs competitively across a range of experiments. Lastly, by uncovering the equivalence of learning with logistic and comparative feedback, we propose kernelized preference-based confidence sets, which may be employed in adjacent problems such as reinforcement learning with human feedback. The technical setup considered in this work serves as a foundation for a number of applications in mechanism design, such as preference elicitation and welfare optimization from multiple feedback sources in social choice theory, which we leave as future work.

**Acknowledgments and Disclosure of Funding.** We thank Scott Sussex for his thorough feedback. This research was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, Grant agreement No. 815943. Barna Pásztor was supported by an ETH AI Center doctoral fellowship, and Parnian Kassraie by a Google PhD Fellowship.

## Appendix A: Proofs for Bandits with Logistic Feedback

We have written the equations in the main text in terms of kernel matrices and function evaluations for easier readability. For the purpose of the proofs, however, we mainly rely on entities in the Hilbert space. Consider the operator $\phi: \mathcal{X}\to\mathcal{H}$ which corresponds to the kernel $k$ and satisfies $k(\mathbf{x},\cdot) = \phi(\mathbf{x})$. Then, by Mercer's theorem, any $f\in\mathcal{H}_k$ may be written as $f = \boldsymbol{\theta}^\top\phi$, where $\boldsymbol{\theta}\in\ell^2(\mathbb{N})$ has a $B$-bounded $\ell^2$ norm. For a sequence of points $\mathbf{x}_1,\dots,\mathbf{x}_t\in\mathcal{X}$, we define the (infinite-dimensional) feature map $\Phi_t = [\phi(\mathbf{x}_1),\dots,\phi(\mathbf{x}_t)]^\top$, which gives rise to the kernel matrix $K_t: \mathbb{R}^t\to\mathbb{R}^t$ and the covariance operator $S_t: \mathcal{H}_k\to\mathcal{H}_k$, defined respectively as $K_t = \Phi_t\Phi_t^\top$ and $S_t = \Phi_t^\top\Phi_t$. Let $I_t$ denote the $t$-dimensional identity matrix and $I_\mathcal{H}$ the identity operator acting on the RKHS. Then it is widely known that the covariance and kernel operators are connected via $\det(I_\mathcal{H} + \rho^{-2}S_t) = \det(I_t + \rho^{-2}K_t)$ for any $t\ge1$ and $\rho\ne0$. For operators on the Hilbert space, $\det(A)$ refers to a Fredholm determinant (cf. lax2002functional). Lastly, in the appendix we refer to the true unknown utility function as $f^\star(\mathbf{x}) = \phi^\top(\mathbf{x})\boldsymbol{\theta}^\star$; in the main text, the true utility is simply referred to as $f$.
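In finite dimensions, the determinant identity above is easy to verify numerically; a minimal sketch with a random explicit feature map (all sizes and constants arbitrary):

```python
import jax
import jax.numpy as jnp

t, d, rho = 5, 3, 0.7                                   # arbitrary sizes and scale
Phi = jax.random.normal(jax.random.PRNGKey(0), (t, d))  # rows: phi(x_1), ..., phi(x_t)

K = Phi @ Phi.T                                   # kernel matrix       K_t = Phi Phi^T
S = Phi.T @ Phi                                   # covariance operator S_t = Phi^T Phi

lhs = jnp.linalg.det(jnp.eye(d) + S / rho ** 2)   # det(I_H + rho^{-2} S_t)
rhs = jnp.linalg.det(jnp.eye(t) + K / rho ** 2)   # det(I_t + rho^{-2} K_t)
assert jnp.allclose(lhs, rhs, rtol=1e-4)          # Sylvester's determinant identity
```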
To analyze our function-valued confidence sequences, we start by re-writing the logistic loss function

$$\mathcal{L}(\boldsymbol{\theta}; H_t) = \sum_{\tau=1}^{t} -y_\tau\log s(\boldsymbol{\theta}^\top\phi(\mathbf{x}_\tau)) - \sum_{\tau=1}^{t}(1-y_\tau)\log\big(1 - s(\boldsymbol{\theta}^\top\phi(\mathbf{x}_\tau))\big) + \frac{\lambda}{2}\|\boldsymbol{\theta}\|_2^2,$$

which is strictly convex in $\boldsymbol{\theta}$ and has a unique minimizer $\boldsymbol{\theta}_t$ satisfying

$$\nabla\mathcal{L}(\boldsymbol{\theta}_t; H_t) = -\sum_{\tau=1}^{t} y_\tau\phi(\mathbf{x}_\tau) + g_t(\boldsymbol{\theta}_t) = 0,$$

where $g_t(\boldsymbol{\theta}): \mathcal{H}\to\mathcal{H}$ is a linear operator defined as

$$g_t(\boldsymbol{\theta}) \coloneqq \sum_{\tau=1}^{t}\phi(\mathbf{x}_\tau)\,s(\boldsymbol{\theta}^\top\phi(\mathbf{x}_\tau)) + \lambda\boldsymbol{\theta}.$$

In the main text, we assumed that the minimizer of $\mathcal{L}$ satisfies the norm-boundedness condition. Here we present a more rigorous analysis which does not assume so. Instead, we work with a projected estimator defined via

$$\boldsymbol{\theta}_t^P = \arg\min_{\|\boldsymbol{\theta}\|_2\le B}\|g_t(\boldsymbol{\theta}) - g_t(\boldsymbol{\theta}_t)\|_{V_t^{-1}}, \tag{8}$$

where $V_t = S_t + \kappa\lambda I_\mathcal{H}$ and $\boldsymbol{\theta}_t$ is the minimizer of $\mathcal{L}(\boldsymbol{\theta}; H_t)$. Roughly put, $\boldsymbol{\theta}_t^P\in\ell^2(\mathbb{N})$ is a norm-$B$-bounded alternative to $\boldsymbol{\theta}_t$ which also keeps $\nabla\mathcal{L}$ small, and is therefore expected to yield an accurate decision boundary. We present our proofs in terms of $\boldsymbol{\theta}_t^P$. This also proves the results in the main text, since $\boldsymbol{\theta}_t^P = \boldsymbol{\theta}_t$ whenever $\boldsymbol{\theta}_t$ itself happens to have a $B$-bounded norm, as assumed in the main text.

Our analysis relies on a concentration bound for $\mathcal{H}$-valued martingale sequences, stated in abbasi2013online and later in whitehouse2023improved. Below, we have adapted the statement to match our notation.

**Lemma 7** (Corollary 1, whitehouse2023improved). *Suppose the sequence $(\mathbf{x}_t)_{t\ge1}$ is $(\mathcal{F}_t)_{t\ge1}$-adapted, where $\mathcal{F}_t \coloneqq \sigma(\mathbf{x}_1,\dots,\mathbf{x}_t,\varepsilon_1,\dots,\varepsilon_{t-1})$ and the $\varepsilon_t$ are i.i.d. zero-mean $\sigma$-sub-Gaussian noise. Consider the RKHS $\mathcal{H}$ corresponding to a kernel $k(\mathbf{x},\mathbf{x}') = \phi^\top(\mathbf{x})\phi(\mathbf{x}')$. Then, for any $\rho>0$ and $\delta\in(0,1)$, with probability at least $1-\delta$, simultaneously for all $t\ge0$,*

$$\Big\|\sum_{\tau\le t}\varepsilon_\tau\phi(\mathbf{x}_\tau)\Big\|_{V_t^{-1}} \le \sigma\sqrt{2\log\Big(\frac{1}{\delta}\sqrt{\det(I_t + \rho^{-2}K_t)}\Big)},$$

*where $V_t = S_t + \rho^2 I_\mathcal{H}$.*

The following lemma, which extends faury2020improved to $\mathcal{H}$-valued operators, expresses the closeness of $\boldsymbol{\theta}_t$ and $\boldsymbol{\theta}^\star$ in the gradient space, with respect to the norm of the covariance operator.

**Lemma 8** (Gradient-Space Confidence Bounds). *Set $0<\delta<1$. Under the assumptions of Theorem 2,*

$$\mathbb{P}\Big(\forall t\ge0: \|g_t(\boldsymbol{\theta}_t) - g_t(\boldsymbol{\theta}^\star)\|_{V_t^{-1}} \le \tfrac{1}{2}\sqrt{2\log(1/\delta) + 2\gamma_t} + \sqrt{\tfrac{\lambda}{\kappa}}B\Big) \ge 1-\delta,$$

*where $V_t = S_t + \kappa\lambda I_\mathcal{H}$.*

*Proof of Lemma 8.*
Recall that $g_t(\boldsymbol{\theta}) = \sum_{\tau\le t} s(\boldsymbol{\theta}^\top\phi(\mathbf{x}_\tau))\phi(\mathbf{x}_\tau) + \lambda\boldsymbol{\theta}$. Then it is straightforward to show that

$$\nabla\mathcal{L}(\boldsymbol{\theta}; H_t) = -\sum_{\tau\le t} y_\tau\phi(\mathbf{x}_\tau) + g_t(\boldsymbol{\theta}).$$

Since $\boldsymbol{\theta}_t$ is a minimizer of $\mathcal{L}_t$, it holds that $g_t(\boldsymbol{\theta}_t) = \sum_{\tau\le t} y_\tau\phi(\mathbf{x}_\tau)$. This allows us to write

$$\|g_t(\boldsymbol{\theta}_t) - g_t(\boldsymbol{\theta}^\star)\|_{V_t^{-1}} = \Big\|\sum_{\tau\le t}\big(y_\tau - s(\phi^\top(\mathbf{x}_\tau)\boldsymbol{\theta}^\star)\big)\phi(\mathbf{x}_\tau) - \lambda\boldsymbol{\theta}^\star\Big\|_{V_t^{-1}} \le \Big\|\sum_{\tau\le t}\varepsilon_\tau\phi(\mathbf{x}_\tau)\Big\|_{V_t^{-1}} + \lambda\|\boldsymbol{\theta}^\star\|_{V_t^{-1}}, \tag{9}$$

where $\varepsilon_\tau = y_\tau - s(\phi^\top(\mathbf{x}_\tau)\boldsymbol{\theta}^\star)$ is, due to the data model, a history-dependent, conditionally zero-mean random variable supported on an interval of length one. To bound the first term, we recognize that any such random variable is $\sigma$-sub-Gaussian with $\sigma = 0.5$, and apply Lemma 7 with $\rho^2 = \lambda\kappa$. We obtain that, for all $t\ge0$, with probability greater than $1-\delta$,

$$\Big\|\sum_{\tau\le t}\varepsilon_\tau\phi(\mathbf{x}_\tau)\Big\|_{V_t^{-1}} \le \frac{1}{2}\sqrt{2\log\Big(\frac{1}{\delta}\sqrt{\det\big(I_t + (\lambda\kappa)^{-1}K_t\big)}\Big)} \le \frac{1}{2}\sqrt{2\log(1/\delta) + 2\gamma_t},$$

since $\gamma_t = \sup_{\mathbf{x}_1,\dots,\mathbf{x}_t}\frac{1}{2}\log\det(I_t + (\lambda\kappa)^{-1}K_t)$. To bound the second term in (9), note that $S_t = \Phi_t^\top\Phi_t$ is PSD and therefore $V_t \succeq \kappa\lambda I_\mathcal{H}$. Then

$$\lambda\|\boldsymbol{\theta}^\star\|_{V_t^{-1}} \le \frac{\lambda}{\sqrt{\lambda\kappa}}\|\boldsymbol{\theta}^\star\|_2 \le \sqrt{\frac{\lambda}{\kappa}}B,$$

concluding the proof. ∎

The following lemma shows the validity of our parameter-space confidence sets.

**Lemma 9.** *Set $0<\delta<1$ and consider the confidence sets*

$$\Theta_t(\delta) \coloneqq \Big\{\|\boldsymbol{\theta}\|_2\le B,\ \|\boldsymbol{\theta} - \boldsymbol{\theta}_t^P\|_{V_t} \le 2\sqrt{\lambda\kappa}\,B + \kappa\sqrt{2\log(1/\delta) + 2\gamma_t}\Big\}.$$

*Then,*

$$\mathbb{P}\big(\forall t\ge0: \boldsymbol{\theta}^\star\in\Theta_t(\delta)\big) \ge 1-\delta.$$

*Proof of Lemma 9.* We check that $\boldsymbol{\theta}^\star\in\Theta_t(\delta)$ by bounding the following norm:

$$\begin{aligned}
\|\boldsymbol{\theta}^\star - \boldsymbol{\theta}_t^P\|_{V_t} &\le \kappa\,\|g_t(\boldsymbol{\theta}^\star) - g_t(\boldsymbol{\theta}_t^P)\|_{V_t^{-1}} && \text{(Lem. 12)}\\
&\le \kappa\big(\|g_t(\boldsymbol{\theta}^\star) - g_t(\boldsymbol{\theta}_t)\|_{V_t^{-1}} + \|g_t(\boldsymbol{\theta}_t) - g_t(\boldsymbol{\theta}_t^P)\|_{V_t^{-1}}\big)\\
&\le 2\kappa\,\|g_t(\boldsymbol{\theta}^\star) - g_t(\boldsymbol{\theta}_t)\|_{V_t^{-1}} && \text{(Eq. 8)}\\
&\le \kappa\sqrt{2\log(1/\delta) + 2\gamma_t} + 2\sqrt{\lambda\kappa}\,B. && \text{(Lem. 8)}
\end{aligned}$$
∎

Lastly, we prove a formal extension of Theorem 2.

**Theorem 10** (Theorem 2, formal). *Set $0<\delta<1$ and consider the confidence sets $\mathcal{E}_t(\delta)\subset\mathcal{H}_k$,*

$$\mathcal{E}_t(\delta) = \{f(\cdot) = \boldsymbol{\theta}^\top\phi(\cdot): \boldsymbol{\theta}\in\Theta_t(\delta)\}.$$

*Then, simultaneously for all $\mathbf{x}\in\mathcal{X}$, $f\in\mathcal{E}_t(\delta)$, and $t\ge0$,*

$$|s(f(\mathbf{x})) - s(f^\star(\mathbf{x}))| \le \beta_t(\delta)\sigma_t(\mathbf{x})$$

*with probability greater than $1-\delta$, where*

$$\beta_t(\delta) \coloneqq 4LB + 2L\sqrt{\frac{\kappa}{\lambda}}\sqrt{2\log(1/\delta) + 2\gamma_t}.$$

*Proof of Theorem 10.* For simplicity of notation, let us define

$$\tilde{\beta}_t(\delta) \coloneqq 2\sqrt{\lambda\kappa}\,B + \kappa\sqrt{2\log(1/\delta) + 2\gamma_t}.$$

Suppose $f = \boldsymbol{\theta}^\top\phi(\cdot)$ is in $\mathcal{E}_t(\delta)$.
Then

$$\begin{aligned}
|s(\phi^\top(\mathbf{x})\boldsymbol{\theta}^\star) - s(\phi^\top(\mathbf{x})\boldsymbol{\theta})| &= |\alpha(\mathbf{x};\boldsymbol{\theta},\boldsymbol{\theta}^\star)\,\phi^\top(\mathbf{x})(\boldsymbol{\theta} - \boldsymbol{\theta}^\star)| && \text{(Lem. 11)}\\
&\le L\,|\phi^\top(\mathbf{x})(\boldsymbol{\theta} - \boldsymbol{\theta}^\star)| && (s\text{ is }L\text{-Lipschitz})\\
&\le L\,\|\phi(\mathbf{x})\|_{V_t^{-1}}\,\|\boldsymbol{\theta} - \boldsymbol{\theta}^\star\|_{V_t}\\
&\le L\,\|\phi(\mathbf{x})\|_{V_t^{-1}}\big(\|\boldsymbol{\theta} - \boldsymbol{\theta}_t^P\|_{V_t} + \|\boldsymbol{\theta}_t^P - \boldsymbol{\theta}^\star\|_{V_t}\big)\\
&\le L\,\|\phi(\mathbf{x})\|_{V_t^{-1}}\big(\tilde{\beta}_t(\delta) + \|\boldsymbol{\theta}_t^P - \boldsymbol{\theta}^\star\|_{V_t}\big) && (\boldsymbol{\theta}\in\Theta_t(\delta))\\
&\le 2L\,\tilde{\beta}_t(\delta)\,\|\phi(\mathbf{x})\|_{V_t^{-1}} && \text{(w.h.p., Lem. 9)}\\
&\le \frac{2L\,\tilde{\beta}_t(\delta)}{\sqrt{\lambda\kappa}}\,\sigma_t(\mathbf{x}) && \text{(Lem. 13)}\\
&= \sigma_t(\mathbf{x})\Big(4LB + 2L\sqrt{\tfrac{\kappa}{\lambda}}\sqrt{2\log(1/\delta) + 2\gamma_t}\Big),
\end{aligned}$$

where the third-to-last inequality holds with probability greater than $1-\delta$, while the remaining steps hold deterministically. ∎

**Algorithm 2** LGP-UCB
Initialize: set $(\beta_t)_{t\ge1}$ according to Theorem 2.
for $t\ge1$ do
  Choose an optimistic action via $\mathbf{x}_t = \arg\max_{\mathbf{x}\in\mathcal{X}} s(f_{t-1}(\mathbf{x})) + \beta_{t-1}(\delta)\,\sigma_{t-1}(\mathbf{x})$.
  Observe $y_t$ and append history.
  Compute $f_t$ according to Proposition 1 and update $\sigma_t$ according to Theorem 2.
end for

Given the confidence sets of Theorem 2, we extend the algorithm of faury2020improved to the kernelized setting (cf. Algorithm 2) and prove that it satisfies sublinear regret.

*Proof of Corollary 3.* Recall that if $\mathbf{x}_t$ is the maximizer of the UCB, then

$$s(\phi^\top(\mathbf{x}^\star)\boldsymbol{\theta}_t^P) - s(\phi^\top(\mathbf{x}_t)\boldsymbol{\theta}_t^P) \le \sigma_t(\mathbf{x}_t)\beta_t(\delta) - \sigma_t(\mathbf{x}^\star)\beta_t(\delta).$$

Then, using Theorem 10, we obtain the following for the regret at step $t$:

$$\begin{aligned}
r_t &= s(\phi^\top(\mathbf{x}^\star)\boldsymbol{\theta}^\star) - s(\phi^\top(\mathbf{x}_t)\boldsymbol{\theta}^\star)\\
&= s(\phi^\top(\mathbf{x}^\star)\boldsymbol{\theta}^\star) - s(\phi^\top(\mathbf{x}^\star)\boldsymbol{\theta}_t^P) + s(\phi^\top(\mathbf{x}_t)\boldsymbol{\theta}_t^P) - s(\phi^\top(\mathbf{x}_t)\boldsymbol{\theta}^\star) + s(\phi^\top(\mathbf{x}^\star)\boldsymbol{\theta}_t^P) - s(\phi^\top(\mathbf{x}_t)\boldsymbol{\theta}_t^P)\\
&\le \sigma_t(\mathbf{x}^\star)\beta_t(\delta) + \sigma_t(\mathbf{x}_t)\beta_t(\delta) + \sigma_t(\mathbf{x}_t)\beta_t(\delta) - \sigma_t(\mathbf{x}^\star)\beta_t(\delta)\\
&\le 2\beta_t(\delta)\,\sigma_t(\mathbf{x}_t),
\end{aligned}$$

with probability greater than $1-\delta$, for all $t\ge0$. This allows us to bound the cumulative regret as

$$\begin{aligned}
R_T = \sum_{t=1}^{T} r_t &\le \sqrt{T\sum_{t=1}^{T} r_t^2}\\
&\le 2\beta_T(\delta)\sqrt{T\sum_{t=1}^{T}\sigma_t^2(\mathbf{x}_t)} && (\beta_t(\delta)\le\beta_T(\delta))\\
&\le \beta_T(\delta)\sqrt{C_1\,T\,\gamma_T}, && \text{(Lem. 14)}
\end{aligned}$$

where $C_1 \coloneqq 8/\log(1+(\lambda\kappa)^{-1})$. ∎

## Appendix B: Helper Lemmas for Appendix A

**Lemma 11** (Mean-Value Theorem). *For any $\mathbf{x}\in\mathcal{X}$ and $\boldsymbol{\theta}_1,\boldsymbol{\theta}_2\in\ell^2(\mathbb{N})$, it holds that*

$$s(\boldsymbol{\theta}_2^\top\phi(\mathbf{x})) - s(\boldsymbol{\theta}_1^\top\phi(\mathbf{x})) = \alpha(\mathbf{x};\boldsymbol{\theta}_1,\boldsymbol{\theta}_2)\,(\boldsymbol{\theta}_2 - \boldsymbol{\theta}_1)^\top\phi(\mathbf{x}),$$

*where*

$$\alpha(\mathbf{x};\boldsymbol{\theta}_1,\boldsymbol{\theta}_2) = \int_0^1 \dot{s}\big(\nu\,\boldsymbol{\theta}_2^\top\phi(\mathbf{x}) + (1-\nu)\,\boldsymbol{\theta}_1^\top\phi(\mathbf{x})\big)\,\mathrm{d}\nu. \tag{10}$$

*Proof of Lemma 11.*
Appendix B Helper Lemmas for Appendix A

Lemma 11 (Mean-Value Theorem). For any $x \in \mathcal{X}$ and $\theta_1, \theta_2 \in \ell^2(\mathbb{N})$ it holds that

$$s(\theta_2^\top\phi(x)) - s(\theta_1^\top\phi(x)) = \alpha(x;\theta_1,\theta_2)\,(\theta_2-\theta_1)^\top\phi(x)$$

where

$$\alpha(x;\theta_1,\theta_2) = \int_0^1 \dot{s}\big(\nu\,\theta_2^\top\phi(x) + (1-\nu)\,\theta_1^\top\phi(x)\big)\,\mathrm{d}\nu. \tag{10}$$

Proof of Lemma 11. For any differentiable function $s:\mathbb{R}\to\mathbb{R}$, by the fundamental theorem of calculus we have

$$s(z_2) - s(z_1) = \int_{z_1}^{z_2} \dot{s}(z)\,\mathrm{d}z.$$

We change the variable of integration to $\nu = (z-z_1)/(z_2-z_1)$, so that $z = \nu z_2 + (1-\nu)z_1$; re-writing the integral in terms of $\nu$ gives

$$s(z_2) - s(z_1) = (z_2-z_1)\int_0^1 \dot{s}\big(\nu z_2 + (1-\nu)z_1\big)\,\mathrm{d}\nu.$$

Letting $z_1 = \theta_1^\top\phi(x)$ and $z_2 = \theta_2^\top\phi(x)$ concludes the proof. ∎

Lemma 12 (Gradients to Parameters Conversion). For all $t \ge 0$ and norm-$B$-bounded $\theta_1, \theta_2 \in \ell^2(\mathbb{N})$,

$$\|\theta_1-\theta_2\|_{V_t} \le \kappa\,\|g_t(\theta_1)-g_t(\theta_2)\|_{V_t^{-1}}.$$

Proof of Lemma 12. We prove the lemma through an auxiliary operator $G_t(\theta_1,\theta_2)$ acting on $\mathcal{H}_k$,

$$G_t(\theta_1,\theta_2) = \lambda I_{\mathcal{H}} + \sum_{\tau\le t} \alpha(x_\tau;\theta_1,\theta_2)\,\phi(x_\tau)\phi^\top(x_\tau)$$

where $\alpha$ is defined in (10).

Step 1. First we establish how to pass between the operator norms defined by $G_t$ and $V_t$. Recall that $\kappa = \sup_{|z|\le B} 1/\dot{s}(z)$. Therefore $\kappa^{-1} \le \dot{s}(z)$ for all $|z| \le B$, implying that $\alpha(x;\theta_1,\theta_2) \ge \int_0^1 \kappa^{-1}\,\mathrm{d}\nu = \kappa^{-1}$. We can then conclude

$$G_t(\theta_1,\theta_2) \succeq \lambda I_{\mathcal{H}} + \sum_{\tau\le t}\kappa^{-1}\phi(x_\tau)\phi^\top(x_\tau) = \kappa^{-1}V_t. \tag{11}$$

Step 2. Now, by the definition of $g_t(\theta)$,

$$\begin{aligned} g_t(\theta_2) - g_t(\theta_1) &= \lambda(\theta_2-\theta_1) + \sum_{\tau\le t}\phi(x_\tau)\big[s(\theta_2^\top\phi(x_\tau)) - s(\theta_1^\top\phi(x_\tau))\big]\\ &= \lambda(\theta_2-\theta_1) + \sum_{\tau\le t}\phi(x_\tau)\big[\alpha(x_\tau;\theta_1,\theta_2)\,\phi^\top(x_\tau)(\theta_2-\theta_1)\big] && \text{(Lem. 11)}\\ &= \Big(\lambda I_{\mathcal{H}} + \sum_{\tau\le t}\alpha(x_\tau;\theta_1,\theta_2)\,\phi(x_\tau)\phi^\top(x_\tau)\Big)(\theta_2-\theta_1)\\ &= G_t(\theta_1,\theta_2)\,(\theta_2-\theta_1). \end{aligned}$$

Therefore,

$$\|g_t(\theta_2)-g_t(\theta_1)\|^2_{G_t^{-1}(\theta_1,\theta_2)} = [g_t(\theta_2)-g_t(\theta_1)]^\top(\theta_2-\theta_1) = (\theta_2-\theta_1)^\top G_t(\theta_2-\theta_1) = \|\theta_2-\theta_1\|^2_{G_t(\theta_1,\theta_2)}. \tag{12}$$

Step 3. Putting the previous two steps together, we can relate the $V_t$-norm of the parameters to the $V_t^{-1}$-norm of the gradients:

$$\begin{aligned} \|\theta_1-\theta_2\|_{V_t} &\overset{(11)}{\le} \sqrt{\kappa}\,\|\theta_1-\theta_2\|_{G_t(\theta_1,\theta_2)}\\ &\overset{(12)}{=} \sqrt{\kappa}\,\|g_t(\theta_1)-g_t(\theta_2)\|_{G_t^{-1}(\theta_1,\theta_2)}\\ &\overset{(11)}{\le} \kappa\,\|g_t(\theta_1)-g_t(\theta_2)\|_{V_t^{-1}} \end{aligned}$$

concluding the proof. ∎
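As a quick numerical sanity check of the mean-value identity (10) used above, the integral defining $\alpha$ can be approximated by quadrature for the logistic sigmoid; the grid size and test points below are illustrative.

```python
import jax.numpy as jnp
from jax.nn import sigmoid

def alpha(z1, z2, n=100_000):
    """Midpoint-rule approximation of alpha = int_0^1 s'(nu*z2 + (1-nu)*z1) d(nu)."""
    nu = (jnp.arange(n) + 0.5) / n
    s = sigmoid(nu * z2 + (1.0 - nu) * z1)
    return jnp.mean(s * (1.0 - s))  # uses s'(z) = s(z)(1 - s(z))

z1, z2 = -1.3, 2.1
print(sigmoid(z2) - sigmoid(z1), alpha(z1, z2) * (z2 - z1))  # agree up to quadrature error
```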
The following two lemmas are standard results in kernelized bandits (e.g., srinivas2009gaussian; chowdhury2017kernelized). We include them here for completeness.

Lemma 13. Let $\sigma_t$ be as defined in (4). Then $\sqrt{\lambda\kappa}\,\|\phi(x)\|_{V_t^{-1}} = \sigma_t(x)$ for any $x \in \mathcal{X}$.

Proof of Lemma 13. We start by stating some identities which will later be of use. First note that

$$(\Phi_t^\top\Phi_t + \lambda\kappa I_{\mathcal{H}})\,\Phi_t^\top = \Phi_t^\top(\Phi_t\Phi_t^\top + \lambda\kappa I_t)$$

which gives

$$\Phi_t^\top(\Phi_t\Phi_t^\top + \lambda\kappa I_t)^{-1} = (\Phi_t^\top\Phi_t + \lambda\kappa I_{\mathcal{H}})^{-1}\Phi_t^\top. \tag{13}$$

Moreover, by the definition of $k_t$ we have

$$k_t(x) = \Phi_t\,\phi(x) \tag{14}$$

which allows us to write

$$(\Phi_t^\top\Phi_t + \lambda\kappa I_{\mathcal{H}})\,\phi(x) = \Phi_t^\top k_t(x) + \lambda\kappa\,\phi(x),$$

and obtain

$$\begin{aligned} \phi(x) &= (\Phi_t^\top\Phi_t + \lambda\kappa I_{\mathcal{H}})^{-1}\Phi_t^\top k_t(x) + \lambda\kappa\,(\Phi_t^\top\Phi_t + \lambda\kappa I_{\mathcal{H}})^{-1}\phi(x)\\ &\overset{(13)}{=} \Phi_t^\top(\Phi_t\Phi_t^\top + \lambda\kappa I_t)^{-1}k_t(x) + \lambda\kappa\,(\Phi_t^\top\Phi_t + \lambda\kappa I_{\mathcal{H}})^{-1}\phi(x). \end{aligned}$$

Given the above equation, we conclude the proof by the following chain of equations:

$$\begin{aligned} k(x,x) &= \phi^\top(x)\phi(x)\\ &= \big(\Phi_t^\top(\Phi_t\Phi_t^\top + \lambda\kappa I_t)^{-1}k_t(x) + \lambda\kappa\,(\Phi_t^\top\Phi_t + \lambda\kappa I_{\mathcal{H}})^{-1}\phi(x)\big)^\top\phi(x)\\ &= k_t^\top(x)(\Phi_t\Phi_t^\top + \lambda\kappa I_t)^{-1}\Phi_t\,\phi(x) + \lambda\kappa\,\phi^\top(x)(\Phi_t^\top\Phi_t + \lambda\kappa I_{\mathcal{H}})^{-1}\phi(x)\\ &\overset{(14)}{=} k_t^\top(x)(K_t + \lambda\kappa I_t)^{-1}k_t(x) + \lambda\kappa\,\phi^\top(x)V_t^{-1}\phi(x). \end{aligned}$$

To obtain the third equation we have used the fact that for bounded operators on Hilbert spaces, the inverse of the adjoint is equal to the adjoint of the inverse (e.g., axler2020measure, Theorem 10.19). Re-ordering the equation above, we obtain $\sigma_t^2(x) = \lambda\kappa\,\|\phi(x)\|^2_{V_t^{-1}}$, concluding the proof. ∎

Lemma 14 (Controlling posterior variance with information gain). For all $T \ge 1$,

$$\sum_{t=1}^T \sigma_t^2(x_t) \le \frac{2\gamma_T}{\log(1+(\lambda\kappa)^{-1})}, \qquad \sum_{t=1}^T \big(\sigma_t^{\mathrm{D}}(x_t)\big)^2 \le \frac{8\gamma_T^{\mathrm{D}}}{\log(1+4(\lambda\kappa)^{-1})}.$$

Proof of Lemma 14. By srinivas2009gaussian,

$$\gamma_T = \max_{x_1,\dots,x_T} \frac{1}{2}\sum_{t=1}^T \log\big(1 + (\lambda\kappa)^{-1}\sigma_{t-1}^2(x_t)\big).$$

Following the technique in srinivas2009gaussian: since $\sigma_t^2 \le 1$, we have $(\lambda\kappa)^{-1}\sigma_t^2 \in [0,(\lambda\kappa)^{-1}]$. Now for any $z \in [0,(\lambda\kappa)^{-1}]$, $z \le C\log(1+z)$ where $C = (\lambda\kappa)^{-1}/\log(1+(\lambda\kappa)^{-1})$. We may then write

$$\begin{aligned} \sum_{t=1}^T \sigma_t^2(x_t) &= \sum_{t=1}^T \lambda\kappa\,(\lambda\kappa)^{-1}\sigma_t^2(x_t)\\ &\le \sum_{t=1}^T \lambda\kappa\, C\log\big(1+(\lambda\kappa)^{-1}\sigma_t^2(x_t)\big)\\ &= \sum_{t=1}^T \frac{\log\big(1+(\lambda\kappa)^{-1}\sigma_t^2(x_t)\big)}{\log(1+(\lambda\kappa)^{-1})}. \end{aligned}$$

Putting both together proves the first inequality of the lemma. As for the dueling case, one can easily check that $\sigma_t^{\mathrm{D}} \le 2$, and a similar argument yields the second inequality. ∎
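Lemma 13 is also easy to verify numerically in finite dimensions, where $\phi$ is an explicit feature map and $V_t = \Phi_t^\top\Phi_t + \lambda\kappa I$. A sketch with random features (all values illustrative):

```python
import jax
import jax.numpy as jnp

d, t, lam, kappa = 5, 8, 1.0, 4.0
Phi = jax.random.normal(jax.random.PRNGKey(0), (t, d))  # rows: phi(x_tau)
phi = jax.random.normal(jax.random.PRNGKey(1), (d,))    # phi(x) at a test point

V = Phi.T @ Phi + lam * kappa * jnp.eye(d)              # V_t
K = Phi @ Phi.T                                         # kernel matrix K_t
k_vec = Phi @ phi                                       # k_t(x)

# sigma_t^2(x) = k(x, x) - k_t(x)^T (K_t + lam*kappa*I)^{-1} k_t(x)
sigma_sq = phi @ phi - k_vec @ jnp.linalg.solve(K + lam * kappa * jnp.eye(t), k_vec)
# Lemma 13: the same quantity equals lam*kappa * ||phi(x)||^2_{V_t^{-1}}
print(sigma_sq, lam * kappa * phi @ jnp.linalg.solve(V, phi))
```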
Appendix C Proofs for Bandits with Preference Feedback

This section presents the proofs of the main results in Section 5, and our additional contributions in the kernelized preference-based setting.

C.1 Equivalence of Preference-based and Logistic Losses

We start by establishing the equivalence between the logistic loss (3) and the dueling loss (6).

Proof of Proposition 4. By Mercer's theorem, we know that the kernel function $k$ has eigenvalue-eigenfunction pairs $(\lambda_i, \tilde\phi_i)$ for $i \ge 1$, where the $\tilde\phi_i$ are orthonormal. Then $k(x,x') = \sum_{i\ge1}\phi_i(x)\phi_i(x')$ with $\phi_i(x) = \sqrt{\lambda_i}\,\tilde\phi_i(x)$. Now applying the definition of $k^{\mathrm{D}}$, it holds that $k^{\mathrm{D}}(z,z') = \sum_{i\ge1}\psi_i(z)\psi_i(z')$ where $\psi_i(z) = \sqrt{\lambda_i}\big(\tilde\phi_i(x) - \tilde\phi_i(x')\big)$ for $z = (x,x')$. It is straightforward to check that the $\psi_i$ are eigenfunctions of $k^{\mathrm{D}}$; however, they may not be orthonormal. We have

$$\langle\psi_i,\psi_i\rangle_{L_2} = 2\lambda_i(1-b_i^2), \qquad \langle\psi_i,\psi_j\rangle_{L_2} = -2\sqrt{\lambda_i\lambda_j}\,b_ib_j \quad (i\ne j)$$

where $b_i = \int \tilde\phi_i(x)\,\mathrm{d}x$. By the assumption of the proposition, we have $b_i = 0$. This assumption holds automatically for many kernels commonly used in applications, e.g., any translation-invariant kernel over many domains, since the $\tilde\phi_i$ of such kernels form a sine basis.

Now since $f \in \mathcal{H}_k$, it may be decomposed as $f = \sum_{i\ge1}\beta_i\phi_i$ with $\|f\|_k^2 = \sum_{i\ge1}\beta_i^2 < \infty$. Set the difference function to $h(x,x') = \sum_{i\ge1}\beta_i\psi_i(z)$. We can then bound the RKHS norm of $h$ w.r.t. the kernel $k^{\mathrm{D}}$ as follows:

$$\begin{aligned} \|h\|^2_{k^{\mathrm{D}}} &= \sum_{i\ge1}\Big(\frac{\langle h,\psi_i\rangle_{L_2}}{\langle\psi_i,\psi_i\rangle_{L_2}}\Big)^2 = \sum_{i\ge1}\Big(\frac{\sum_{j\ge1}\beta_j\langle\psi_j,\psi_i\rangle_{L_2}}{2\lambda_i(1-b_i^2)}\Big)^2\\ &= \sum_{i\ge1}\Big(\beta_i - \frac{b_i}{\sqrt{\lambda_i}\,(1-b_i^2)}\sum_{j\ne i}\beta_j b_j\sqrt{\lambda_j}\Big)^2 \overset{b_i=0}{=} \|f\|_k^2 \le B^2. \end{aligned}$$

Now by Mercer's theorem, $h \in \mathcal{H}_{k^{\mathrm{D}}}$, since it is decomposable as a sum of $k^{\mathrm{D}}$ eigenfunctions and attains a $B$-bounded $k^{\mathrm{D}}$-norm, which we showed to be equal to $\|f\|_k$. The other direction of the statement is proved the same way. ∎

Proof of Corollary 5. Consider the utility function $f$ and define $h(x,x') := f(x) - f(x')$. Then by Proposition 4, $h$ is in the RKHS of $k^{\mathrm{D}}$ with a $k^{\mathrm{D}}$-norm bounded by $B$. We may estimate $h$ by minimizing $\mathcal{L}^{\mathrm{L}}_{k^{\mathrm{D}}}(\cdot\,; H_t)$. Now invoking Theorem 2 with the dueling kernel we have

$$\mathbb{P}\Big(\forall t\ge1,\ x,x'\in\mathcal{X}:\ |s(h_t(x,x')) - s(h(x,x'))| \le \beta_t^{\mathrm{D}}(\delta)\,\sigma_t^{\mathrm{D}}(x,x')\Big) \ge 1-\delta,$$

concluding the proof by the definition of $h$. ∎
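The proof above works with the difference features $\psi(z) = \phi(x) - \phi(x')$ of a pair $z = (x,x')$. A dueling kernel consistent with these features expands into four evaluations of the base kernel; the sketch below assumes this four-term form (the paper's formal definition of $k^{\mathrm{D}}$ is given in the main text) and uses an RBF base kernel, as in the experiments.

```python
import jax.numpy as jnp

def rbf(x, y, ell=1.0):
    """Base kernel k (RBF, as used in the paper's experiments)."""
    return jnp.exp(-jnp.sum((x - y) ** 2) / (2.0 * ell ** 2))

def k_dueling(z, z_prime, ell=1.0):
    """k^D(z, z') = <psi(z), psi(z')> with psi(z) = phi(x) - phi(x').
    Assumed four-term expansion, consistent with the proof of Proposition 4."""
    (x, xp), (y, yp) = z, z_prime
    return rbf(x, y, ell) - rbf(x, yp, ell) - rbf(xp, y, ell) + rbf(xp, yp, ell)
```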
C.2 Proof of the Preference-based Regret Bound

Recall Corollary 5, which states that

$$|s(f(x) - f(x')) - s(h_t(x,x'))| \le \beta_t^{\mathrm{D}}(\delta)\,\sigma_t^{\mathrm{D}}(x,x')$$

with high probability, simultaneously for all $(x,x')$ and $t\ge1$. For simplicity of notation in the rest of this section, we define $\omega_t(x,x') := \beta_t^{\mathrm{D}}(\delta)\,\sigma_t^{\mathrm{D}}(x,x')$ and

$$\mathrm{LCB}_t(x,x') = s(h_t(x,x')) - \omega_t(x,x'), \qquad \mathrm{UCB}_t(x,x') = s(h_t(x,x')) + \omega_t(x,x').$$

Note that $\omega_t(x,x') = \omega_t(x',x)$ by the symmetry of the dueling kernel $k^{\mathrm{D}}$. Furthermore, recall the notation $h(x,x') = f(x) - f(x')$.

Before providing the proof of Theorem 6, we derive two important inequalities. Note that $x_t, x_t' \in \mathcal{M}_t$ implies that

$$\mathrm{UCB}_t(x_t,x_t') \ge 0.5 \quad\Longrightarrow\quad s(h_t(x_t,x_t')) \ge 0.5 - \omega_t(x_t,x_t'). \tag{15}$$

Additionally, note that $\mathrm{LCB}_t(x_t,x_t) = 0.5$ for all $x_t$; therefore, by the definition of $x_t'$, $\mathrm{LCB}_t(x_t,x_t') \le 0.5$. This implies that

$$s(h_t(x_t,x_t')) \le 0.5 + \omega_t(x_t,x_t'). \tag{16}$$

From Equations (15) and (16) it follows that

$$|s(h_t(x_t,x_t')) - 0.5| \le \omega_t(x_t,x_t'); \tag{17}$$

furthermore,

$$\begin{aligned} \mathrm{UCB}_t(x_t,x_t') - 0.5 &= s(h_t(x_t,x_t')) - 0.5 + \omega_t(x_t,x_t')\\ &\le |s(h_t(x_t,x_t')) - 0.5| + \omega_t(x_t,x_t')\\ &\le 2\,\omega_t(x_t,x_t') \end{aligned} \tag{18}$$

and similarly

$$0.5 - \mathrm{LCB}_t(x_t,x_t') \le 2\,\omega_t(x_t,x_t'). \tag{19}$$

From Equations (18) and (19) it follows that

$$|s(f(x_t)-f(x_t')) - 0.5| \le \max\{\mathrm{UCB}_t(x_t,x_t') - 0.5,\ 0.5 - \mathrm{LCB}_t(x_t,x_t')\} \le 2\,\omega_t(x_t,x_t'). \tag{20}$$

We refer to Equations (17) and (20) in the following proof.
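Before the proof, note that with $\mathrm{LCB}_t$ in hand, the MaxMinLCB pair on a finite grid is a direct max-min computation. A minimal sketch, assuming precomputed matrices `h[i, j]` $= h_t(x_i,x_j)$ and `omega[i, j]` $= \omega_t(x_i,x_j)$ (illustrative names), and ignoring the restriction to the plausible-maximizer set $\mathcal{M}_t$ for brevity:

```python
import jax.numpy as jnp
from jax.nn import sigmoid

def maxmin_lcb_pair(h, omega):
    """Stackelberg selection: the leader maximizes the worst-case LCB,
    the follower then minimizes the LCB against the leader's choice."""
    lcb = sigmoid(h) - omega                   # LCB_t(x_i, x_j) for all pairs
    i = int(jnp.argmax(jnp.min(lcb, axis=1)))  # x_t  = argmax_x min_x' LCB_t(x, x')
    j = int(jnp.argmin(lcb[i]))                # x_t' = argmin_x' LCB_t(x_t, x')
    return i, j
```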
Proof of Theorem 6. Step 1: First, we connect the term of $x_t'$ in the dueling regret defined in Equation (1) to that of $x_t$. Note that both $f(x_\star) - f(x_t')$ and $f(x_\star) - f(x_t)$ are nonnegative due to the optimality of $x_\star$, so both sigmoid values are at least $0.5$, and the sigmoid function $s$ is concave on $[0,\infty)$. Using the first-order characterization of concavity, we get

$$\begin{aligned} s(f(x_\star)-f(x_t')) &\le s(f(x_\star)-f(x_t)) + \dot{s}(f(x_\star)-f(x_t))\,\big(f(x_t)-f(x_t')\big)\\ &= s(f(x_\star)-f(x_t)) + s(f(x_\star)-f(x_t))\,s(f(x_t)-f(x_\star))\,\big(f(x_t)-f(x_t')\big)\\ &\le \Big(1 + \frac{h(x_t,x_t')}{2}\Big)\,s(f(x_\star)-f(x_t)) \end{aligned} \tag{21}$$

where the second line comes from the derivative of the sigmoid function, $\dot{s}(x) = s(x)(1-s(x)) = s(x)s(-x)$, and in the last line we use $s(f(x_t)-f(x_\star)) \le 0.5$.

Using Equation (21), we can upper bound the dueling regret in Equation (1) as

$$\begin{aligned} 2r_t^{\mathrm{D}} &= s(f(x_\star)-f(x_t)) + s(f(x_\star)-f(x_t')) - 1\\ &\le s(f(x_\star)-f(x_t)) + \Big(1+\frac{h(x_t,x_t')}{2}\Big)\,s(f(x_\star)-f(x_t)) - 1\\ &\le 2\,s(f(x_\star)-f(x_t)) - 1 + \frac{h(x_t,x_t')}{2}\,s(f(x_\star)-f(x_t)). \end{aligned} \tag{22}$$

Step 2: Next, we show that the single-step regret is bounded by $\omega_t(x_t,x_t')$. First, consider the term $s(f(x_\star)-f(x_t))$:

$$\begin{aligned} s(f(x_\star)-f(x_t)) - 0.5 &= 0.5 - s(f(x_t)-f(x_\star)) && \text{(sigmoid identity)}\\ &\le 0.5 - \mathrm{LCB}_t(x_t,x_\star) && \text{(w.h.p., Cor. 5)}\\ &\le 0.5 - \mathrm{LCB}_t(x_t,x_t') && \text{(def. of } x_t')\\ &\le |0.5 - s(h_t(x_t,x_t'))| + \omega_t(x_t,x_t') && \text{(def. of } \mathrm{LCB}_t)\\ &\le 2\,\omega_t(x_t,x_t'). && \text{(Eq. 17)} \end{aligned} \tag{23}$$

Using Equation (23) for the first term in Equation (22), we get

$$2r_t^{\mathrm{D}} \le 4\,\omega_t(x_t,x_t') + \frac{h(x_t,x_t')}{2}\,s(f(x_\star)-f(x_t)). \tag{24}$$

It remains to bound the second term of Equation (24). By the mean-value theorem, there exists $z \in [0, h(x_t,x_t')]$ such that

$$\dot{s}(z)\,\big(h(x_t,x_t') - 0\big) = s(h(x_t,x_t')) - s(0).$$

Now since $\kappa = \sup_{|z|\le B} 1/\dot{s}(z)$,

$$h(x_t,x_t') \le \kappa\,\big(s(h(x_t,x_t')) - 0.5\big). \tag{25}$$

Combining Equation (20) with Equation (25), it follows that

$$h(x_t,x_t') \le 2\kappa\,\omega_t(x_t,x_t'). \tag{26}$$

Using Equation (26) in Equation (24) and the fact that $s(f(x_\star)-f(x_t)) \le 1$, we get

$$2r_t^{\mathrm{D}} \le (4+\kappa)\,\omega_t(x_t,x_t').$$

Therefore, the cumulative dueling regret satisfies

$$\begin{aligned} R^{\mathrm{D}}(T) = \sum_{t=1}^T r_t^{\mathrm{D}} &\le \sqrt{T\sum_{t=1}^T (r_t^{\mathrm{D}})^2}\\ &\le (2+\kappa/2)\,\beta_T^{\mathrm{D}}(\delta)\sqrt{T\sum_{t=1}^T \big(\sigma_t^{\mathrm{D}}(x_t,x_t')\big)^2} && (\beta_t^{\mathrm{D}}(\delta) \le \beta_T^{\mathrm{D}}(\delta))\\ &\le C_3\,\beta_T^{\mathrm{D}}(\delta)\sqrt{T\gamma_T^{\mathrm{D}}} && \text{(Lem. 14)} \end{aligned}$$

with probability greater than $1-\delta$ for all $T \ge 1$. ∎
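The proof leans on three elementary properties of the sigmoid: the derivative identity $\dot{s}(z) = s(z)s(-z)$, the reflection $s(-z) = 1-s(z)$, and the mean-value bound (25) on $[0,B]$. They can be checked numerically; the value of $B$ and the grid are illustrative.

```python
import jax
import jax.numpy as jnp
from jax.nn import sigmoid

B = 2.0
kappa = 1.0 / (sigmoid(B) * sigmoid(-B))  # sup_{|z|<=B} 1/s'(z), attained at |z| = B
z = jnp.linspace(0.0, B, 101)

s_dot = jax.vmap(jax.grad(sigmoid))(z)
print(jnp.allclose(s_dot, sigmoid(z) * sigmoid(-z)))   # derivative identity
print(jnp.allclose(sigmoid(-z), 1.0 - sigmoid(z)))     # reflection identity
print(bool(jnp.all(z <= kappa * (sigmoid(z) - 0.5))))  # Eq. (25) on [0, B]
```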
C.3 Extending Algorithms for Linear Dueling Bandits to the Kernelized Setting

Maximum Informative Pair Algorithm. Proposed in saha2021optimal for linear utilities, the MaxInP algorithm similarly maintains a set of plausible maximizer arms and picks the pair of actions with the largest joint uncertainty, which is therefore expected to be informative. Algorithm 3 presents the kernelized variant of this algorithm. Using Corollary 5, we can show that the kernelized MaxInP also satisfies an $\tilde{\mathcal{O}}(\gamma_T\sqrt{T})$ regret.

Theorem 15. Let $\delta \in (0,1]$ and choose the exploration coefficient $\beta_t^{\mathrm{D}}(\delta)$ as defined in Corollary 5. Then MaxInP satisfies the anytime dueling regret guarantee

$$\mathbb{P}\Big(\forall T \ge 0:\ R^{\mathrm{D}}(T) \le C_2\,\beta_T^{\mathrm{D}}(\delta)\sqrt{T\gamma_T^{\mathrm{D}}}\Big) \ge 1-\delta$$

where $\gamma_T^{\mathrm{D}}$ is the $T$-step information gain of the kernel $k^{\mathrm{D}}$ and $C_2 = \sqrt{32/\log(1+4(\lambda\kappa)^{-1})}$.

Proof of Theorem 15. When selecting $(x_t,x_t')$ according to Algorithm 3, we choose the pair via

$$x_t, x_t' = \arg\max_{x,x'\in\mathcal{M}_t}\ \omega_t(x,x') \tag{27}$$

where the action space is restricted to $\mathcal{M}_t$, and therefore

$$s(h_t(x_\star,x_t)) \le 1/2 + \omega_t(x_t,x_\star), \qquad s(h_t(x_\star,x_t')) \le 1/2 + \omega_t(x_t',x_\star), \tag{28}$$

where we have used the identity $s(-z) = 1-s(z)$. Simultaneously for all $t \ge 1$, we can bound the single-step dueling regret with probability greater than $1-\delta$:

$$\begin{aligned} 2r_t^{\mathrm{D}} &= s(f(x_\star)-f(x_t)) + s(f(x_\star)-f(x_t')) - 1\\ &\le s(h_t(x_\star,x_t)) + \omega_t(x_\star,x_t) + s(h_t(x_\star,x_t')) + \omega_t(x_\star,x_t') - 1 && \text{(w.h.p.)}\\ &\le 2\,\big(\omega_t(x_\star,x_t) + \omega_t(x_\star,x_t')\big) && \text{(Eq. 28)}\\ &\le 4\,\omega_t(x_t,x_t') && \text{(Eq. 27)} \end{aligned}$$

where for the first inequality we have invoked Corollary 5. Then the regret satisfies

$$\begin{aligned} R^{\mathrm{D}}(T) = \sum_{t=1}^T r_t^{\mathrm{D}} &\le \sqrt{T\sum_{t=1}^T (r_t^{\mathrm{D}})^2}\\ &\le 2\,\beta_T^{\mathrm{D}}(\delta)\sqrt{T\sum_{t=1}^T \big(\sigma_t^{\mathrm{D}}(x_t,x_t')\big)^2} && (\beta_t^{\mathrm{D}}(\delta) \le \beta_T^{\mathrm{D}}(\delta))\\ &\le C_2\,\beta_T^{\mathrm{D}}(\delta)\sqrt{T\gamma_T^{\mathrm{D}}} && \text{(Lem. 14)} \end{aligned}$$

with probability greater than $1-\delta$ for all $T \ge 1$. ∎

Algorithm 3 MaxInP - Kernelized Variant
Input $(\beta_t^{\mathrm{D}})_{t\ge1}$.
for $t \ge 1$ do
  Play the most informative pair via
  $$x_t, x_t' = \arg\max_{x,x'\in\mathcal{M}_t}\ \sigma_t^{\mathrm{D}}(x,x')$$
  Observe $y_t$ and append history.
  Update $h_{t+1}$ and $\sigma_{t+1}^{\mathrm{D}}$ and the set of plausible maximizers
  $$\mathcal{M}_{t+1} = \big\{x\in\mathcal{X}\ \big|\ \forall x'\in\mathcal{X}:\ s(h_{t+1}(x,x')) + \beta_{t+1}^{\mathrm{D}}\,\sigma_{t+1}^{\mathrm{D}}(x,x') > 1/2\big\}.$$
end for
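On a finite grid, both steps of Algorithm 3 are simple array operations. A minimal sketch, with `h[i, j]` $= h_{t+1}(x_i,x_j)$ and `sigma_d[i, j]` $= \sigma_{t+1}^{\mathrm{D}}(x_i,x_j)$ precomputed (illustrative names), and the plausible-maximizer set held as a boolean mask:

```python
import jax.numpy as jnp
from jax.nn import sigmoid

def maxinp_pair(sigma_d, mask):
    """Most informative pair: argmax of sigma^D over M_t x M_t."""
    score = jnp.where(jnp.outer(mask, mask), sigma_d, -jnp.inf)
    i, j = jnp.unravel_index(jnp.argmax(score), score.shape)
    return int(i), int(j)

def plausible_maximizers(h, sigma_d, beta):
    """M_{t+1} = {x : s(h_{t+1}(x, x')) + beta * sigma^D(x, x') > 1/2 for all x'}."""
    return jnp.all(sigmoid(h) + beta * sigma_d > 0.5, axis=1)
```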
Dueling Information Directed Sampling (IDS) Algorithm. To choose actions at each iteration $t$, MaxInP and MaxMinLCB require solving an optimization problem over $\mathcal{X}\times\mathcal{X}$. The Dueling IDS approach of kirschner2021bias addresses this issue and presents an algorithm that requires solving an optimization problem over $\mathcal{X}\times[0,1]$, which is computationally more efficient when $d_0 > 1$. That work considers kernelized utilities; however, it assumes that the probability of preference itself lies in an RKHS and solves a kernelized ridge regression problem to estimate the probability $s(h(x,x'))$. In the following, we present an improved version of this algorithm by considering the preference-based loss (6) for estimating the utility function. We modify the algorithm and the theoretical analysis to accommodate this.

Consider the sub-optimality gap $\Delta(x) := h(x_\star,x)$ for an action $x\in\mathcal{X}$. We may estimate this gap using the reward estimate maximizer $\hat{x}_t^\star := \arg\max_{x\in\mathcal{X}} f_t(x)$. Suppose we choose $\hat{x}_t^\star$ as one of the actions. Then $u_t$ gives an optimistic estimate of the highest obtainable reward at this step:

$$u_t := \max_{x\in\mathcal{X}}\ h_t(x,\hat{x}_t^\star) + \tilde\beta_t\,\sigma_t^{\mathrm{D}}(x,\hat{x}_t^\star),$$

where $\tilde\beta_t$ is the exploration coefficient. We bound $\Delta(x)$ by the estimated gap

$$\hat\Delta_t(x) := u_t + h_t(\hat{x}_t^\star, x) \tag{29}$$

and show its uniform validity in Lemma 17. We can now propose the kernelized logistic IDS algorithm with preference feedback in Algorithm 4, as a variant of the algorithm of kirschner2021bias.

Theorem 16. Let $\delta\in(0,1]$ and, for all $t\ge1$, set the exploration coefficient as $\tilde\beta_t = \beta_t^{\mathrm{D}}(\delta)/L$. Then Algorithm 4 satisfies the anytime cumulative dueling regret guarantee

$$\mathbb{P}\Big(\forall T\ge0:\ R^{\mathrm{D}}(T) = \mathcal{O}\big(\beta_T^{\mathrm{D}}(\delta)\sqrt{T(\gamma_T + \log(1/\delta))}\big)\Big) \ge 1-\delta.$$

Algorithm 4 Preference-based IDS - Kernelized Logistic Variant
Initialize Set $(\beta_t)_{t\ge1}$ according to Theorem 2.
for $t\ge1$ do
  Find a greedy action by fixing any point $x_{\text{null}}\in\mathcal{X}$ and maximizing
  $$x_t^{(1)} = \hat{x}_t^\star = \arg\max_{x\in\mathcal{X}}\ h_t(x, x_{\text{null}}).$$
  Update $u_t$ and $\hat\Delta_t(x)$ acc. to (29).
  Find an informative action and the probability of selection via
  $$x_t^{(2)}, p_t = \arg\min_{x\in\mathcal{X},\,p\in[0,1]}\ \frac{\big((1-p)\,u_t + p\,\hat\Delta_t(x)\big)^2}{p\,\log\big(1+(\lambda\kappa)^{-1}(\sigma_t^{\mathrm{D}}(x_t^{(1)},x))^2\big)}.$$
  Draw $\alpha_t \sim \mathrm{Bern}(p_t)$.
  if $\alpha_t = 1$ then choose the pair $(x_t,x_t') = (x_t^{(1)},x_t^{(2)})$, else choose $(x_t,x_t') = (x_t^{(1)},x_t^{(1)})$.
  Observe $y_t$ and append history.
  Update $h_{t+1}$ and $\sigma_{t+1}^{\mathrm{D}}$.
end for
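The inner arg-min of Algorithm 4 is one-dimensional in $p$ once $x$ is fixed, so on a grid it can be solved by brute force over both. A sketch, assuming $u_t$, the estimated gaps, and $\sigma_t^{\mathrm{D}}(x_t^{(1)},\cdot)$ are precomputed arrays (illustrative; $p = 0$ is excluded to keep the ratio finite):

```python
import jax.numpy as jnp

def ids_inner(u, gap, sigma_d, lam_kappa, n_p=100):
    """Minimize ((1-p)*u + p*gap[i])^2 / (p * log(1 + sigma_d[i]^2 / lam_kappa))
    jointly over candidates i and p in (0, 1]."""
    p = jnp.arange(1, n_p + 1) / n_p  # grid over p, excluding 0
    num = ((1.0 - p)[None, :] * u + p[None, :] * gap[:, None]) ** 2
    den = p[None, :] * jnp.log1p(sigma_d[:, None] ** 2 / lam_kappa)
    i, j = jnp.unravel_index(jnp.argmin(num / den), num.shape)
    return int(i), float(p[j])  # informative action index and selection probability
```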
Proof of Theorem 16. Our approach closely follows the proof of kirschner2021bias. Let $\mathcal{P}(\cdot)$ denote the set of continuous probability distributions over a domain. Define the expected average gap for a policy $\mu\in\mathcal{P}(\mathcal{X}\times\mathcal{X})$,

$$\hat\Delta_t(\mu) := \frac{1}{2}\,\mathbb{E}_{x,x'\sim\mu}\big[\hat\Delta_t(x) + \hat\Delta_t(x')\big],$$

and the expected information ratio as

$$\Xi_t(\mu) := \frac{\hat\Delta_t^2(\mu)}{\mathbb{E}_{x,x'\sim\mu}\log\big(1+(\lambda\kappa)^{-1}(\sigma_t^{\mathrm{D}}(x,x'))^2\big)}.$$

Algorithm 4 draws actions via $\mu_t = (1-p_t)\,\delta_{(x_t^{(1)},x_t^{(1)})} + p_t\,\delta_{(x_t^{(1)},x_t^{(2)})}$, where $\delta_{(x,x')}$ denotes the Dirac delta. Then by kirschner2020information,

$$\frac{1}{2}\sum_{t=1}^T \big[h(x_\star,x_t) + h(x_\star,x_t')\big] \le \sqrt{\sum_{t=1}^T \Xi_t(\mu_t)\,\big(\gamma_T + \mathcal{O}(\log(1/\delta))\big)} + \mathcal{O}(\log(T/\delta)),$$

which allows us to bound the regret with probability greater than $1-\delta$ as

$$R^{\mathrm{D}}(T) \le L\sqrt{\sum_{t=1}^T \Xi_t(\mu_t)\,\big(\gamma_T + \mathcal{O}(\log(1/\delta))\big)} + \mathcal{O}(L\log(T/\delta)) \tag{30}$$

since $s(\cdot)$ with its domain restricted to $[-2B,2B]$ is $L$-Lipschitz. It remains to bound $\Xi_t(\mu_t)$, the expected information ratio of Algorithm 4. Now, by the definition of $\mu_t$,

$$\begin{aligned} 2\hat\Delta_t(\mu_t) &= (2-p_t)\,\hat\Delta_t(x_t^{(1)}) + p_t\,\hat\Delta_t(x_t^{(2)})\\ &= (2-p_t)\,\big(u_t + h_t(x_t^{(1)},x_t^{(1)})\big) + p_t\,\hat\Delta_t(x_t^{(2)})\\ &= 2(1-p_t)\,u_t + p_t\,\big(\hat\Delta_t(x_t^{(2)}) + u_t\big), \end{aligned}$$

and similarly

$$\begin{aligned} \mathbb{E}_{\mu_t}\log\Big(1+\frac{\sigma_t^{\mathrm{D}}(x,x')^2}{\lambda\kappa}\Big) &= (1-p_t)\log\Big(1+\frac{\sigma_t^{\mathrm{D}}(x_t^{(1)},x_t^{(1)})^2}{\lambda\kappa}\Big) + p_t\log\Big(1+\frac{\sigma_t^{\mathrm{D}}(x_t^{(1)},x_t^{(2)})^2}{\lambda\kappa}\Big)\\ &= p_t\log\big(1+(\lambda\kappa)^{-1}\sigma_t^{\mathrm{D}}(x_t^{(1)},x_t^{(2)})^2\big) && (\sigma_t^{\mathrm{D}}(x,x)=0) \end{aligned}$$

allowing us to re-write the expected information ratio as

$$\begin{aligned} \Xi_t(\mu_t) &= \frac{\big(2(1-p_t)\,u_t + p_t(\hat\Delta_t(x_t^{(2)})+u_t)\big)^2}{4\,p_t\log\big(1+(\lambda\kappa)^{-1}\sigma_t^{\mathrm{D}}(x_t^{(1)},x_t^{(2)})^2\big)}\\ &\le \frac{\big((1-p_t)\,u_t + p_t\,\hat\Delta_t(x_t^{(2)})\big)^2}{p_t\log\big(1+(\lambda\kappa)^{-1}\sigma_t^{\mathrm{D}}(x_t^{(1)},x_t^{(2)})^2\big)} && (u_t \le \hat\Delta_t(x))\\ &= \min_{x,p}\ \frac{\big((1-p)\,u_t + p\,\hat\Delta_t(x)\big)^2}{p\log\big(1+(\lambda\kappa)^{-1}\sigma_t^{\mathrm{D}}(x_t^{(1)},x)^2\big)} && \text{(def. of } (p_t,x_t^{(2)}))\\ &\le \min_x\ \frac{\hat\Delta_t^2(x)}{\log\big(1+(\lambda\kappa)^{-1}\sigma_t^{\mathrm{D}}(x_t^{(1)},x)^2\big)}. && \text{(set } p=1) \end{aligned}$$

Now consider the definition of $u_t$ and let $z_t$ denote the action for which $u_t$ is achieved, i.e., $z_t = \arg\max_x\ h_t(x,\hat{x}_t^\star) + \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(x,\hat{x}_t^\star)$. Then

$$\hat\Delta_t(z_t) = h_t(\hat{x}_t^\star,z_t) + \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(z_t,\hat{x}_t^\star) + h_t(z_t,\hat{x}_t^\star) = \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(z_t,\hat{x}_t^\star);$$

therefore, using the above chain of equations, we may write

$$\begin{aligned} \Xi_t(\mu_t) &\le \min_x\ \frac{\hat\Delta_t^2(x)}{\log\big(1+(\lambda\kappa)^{-1}\sigma_t^{\mathrm{D}}(x_t^{(1)},x)^2\big)}\\ &\le \frac{\hat\Delta_t^2(z_t)}{\log\big(1+(\lambda\kappa)^{-1}\sigma_t^{\mathrm{D}}(x_t^{(1)},z_t)^2\big)}\\ &\le \frac{\bar\beta_t^2(\delta)\,\sigma_t^{\mathrm{D}}(z_t,\hat{x}_t^\star)^2}{\log\big(1+(\lambda\kappa)^{-1}\sigma_t^{\mathrm{D}}(x_t^{(1)},z_t)^2\big)}\\ &\le \frac{4\,\bar\beta_t^2(\delta)}{\log(1+4(\lambda\kappa)^{-1})} \end{aligned} \tag{31}$$

where the last inequality holds due to the following argument: recall that $k(x,x)\le1$, implying that $\sigma_t^{\mathrm{D}}(x,x')^2 \le 4$ and therefore $\log\big(1+(\lambda\kappa)^{-1}\sigma_t^{\mathrm{D}}(x,x')^2\big) \ge \log\big(1+4(\lambda\kappa)^{-1}\big)\,\sigma_t^{\mathrm{D}}(x,x')^2/4$, similar to Lemma 14.
To conclude the proof, from (30) and (31) it holds that

$$\begin{aligned} R^{\mathrm{D}}(T) &\le L\sqrt{\sum_{t=1}^T \Xi_t(\mu_t)\,\big(\gamma_T + \mathcal{O}(\log(1/\delta))\big)} + \mathcal{O}(L\log(T/\delta))\\ &\le L\sqrt{\sum_{t=1}^T \frac{4\,\bar\beta_t^2(\delta)}{\log(1+4(\lambda\kappa)^{-1})}\,\big(\gamma_T + \mathcal{O}(\log(1/\delta))\big)} + \mathcal{O}(L\log(T/\delta))\\ &\le L\sqrt{\frac{4\,T\,\bar\beta_T^2(\delta)}{\log(1+4(\lambda\kappa)^{-1})}\,\big(\gamma_T + \mathcal{O}(\log(1/\delta))\big)} + \mathcal{O}(L\log(T/\delta))\\ &= \mathcal{O}\big(\beta_T^{\mathrm{D}}(\delta)\sqrt{T(\gamma_T + \log(1/\delta))}\big) \end{aligned}$$

with probability greater than $1-\delta$, simultaneously for all $T\ge1$. ∎

C.4 Helper Lemmas for Section C.3

Lemma 17. Let $0<\delta<1$ and $f\in\mathcal{H}_k$. Suppose $\sup_{|a|\le B}\dot{s}(a) = L$ and $\sup_{|a|\le B}1/\dot{s}(a) = \kappa$. Then

$$\mathbb{P}\big(\forall t\ge0,\ x\in\mathcal{X}:\ \Delta(x) \le 2\,\hat\Delta_t(x)\big) \ge 1-\delta.$$

Proof of Lemma 17. Note that for any three inputs $x_1,x_2,x_3$,

$$h(x_1,x_3) = h(x_1,x_2) + h(x_2,x_3). \tag{32}$$

Therefore, from the definition of the estimated gap we get

$$\begin{aligned} \hat\Delta_t(x) &= \max_{z\in\mathcal{X}}\ h_t(z,\hat{x}_t^\star) + h_t(\hat{x}_t^\star,x) + \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(z,\hat{x}_t^\star)\\ &= \max_{z\in\mathcal{X}}\ h_t(z,x) + \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(z,\hat{x}_t^\star)\\ &\ge h_t(x,x) + \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(x,\hat{x}_t^\star)\\ &= \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(x,\hat{x}_t^\star). \end{aligned} \tag{33}$$

Then, going back to the definition of the true gap, we may write

$$\begin{aligned} \Delta(x) &= \max_{z\in\mathcal{X}}\ h(z,x)\\ &= \max_{z\in\mathcal{X}}\ h(z,\hat{x}_t^\star) + h(\hat{x}_t^\star,x) && \text{(Eq. 32)}\\ &\overset{\text{w.h.p.}}{\le} \max_{z\in\mathcal{X}}\ h_t(z,\hat{x}_t^\star) + h_t(\hat{x}_t^\star,x) + \bar\beta_t(\delta)\,\big(\sigma_t^{\mathrm{D}}(z,\hat{x}_t^\star) + \sigma_t^{\mathrm{D}}(\hat{x}_t^\star,x)\big) && \text{(Lem. 18)}\\ &= u_t + h_t(\hat{x}_t^\star,x) + \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(\hat{x}_t^\star,x) && \text{(def. of } u_t)\\ &= \hat\Delta_t(x) + \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(\hat{x}_t^\star,x) && \text{(def. of } \hat\Delta_t(x))\\ &\le 2\,\hat\Delta_t(x) && \text{(Eq. 33)} \end{aligned}$$

with probability greater than $1-\delta$. ∎

Lemma 18. Assume $f\in\mathcal{H}_k$ and suppose $\sup_{|a|\le B}1/\dot{s}(a) = \kappa$. Then for any $0<\delta<1$,

$$\mathbb{P}\Big(\forall t\ge1,\ x,x'\in\mathcal{X}:\ |h(x,x') - h_t(x,x')| \le \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(x,x')\Big) \ge 1-\delta$$

where

$$\bar\beta_t(\delta) := 2B + \sqrt{\frac{\kappa}{\lambda}}\sqrt{2\log(1/\delta) + 2\gamma_t(\sqrt{\lambda\kappa})}.$$

Proof of Lemma 18. This lemma is effectively a weaker parallel of Corollary 5. We have

$$\begin{aligned} |h(x,x') - h_t(x,x')| &= \big|f(x) - f(x') - \big(f_t(x) - f_t(x')\big)\big|\\ &= \big|\psi^\top(x,x')\,(\theta_\star - \theta_t^P)\big|\\ &\le \|\psi(x,x')\|_{(V_t^{\mathrm{D}})^{-1}}\,\|\theta_\star - \theta_t^P\|_{V_t^{\mathrm{D}}}\\ &\overset{\text{w.h.p.}}{\le} \sqrt{\lambda\kappa}\,\bar\beta_t(\delta)\,\|\psi(x,x')\|_{(V_t^{\mathrm{D}})^{-1}} && \text{(Lem. 9)}\\ &\le \bar\beta_t(\delta)\,\sigma_t^{\mathrm{D}}(x,x') && \text{(Lem. 13)} \end{aligned}$$

where the penultimate inequality holds with probability greater than $1-\delta$, while the rest of the inequalities hold deterministically. ∎

Appendix D Details of Experiments

Figure 3: Confidence sets for an illustrative problem with 3 arms at a single time step. Annotated arrows highlight the action selection for three common approaches. MaxMinLCB selects the action pair (1, 2) with the least regret. Upper-bound maximization (Optimism) and information maximization (Max Info) choose sub-optimal arms.

Test Environments. We use a wide range of target functions common in the optimization literature (jamil2013literature) to evaluate the robustness of MaxMinLCB. The results are reported in Table 1 and Table 2. Note that for the experiments we negate all of these functions to obtain utilities. We use a uniform grid of 100 points over their specified domains and scale the utility values to $[-3,3]$. A code sketch of the first two objectives follows the list below.

• Ackley: $\mathcal{X} = [-5,5]^d$, $d=2$,
$$f(x) = -20\exp\Big(-0.2\sqrt{\tfrac{1}{d}\sum_{i=1}^d x_i^2}\Big) - \exp\Big(\tfrac{1}{d}\sum_{i=1}^d \cos(2\pi x_i)\Big) + 20 + \exp(1)$$

• Branin: $\mathcal{X} = [-5,10]\times[0,15]$,
$$f(x) = \Big(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\Big)^2 + 10\Big(1-\frac{1}{8\pi}\Big)\cos(x_1) + 10$$

• Eggholder: $\mathcal{X} = [-512,512]^2$,
$$f(x) = -(x_2+47)\sin\Big(\sqrt{\big|x_2 + \tfrac{x_1}{2} + 47\big|}\Big) - x_1\sin\Big(\sqrt{\big|x_1 - (x_2+47)\big|}\Big)$$

• Hölder: $\mathcal{X} = [-10,10]^2$,
$$f(x) = -\Big|\sin(x_1)\cos(x_2)\exp\Big(\Big|1 - \frac{\sqrt{x_1^2+x_2^2}}{\pi}\Big|\Big)\Big|$$

• Matyas: $\mathcal{X} = [-10,10]^2$,
$$f(x) = 0.26\,(x_1^2+x_2^2) - 0.48\,x_1x_2$$

• Michalewicz: $\mathcal{X} = [0,\pi]^d$, $d=2$, $m=10$,
$$f(x) = -\sum_{i=1}^d \sin(x_i)\sin^{2m}\Big(\frac{i\,x_i^2}{\pi}\Big)$$

• Rosenbrock: $\mathcal{X} = [-5,10]^2$,
$$f(x) = \sum_{i=1}^{d-1}\big[100\,(x_{i+1}-x_i^2)^2 + (x_i-1)^2\big]$$
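As referenced above, here is a minimal JAX sketch of the first two objectives; the remaining functions follow the same pattern, and utilities are obtained by negation (the grid construction and scaling to $[-3,3]$ are omitted):

```python
import jax.numpy as jnp

def ackley(x):
    """Ackley function; the domain is [-5, 5]^d."""
    a = -20.0 * jnp.exp(-0.2 * jnp.sqrt(jnp.mean(x ** 2)))
    b = -jnp.exp(jnp.mean(jnp.cos(2.0 * jnp.pi * x)))
    return a + b + 20.0 + jnp.e

def branin(x):
    """Branin function; the domain is [-5, 10] x [0, 15]."""
    x1, x2 = x[0], x[1]
    return ((x2 - 5.1 / (4.0 * jnp.pi ** 2) * x1 ** 2 + 5.0 / jnp.pi * x1 - 6.0) ** 2
            + 10.0 * (1.0 - 1.0 / (8.0 * jnp.pi)) * jnp.cos(x1) + 10.0)

utility = lambda x: -ackley(x)  # utilities are the negated objectives
```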
Algorithm 5 Doubler (ailon2014reducing)
Input $(\beta_t^{\mathrm{D}})_{t\ge1}$.
Let $\mathcal{L}$ be initialized with any action from $\mathcal{X}$
for $t\ge1$ do
  for $j = 1,\dots,2^t$ do
    Select $x_t'$ uniformly at random from $\mathcal{L}$
    Select $x_t = \arg\max_{x\in\mathcal{M}_t}\ s(h_t(x,x_t')) + \beta_t^{\mathrm{D}}\,\sigma_t^{\mathrm{D}}(x,x_t')$
    Observe $y_t$ and append history.
    Update $h_{t+1}$ and $\sigma_{t+1}^{\mathrm{D}}$
  end for
  $\mathcal{L} \leftarrow$ the multi-set of actions played as $x_t$ in the last for-loop over index $j$
end for

Algorithm 6 MultiSBM (ailon2014reducing)
Input $(\beta_t^{\mathrm{D}})_{t\ge1}$.
for $t\ge1$ do
  Set $x_t \leftarrow x_{t-1}'$
  Select $x_t' = \arg\max_{x\in\mathcal{M}_t}\ s(h_t(x,x_t)) + \beta_t^{\mathrm{D}}\,\sigma_t^{\mathrm{D}}(x,x_t)$
  Observe $y_t$ and append history.
  Update $h_{t+1}$ and $\sigma_{t+1}^{\mathrm{D}}$ and the set of plausible maximizers
  $$\mathcal{M}_{t+1} = \big\{x\in\mathcal{X}\ \big|\ \forall x'\in\mathcal{X}:\ s(h_{t+1}(x,x')) + \beta_{t+1}^{\mathrm{D}}\,\sigma_{t+1}^{\mathrm{D}}(x,x') > 1/2\big\}.$$
end for

Algorithm 7 RUCB (zoghi2014relative)
Input $(\beta_t^{\mathrm{D}})_{t\ge1}$.
for $t\ge1$ do
  Select $x_t'$ uniformly at random from $\mathcal{M}_t$
  Select $x_t = \arg\max_{x\in\mathcal{M}_t}\ s(h_t(x,x_t')) + \beta_t^{\mathrm{D}}\,\sigma_t^{\mathrm{D}}(x,x_t')$
  Observe $y_t$ and append history.
  Update $h_{t+1}$ and $\sigma_{t+1}^{\mathrm{D}}$ and the set of plausible maximizers
  $$\mathcal{M}_{t+1} = \big\{x\in\mathcal{X}\ \big|\ \forall x'\in\mathcal{X}:\ s(h_{t+1}(x,x')) + \beta_{t+1}^{\mathrm{D}}\,\sigma_{t+1}^{\mathrm{D}}(x,x') > 1/2\big\}.$$
end for

Acquisition Function Maximization. In our computations, to eliminate additional noise coming from approximate solvers, we use an exhaustive search over the domain for the action selection of LGP-UCB, MaxMinLCB, and the other presented algorithms. For the numerical experiments presented in this paper, we do not consider this a practical limitation: due to our efficient implementation in JAX, this optimization step can be carried out in parallel and seamlessly supports accelerator devices such as GPUs and TPUs.
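The exhaustive search described above is a one-line pattern in JAX: score every grid point in parallel with vmap and take the argmax. A minimal sketch with a placeholder acquisition function:

```python
import jax
import jax.numpy as jnp

def exhaustive_argmax(acquisition, grid):
    """Evaluate `acquisition` on every candidate in parallel; runs on GPU/TPU if present.

    acquisition: callable mapping an action x to a scalar score (e.g. a UCB rule).
    grid: (n, d) array of candidate actions."""
    scores = jax.vmap(acquisition)(grid)
    return grid[jnp.argmax(scores)]

grid = jnp.linspace(-5.0, 5.0, 100)[:, None]                # toy 1-D grid
best = exhaustive_argmax(lambda x: -jnp.sum(x ** 2), grid)  # placeholder acquisition
```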
Hyper-parameters for Logistic Bandits. We set $\delta = 0.1$ for all algorithms. For GP-UCB and LGP-UCB, we set $\beta = 1$ and use $0.25$ for the noise variance. We use the radial basis function (RBF) kernel and choose its variance and length-scale parameters from $[0.1, 1.0]$ to optimize the performance of each algorithm separately. For LGP-UCB, we tuned $\lambda$, the $L_2$ penalty coefficient in Proposition 1, on the grid $[0.0, 0.1, 1.0, 5.0]$, and $B$ on $[1.0, 2.0, 3.0]$. The hyper-parameter selection was done for each algorithm separately to create a fair comparison.

Hyper-parameters for Preference-based Bandits. We tune the same parameters as for LGP-UCB on the preference-feedback bandit problem, over the following grid: $\lambda \in [0, 0.1, 1]$, $B \in [1, 2, 3]$, and $[0.1, 1]$ for the kernel variance and length scale. The same hyper-parameters are tuned separately for every baseline.

Pseudo-code for Baselines. Algorithm 5, Algorithm 6, and Algorithm 7 describe the baselines used for the benchmark of Section 6.2. MaxInP and IDS are defined in Algorithm 3 and Algorithm 4, respectively, in Section C.3, alongside their theoretical analysis. We note that Doubler includes an internal for-loop; therefore, we adjusted the time horizon $T$ such that it observes the same number of feedback labels $y_t$ as the other algorithms, for a fair comparison.

Computational Resources and Costs. We ran our experiments on a shared cluster equipped with various NVIDIA GPUs and AMD EPYC CPUs. Our default configuration for all experiments was a single GPU with 24 GB of memory, 16 CPU cores, and 16 GB of RAM. Each of the 11 experiment configurations reported in Section 6.2 ran for about 12 hours, and the experiment reported in Section 6.1 ran for 5 hours. The total computational cost to reproduce our results is around 140 hours of the default configuration. Our total computational costs, including failed experiments, are estimated to be 2-3 times higher.

D.1 Yelp Experiment

We filter the Yelp dataset for restaurants in Philadelphia, USA with at least 500 reviews, and for users who reviewed at least 90 of these restaurants. The final dataset includes 275 restaurants, 20 users, and a total of 2563 reviews. We define the action space $\mathcal{X}$ by assigning to each restaurant the 32-dimensional embedding of its reviews, i.e., $\mathcal{X} \subseteq \mathbb{R}^{32}$. For each restaurant, we concatenate all of its reviews in the filtered dataset and use the text-embedding-3-large OpenAI embedding model to retrieve the embeddings. The Yelp dataset provides utility values for users in the form of ratings on a scale of 1 to 5; however, not all users rated every restaurant. We use collaborative filtering to estimate the missing ratings (schafer2007collaborative). For each user, these values are used as the utilities of the restaurants during the simulation. Experiments are conducted separately for each user; therefore, utility values are not aggregated, but the action space is identical across experiments.

Note that we do not assume any explicit functional form for the utility functions $f$ that we calibrate to this data. Instead, the action space $\mathcal{X}$ and the utility values are derived separately from the dataset. Regardless, the results presented in Section 6.3 show that our kernelized method achieves good performance on this task.

Appendix E Additional Experiments

In this section, we provide Table 2, which details our logistic bandit benchmark, complementing the results in Section 6.1. Figure 4 and Figure 5 show the logistic and dueling regret for additional test functions, complementing the results of Table 1.

Table 2: Benchmarking $R_T^{\mathrm{L}}$ for a variety of test utility functions, $T = 2000$.

| $f$ | LGP-UCB | GP-UCB | Ind-UCB | Log-UCB1 |
| --- | --- | --- | --- | --- |
| Ackley | 23.97 ± 1.54 | 96.35 ± 1.27 | 479.63 ± 3.42 | 1810.30 ± 0.00 |
| Branin | 75.23 ± 17.51 | 44.81 ± 2.81 | 142.37 ± 1.33 | 1810.30 ± 0.00 |
| Eggholder | 167.11 ± 31.26 | 152.34 ± 4.28 | 559.56 ± 4.15 | 1041.00 ± 0.00 |
| Hölder | 57.35 ± 10.23 | 150.41 ± 9.64 | 426.28 ± 2.94 | 105.64 ± 4.88 |
| Matyas | 36.64 ± 8.77 | 50.21 ± 2.07 | 137.98 ± 1.21 | 920.48 ± 0.57 |
| Michalewicz | 283.85 ± 3.62 | 175.46 ± 2.86 | 566.36 ± 3.75 | 1810.30 ± 0.00 |
| Rosenbrock | 8.92 ± 0.33 | 26.14 ± 0.87 | 76.13 ± 0.84 | 897.04 ± 120.68 |

Figure 4: Regret with the Branin utility function under logistic (left) and preference (right) feedback.

Figure 5: Top to bottom: regret for the Eggholder, Hölder, Matyas, Michalewicz, and Rosenbrock functions, under logistic (left) and preference (right) feedback.