diff --git "a/DtFRT4oBgHgl3EQfxjhb/content/tmp_files/load_file.txt" "b/DtFRT4oBgHgl3EQfxjhb/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/DtFRT4oBgHgl3EQfxjhb/content/tmp_files/load_file.txt" @@ -0,0 +1,1229 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf,len=1228 +page_content='An Efficient Solution to s-Rectangular Robust Markov Decision Processes Navdeep Kumar1, Kfir Levy1, Kaixin Wang2, and Shie Mannor1 1Technion 2National University of Singapore February 1, 2023 Abstract We present an efficient robust value iteration for s-rectangular robust Markov Decision Processes (MDPs) with a time complexity comparable to standard (non- robust) MDPs which is significantly faster than any existing method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' We do so by deriving the optimal robust Bellman operator in concrete forms using our Lp water filling lemma.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' We unveil the exact form of the optimal policies, which turn out to be novel threshold policies with the probability of playing an action proportional to its advantage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' 1 Introduction In Markov Decision Processes (MDPs), an agent interacts with the environment and learns to optimally behave in it [28].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' However, the MDP solution may be very sensitive to little changes in the model parameters [23].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Hence we should be cautious applying the solution of the MDP, when the model is changing or when there is uncertainty in the model parameters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Robust MDPs provide a way to address this issue, where an agent can learn to optimally behave even when the model parameters are uncertain [15, 29, 18].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Another motivation to study robust MDPs is that they can lead to better generalization [33, 34, 25] compared to non-robust solutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Unfortunately, solving robust MDPs is proven to be NP-hard for general uncertainty sets [32].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' As a result, the uncertainty set is often assumed to be rectangular, which enables the existence of a contractive robust Bellman operators to obtain the optimal robust value function [24, 18, 22, 12, 32].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Recently, there has been progress in solving robust MDPs for some sa-rectangular uncertainty sets via both value-based and policy-based methods [30, 31].' 
An uncertainty set is said to be sa-rectangular if it can be expressed as a Cartesian product of the uncertainties over all state-action pairs. This is generalized to an s-rectangular uncertainty set, which can be expressed as a Cartesian product of the uncertainties over all states only. Compared to sa-rectangular robust MDPs, s-rectangular robust MDPs are less conservative and hence more desirable; however, they are also much more difficult and poorly understood [32]. Currently, there are a few works that consider s-rectangular $L_p$ robust MDPs, where the uncertainty set is further constrained by the $L_p$ norm, but they rely on black-box methods, which limits their applicability and offers little insight [7, 16, 9, 32]. No effective value-based or policy-based methods exist for solving any s-rectangular robust MDPs. Moreover, it is known that optimal policies in s-rectangular robust MDPs can be stochastic, in contrast to sa-rectangular robust MDPs and non-robust MDPs [32]. However, so far, nothing is known about the stochastic nature of the optimal policies in s-rectangular MDPs.

In this work, we mainly focus on s-rectangular $L_p$ robust MDPs. We first revise the unrealistic assumptions made on the transition-kernel noise in [9] and introduce forbidden transitions, which leads to novel regularizers. Then we derive the robust Bellman operator (policy evaluation) for s-rectangular robust MDPs in closed form, which is equivalent to a reward-value-policy-regularized non-robust Bellman operator, without the radius assumption 5.1 of [9].
We exploit this equivalence to derive an optimal robust Bellman operator in concrete forms using our $L_p$ water pouring lemma, which generalizes the existing water pouring lemma for the $L_2$ case [1]. We can compute these operators in closed form for $p = 1, \infty$, exactly by a simple algorithm for $p = 2$, and approximately by binary search for general $p$. We show that the time complexity of robust value iteration for $p = 1, 2$ is the same as that of non-robust value iteration. For general $p$, the complexity includes some additional log-factors due to binary searches. In addition, we derive a complete characterization of the stochastic nature of optimal policies in s-rectangular robust MDPs. The optimal policies in this case are threshold policies that play only actions with positive advantage, with probability proportional to the $(p-1)$-th power of the advantage.

Related Work. For sa-rectangular R-contamination robust MDPs, [30] derived robust Bellman operators which are equivalent to value-regularized non-robust Bellman operators, enabling efficient robust value iteration. Building upon this work, [31] derived a robust policy gradient which is equivalent to the non-robust policy gradient with regularizer and correction terms. Unfortunately, these methods cannot be naturally generalized to s-rectangular robust MDPs. For s-rectangular robust MDPs, methods such as robust value iteration [6, 32], robust modified policy iteration [19], and partial robust policy iteration [16] try to approximately evaluate robust Bellman operators using a variety of tools in order to estimate the optimal robust value function. The scalability of these methods has been limited due to their reliance on an external black-box solver such as Linear Programming.
Previous works have explored robust MDPs from a regularization perspective [9, 10, 17, 11]. Specifically, [9] showed that s-rectangular robust MDPs are equivalent to reward-value-policy regularized MDPs, and proposed a gradient-based policy iteration for s-rectangular $L_p$ robust MDPs (where the uncertainty set is s-rectangular and constrained by the $L_p$ norm). But this gradient-based policy improvement relies on a black-box simplex projection, hence it is very slow and not scalable. A detailed discussion of the above works can be found in the appendix.

2 Preliminaries

2.1 Notations

For a set $S$, $|S|$ denotes its cardinality. $\langle u, v \rangle := \sum_{s \in S} u(s)v(s)$ denotes the dot product between functions $u, v : S \to \mathbb{R}$. $\|v\|_p^q := (\sum_s |v(s)|^p)^{q/p}$ denotes the $q$-th power of the $L_p$ norm of a function $v$, and we use $\|v\|_p := \|v\|_p^1$ and $\|v\| := \|v\|_2$ as shorthand. For a set $C$, $\Delta_C := \{a : C \to \mathbb{R} \mid a(c) \geq 0\ \forall c,\ \sum_{c \in C} a(c) = 1\}$ is the probability simplex over $C$. $\mathbf{0}$ and $\mathbf{1}$ denote the all-zeros vector and the all-ones vector/function, respectively, of appropriate dimension/domain. $\mathbb{1}(a = b) := 1$ if $a = b$ and $0$ otherwise is the indicator function. For vectors $u, v$, $\mathbb{1}(u \geq v)$ is the component-wise indicator vector, i.e. $\mathbb{1}(u \geq v)(x) = \mathbb{1}(u(x) \geq v(x))$. $A \times B = \{(a, b) \mid a \in A, b \in B\}$ is the Cartesian product of sets $A$ and $B$.
2.2 Markov Decision Processes

A Markov Decision Process (MDP) can be described as a tuple $(S, A, P, R, \gamma, \mu)$, where $S$ is the state space, $A$ is the action space, $P$ is a transition kernel mapping $S \times A$ to $\Delta_S$, $R$ is a reward function mapping $S \times A$ to $\mathbb{R}$, $\mu$ is an initial distribution over states in $S$, and $\gamma \in [0, 1)$ is a discount factor. The expected discounted cumulative reward (return) is defined as

$$\rho^\pi_{(P,R)} := \mathbb{E}\Big[\sum_{n=0}^{\infty} \gamma^n R(s_n, a_n) \,\Big|\, s_0 \sim \mu, \pi, P\Big].$$

The return can be written compactly [26] as

$$\rho^\pi_{(P,R)} = \langle \mu, v^\pi_{(P,R)} \rangle, \qquad (1)$$

where $v^\pi_{(P,R)}$ is the value function, defined as

$$v^\pi_{(P,R)}(s) := \mathbb{E}\Big[\sum_{n=0}^{\infty} \gamma^n R(s_n, a_n) \,\Big|\, s_0 = s, \pi, P\Big]. \qquad (2)$$

Our objective is to find an optimal policy $\pi^*_{(P,R)}$ that maximizes the performance $\rho^\pi_{(P,R)}$. This performance can be written as

$$\rho^*_{(P,R)} := \max_\pi \rho^\pi_{(P,R)} = \langle \mu, v^*_{(P,R)} \rangle, \qquad (3)$$

where $v^*_{(P,R)} := \max_\pi v^\pi_{(P,R)}$ is the optimal value function [26]. The value function $v^\pi_{(P,R)}$ and the optimal value function $v^*_{(P,R)}$ are the fixed points of the Bellman operator $T^\pi_{(P,R)}$ and the optimal Bellman operator $T^*_{(P,R)}$, respectively [28]. These $\gamma$-contraction operators are defined as follows: for any vector $v$ and state $s \in S$,

$$(T^\pi_{(P,R)} v)(s) := \sum_a \pi(a|s)\Big[R(s, a) + \gamma \sum_{s'} P(s'|s, a)v(s')\Big], \quad \text{and} \quad T^*_{(P,R)} v := \max_\pi T^\pi_{(P,R)} v.$$

Therefore, the value iteration $v_{n+1} := T^*_{(P,R)} v_n$ converges linearly to the optimal value function $v^*_{(P,R)}$. Given this optimal value function, the optimal policy can be computed as $\pi^*_{(P,R)} \in \arg\max_\pi T^\pi_{(P,R)} v^*_{(P,R)}$.

Remark 1. The vector minimum of a set $U$ of vectors is defined component-wise, i.e. $(\min_{u \in U} u)(i) := \min_{u \in U} u(i)$. This operation is well defined only when there exists a minimal vector $u^* \in U$ such that $u^* \preceq u$ for all $u \in U$. The same holds for other operations such as maximum, argmin, argmax, etc.
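For concreteness, the following is a minimal NumPy sketch of this (non-robust) value iteration; the array layout and function name are our own illustrative choices, not anything prescribed by the paper.

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-8):
    """Non-robust value iteration: iterate v <- T* v to the fixed point.
    P: (S, A, S) transition kernel; R: (S, A) reward."""
    v = np.zeros(R.shape[0])
    while True:
        # (T* v)(s) = max_a [ R(s, a) + gamma * sum_s' P(s'|s, a) v(s') ]
        v_new = (R + gamma * (P @ v)).max(axis=1)
        if np.abs(v_new - v).max() < tol:
            return v_new
        v = v_new
```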
2.3 Robust Markov Decision Processes

A robust Markov Decision Process (MDP) is a tuple $(S, A, \mathcal{P}, \mathcal{R}, \gamma, \mu)$ which generalizes the standard MDP by containing a set of transition kernels $\mathcal{P}$ and a set of reward functions $\mathcal{R}$. Let the uncertainty set $U = \mathcal{P} \times \mathcal{R}$ be the set of tuples of transition kernels and reward functions [18, 24]. The robust performance $\rho^\pi_U$ of a policy $\pi$ is defined to be its worst performance over the entire uncertainty set $U$:

$$\rho^\pi_U := \min_{(P,R) \in U} \rho^\pi_{(P,R)}. \qquad (4)$$

Our objective is to find an optimal robust policy $\pi^*_U$ that maximizes the robust performance $\rho^\pi_U$, defined as

$$\rho^*_U := \max_\pi \rho^\pi_U. \qquad (5)$$

Solving the robust objectives (4) and (5) is strongly NP-hard for general uncertainty sets, even if they are convex [32]. Hence, the uncertainty set $U = \mathcal{P} \times \mathcal{R}$ is commonly assumed to be s-rectangular, meaning that $\mathcal{R}$ and $\mathcal{P}$ can be decomposed state-wise as $\mathcal{R} = \times_{s \in S} \mathcal{R}_s$ and $\mathcal{P} = \times_{s \in S} \mathcal{P}_s$. For further simplification, $U = \mathcal{P} \times \mathcal{R}$ may be assumed to decompose state-action-wise as $\mathcal{R} = \times_{(s,a) \in S \times A} \mathcal{R}_{s,a}$ and $\mathcal{P} = \times_{(s,a) \in S \times A} \mathcal{P}_{s,a}$, known as an sa-rectangular uncertainty set. Throughout the paper, the uncertainty set is assumed to be s-rectangular (or sa-rectangular) unless stated otherwise. Under the s-rectangularity assumption, for every policy $\pi$ there exists a robust value function $v^\pi_U$ which is the minimum of $v^\pi_{(P,R)}$ over all $(P, R) \in U$, and an optimal robust value function $v^*_U$ which is the maximum of $v^\pi_U$ over all policies $\pi$ [32], that is,

$$v^\pi_U := \min_{(P,R) \in U} v^\pi_{(P,R)}, \quad \text{and} \quad v^*_U := \max_\pi v^\pi_U.$$

This implies that the robust policy performance can be rewritten as $\rho^\pi_U = \langle \mu, v^\pi_U \rangle$ and $\rho^*_U = \langle \mu, v^*_U \rangle$.
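To make the robust objective (4) concrete, the sketch below evaluates a fixed policy's worst-case return over a finite sample of models. This is an illustration only, since the true robust performance minimizes over the entire (typically infinite) set $U$; all names and the random-model construction are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 3, 0.9
mu = np.full(S, 1 / S)                        # initial distribution
pi = np.full((S, A), 1 / A)                   # a fixed (uniform) policy

def policy_value(P, R, pi, gamma):
    """v^pi for a fixed model (P, R): solve (I - gamma P_pi) v = R_pi."""
    P_pi = np.einsum('sa,sat->st', pi, P)     # state-to-state kernel under pi
    R_pi = (pi * R).sum(axis=1)
    return np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)

def random_model():
    P = rng.random((S, A, S))
    return P / P.sum(-1, keepdims=True), rng.random((S, A))

# Worst case over a finite *sample* of models, illustrating eq. (4);
# the true robust performance minimizes over the whole uncertainty set U.
models = [random_model() for _ in range(100)]
rho_pi = min(mu @ policy_value(P, R, pi, gamma) for P, R in models)
```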
Furthermore, the robust value function $v^\pi_U$ is the fixed point of the robust Bellman operator $T^\pi_U$ [32, 18], defined as

$$(T^\pi_U v)(s) := \min_{(P,R) \in U} \sum_a \pi(a|s)\Big[R(s, a) + \gamma \sum_{s'} P(s'|s, a)v(s')\Big],$$

and the optimal robust value function $v^*_U$ is the fixed point of the optimal robust Bellman operator $T^*_U$ [18, 32], defined as $T^*_U v := \max_\pi T^\pi_U v$. The optimal robust Bellman operator $T^*_U$ and the robust Bellman operators $T^\pi_U$ are $\gamma$-contraction maps for every policy $\pi$ [32], that is,

$$\|T^*_U v - T^*_U u\|_\infty \leq \gamma \|u - v\|_\infty, \qquad \|T^\pi_U v - T^\pi_U u\|_\infty \leq \gamma \|u - v\|_\infty, \qquad \forall \pi, u, v.$$

So for all initial values $v^\pi_0, v^*_0$, the sequences defined as

$$v^\pi_{n+1} := T^\pi_U v^\pi_n, \qquad v^*_{n+1} := T^*_U v^*_n \qquad (6)$$

converge linearly to their respective fixed points, that is, $v^\pi_n \to v^\pi_U$ and $v^*_n \to v^*_U$. Given this optimal robust value function, the optimal robust policy can be computed as $\pi^*_U \in \arg\max_\pi T^\pi_U v^*_U$ [32]. This makes robust value iteration an attractive method for solving s-rectangular robust MDPs.

3 Method

In this section, we consider constraining the uncertainty set around nominal values by the $L_p$ norm, which is a natural way of limiting the broad class of s (or sa)-rectangular uncertainty sets [9, 16, 3]. We then derive robust Bellman operators for these uncertainty sets, which can be used to obtain robust value functions. This is done separately for the sa-rectangular case in Subsection 3.1 and the s-rectangular case in Subsection 3.2.
We begin by making a few useful definitions. We reserve $q$ for the Hölder conjugate of $p$, i.e. $\frac{1}{p} + \frac{1}{q} = 1$. Let the p-variance function $\kappa_p : \mathbb{R}^S \to \mathbb{R}$ be defined as

$$\kappa_p(v) := \min_{\omega \in \mathbb{R}} \|v - \omega \mathbf{1}\|_p. \qquad (7)$$

For $p = 1, 2, \infty$, the p-variance function $\kappa_p$ has intuitive closed forms, as summarized in Table 1. For general $p$, it can be calculated by binary search in the range $[\min_s v(s), \max_s v(s)]$ (see appendix I for proofs).

Table 1: p-variance

| $x$ | $\kappa_x(v)$ | Remark |
|---|---|---|
| $p$ | $\min_{\omega \in \mathbb{R}} \|v - \omega \mathbf{1}\|_p$ | Binary search |
| $\infty$ | $\frac{\max_s v(s) - \min_s v(s)}{2}$ | Semi-norm |
| $2$ | $\sqrt{\sum_s \big(v(s) - \frac{1}{S}\sum_{s'} v(s')\big)^2}$ | Variance |
| $1$ | $\sum_{i=1}^{\lfloor (S+1)/2 \rfloor} v(s_i) - \sum_{i=\lceil (S+1)/2 \rceil}^{S} v(s_i)$ | Top half minus lower half |

where, for the $p = 1$ row, $v$ is sorted, i.e. $v(s_i) \geq v(s_{i+1})$ for all $i$.
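The following is a small NumPy sketch of $\kappa_p$, using the closed forms of Table 1 for $p = 1, 2, \infty$. For general $p$ we substitute a ternary search over $\omega$ (the objective is convex in $\omega$) in place of the paper's binary search; the function name and iteration count are our own choices.

```python
import numpy as np

def kappa(v, p, iters=100):
    """p-variance kappa_p(v) = min_w ||v - w 1||_p, as in Table 1."""
    v = np.asarray(v, dtype=float)
    if p == np.inf:
        return (v.max() - v.min()) / 2.0       # minimizer: midpoint
    if p == 2:
        return np.linalg.norm(v - v.mean())    # minimizer: mean
    if p == 1:
        # minimizer: median; equals "top half minus lower half" of sorted v
        return np.abs(v - np.median(v)).sum()
    # general p: the objective is convex in w, so a ternary search over
    # [min v, max v] converges to the minimum
    lo, hi = v.min(), v.max()
    f = lambda w: np.linalg.norm(v - w, ord=p)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) <= f(m2):
            hi = m2
        else:
            lo = m1
    return f(0.5 * (lo + hi))
```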
3.1 (Sa)-rectangular $L_p$ robust Markov Decision Processes

In accordance with [9], we define the sa-rectangular $L_p$-constrained uncertainty set $U^{sa}_p$ as

$$U^{sa}_p := (P_0 + \mathcal{P}) \times (R_0 + \mathcal{R}),$$

where $\mathcal{P}, \mathcal{R}$ are noise sets around the nominal kernel $P_0$ and the nominal reward $R_0$, respectively. Furthermore, the noise sets are sa-rectangular, that is, $\mathcal{P} = \times_{s \in S, a \in A} \mathcal{P}_{s,a}$ and $\mathcal{R} = \times_{s \in S, a \in A} \mathcal{R}_{s,a}$, and each component is bounded in $L_p$ norm, that is,

$$\mathcal{R}_{s,a} = \big\{ R_{s,a} \in \mathbb{R} \,\big|\, |R_{s,a}| \leq \alpha_{s,a} \big\}, \quad \text{and} \quad \mathcal{P}_{s,a} = \Big\{ P_{s,a} : S \to \mathbb{R} \,\Big|\, \underbrace{\textstyle\sum_{s'} P_{s,a}(s') = 0}_{\text{simplex condition}},\ \|P_{s,a}\|_p \leq \beta_{s,a} \Big\},$$

with radius vectors $\alpha$ and $\beta$. The radius vector $\beta$ is chosen small enough so that all the transition kernels in $(P_0 + \mathcal{P})$ are well defined. Further, all transition kernels in $(P_0 + \mathcal{P})$ must have each row summing to one, with $P_0$ being a valid transition kernel satisfying this requirement. This implies that the elements of $\mathcal{P}$ must sum to zero across each row, as ensured by the simplex condition above. Our setting differs from [9], as they did not impose this simplex condition on the kernel noise, which renders their setting unrealistic: not all transition kernels in their uncertainty set satisfy the properties of transition kernels. This makes our reward regularizer depend on the q-variance of the value function, $\kappa_q(v)$, instead of the q-norm of the value function, $\|v\|_q$, as in [9]. The main result of this subsection, below, states that robust Bellman operators can be evaluated using only nominal values and regularizers.

Theorem 1. Sa-rectangular $L_p$ robust Bellman operators are equivalent to reward-value regularized (non-robust) Bellman operators:

$$(T^\pi_{U^{sa}_p} v)(s) = \sum_a \pi(a|s)\Big[-\alpha_{s,a} - \gamma \beta_{s,a} \kappa_q(v) + R_0(s, a) + \gamma \sum_{s'} P_0(s'|s, a)v(s')\Big],$$

and

$$(T^*_{U^{sa}_p} v)(s) = \max_{a \in A}\Big[-\alpha_{s,a} - \gamma \beta_{s,a} \kappa_q(v) + R_0(s, a) + \gamma \sum_{s'} P_0(s'|s, a)v(s')\Big].$$

Proof. The proof is in the appendix; it mainly consists of two parts: a) separating the noise from the nominal values; b) showing that the reward noise yields the term $-\alpha_{s,a}$ and the kernel noise yields $-\gamma \beta_{s,a} \kappa_q(v)$.

Note that the reward penalty is proportional to both the uncertainty radii and a novel variance function $\kappa_q(v)$. We recover non-robust value iteration by setting the uncertainty radii (i.e. $\alpha_{s,a}, \beta_{s,a}$) to zero in the above results. The same is true for all subsequent robust results in this paper.

Q-Learning. The above result immediately implies a robust value iteration, and also suggests a Q-value iteration of the following form:

$$Q_{n+1}(s, a) = R_0(s, a) - \alpha_{s,a} - \gamma \beta_{s,a} \kappa_q(v_n) + \gamma \sum_{s'} P_0(s'|s, a) \max_{a'} Q_n(s', a'),$$

where $v_n(s) = \max_a Q_n(s, a)$; this is further discussed in appendix E. Observe that the value variance $\kappa_q(v)$ can be estimated online, using batches or other more sophisticated methods. This paves the path for generalizing to a model-free setting similar to [30].
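Putting Theorem 1 together with the p-variance function gives the following sketch of sa-rectangular robust value iteration; it reuses `kappa` from the sketch above, and the names and stopping rule are our own.

```python
import numpy as np  # reuses kappa() from the earlier sketch

def sa_robust_value_iteration(P0, R0, alpha, beta, gamma, p, tol=1e-8):
    """Robust value iteration for sa-rectangular Lp sets via Theorem 1.
    P0: (S, A, S) nominal kernel; R0, alpha, beta: (S, A) arrays."""
    # Holder conjugate q of p
    q = np.inf if p == 1 else (1.0 if p == np.inf else p / (p - 1))
    v = np.zeros(R0.shape[0])
    while True:
        # penalized nominal backup: reward and kernel noise enter only
        # through the penalties alpha + gamma * beta * kappa_q(v)
        Q = R0 - alpha - gamma * beta * kappa(v, q) + gamma * (P0 @ v)
        v_new = Q.max(axis=1)
        if np.abs(v_new - v).max() < tol:
            return v_new
        v = v_new
```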
Forbidden Transitions. We now focus on the case where $P_0(s'|s, a) = 0$ for some states $s'$, that is, forbidden transitions. In many practical situations, many transitions from a given state are impossible. For example, consider a grid world where only single-step moves (left, right, up, down) are allowed; in this case, making a multi-step jump is impossible. So upon adding noise to the kernel, the system should not start making impossible transitions. Therefore, the noise set $\mathcal{P}$ must satisfy an additional constraint: for any $(s, a)$, if $P_0(s'|s, a) = 0$ then $P(s'|s, a) = 0$ for all $P \in \mathcal{P}$. Incorporating this constraint without much change in the theory is one of our novel contributions, and is discussed in appendix C.

Table 2: Optimal robust Bellman operator evaluation

| $U$ | $(T^*_U v)(s)$ | Remark |
|---|---|---|
| $U^s_p$ | $\min x$ s.t. $\big\| (Q_s - x\mathbf{1}) \circ \mathbb{1}(Q_s \geq x) \big\|_p = \sigma_q(v, s)$ | Solve by binary search |
| $U^s_1$ | $\max_k \frac{\sum_{i=1}^{k} Q(s, a_i) - \sigma_\infty(v, s)}{k}$ | Highest penalized average |
| $U^s_2$ | By Algorithm 1 | High mean and variance |
| $U^s_\infty$ | $\max_{a \in A} Q(s, a) - \sigma_1(v, s)$ | Best action |
| $U^{sa}_p$ | $\max_{a \in A} \big[ Q(s, a) - \alpha_{s,a} - \gamma \beta_{s,a} \kappa_q(v) \big]$ | Best penalized action |
| nr | $\max_a Q(s, a)$ | Best action |

where nr stands for non-robust MDP, $Q(s, a) = R_0(s, a) + \gamma \sum_{s'} P_0(s'|s, a)v(s')$, the Q-values are sorted so that $Q(s, a_1) \geq \cdots \geq Q(s, a_A)$, $\sigma_q(v, s) = \alpha_s + \gamma \beta_s \kappa_q(v)$, $Q_s = Q(s, \cdot)$, and $\circ$ is the Hadamard product.
3.2 S-rectangular $L_p$ robust Markov Decision Processes

In this subsection, we discuss the core contribution of this paper: the evaluation of robust Bellman operators for the s-rectangular uncertainty set. We begin by defining the s-rectangular $L_p$-constrained uncertainty set $U^s_p$ as

$$U^s_p := (P_0 + \mathcal{P}) \times (R_0 + \mathcal{R}),$$

where the noise sets are s-rectangular, $\mathcal{P} = \times_{s \in S} \mathcal{P}_s$ and $\mathcal{R} = \times_{s \in S} \mathcal{R}_s$, and each component is bounded in $L_p$ norm:

$$\mathcal{R}_s = \big\{ R_s : A \to \mathbb{R} \,\big|\, \|R_s\|_p \leq \alpha_s \big\}, \quad \text{and} \quad \mathcal{P}_s = \Big\{ P_s : S \times A \to \mathbb{R} \,\Big|\, \|P_s\|_p \leq \beta_s,\ \textstyle\sum_{s'} P_s(s', a) = 0\ \forall a \Big\},$$

with radius vectors $\alpha$ and small enough $\beta$. The result below shows that, compared to the sa-rectangular case, policy evaluation for the s-rectangular case has an extra dependence on the policy.
Theorem 2 (Policy Evaluation). The s-rectangular $L_p$ robust Bellman operator is equivalent to a reward-value-policy regularized (non-robust) Bellman operator:

$$(T^\pi_{U^s_p} v)(s) = -\big(\alpha_s + \gamma \beta_s \kappa_q(v)\big)\|\pi_s\|_q + \sum_a \pi(a|s)\Big[R_0(s, a) + \gamma \sum_{s'} P_0(s'|s, a)v(s')\Big],$$

where $\|\pi_s\|_q$ is the q-norm of the vector $\pi(\cdot|s) \in \Delta_A$.

Proof. The proof is in the appendix; the techniques are similar to those of its sa-rectangular counterpart.

The reward penalty in this case has an additional dependence on the norm of the policy, $\|\pi_s\|_q$. This norm is conceptually similar to the entropy regularizer $\sum_a \pi(a|s) \ln(\pi(a|s))$, which is widely studied in the literature [21, 13, 20, 14, 27], and to other regularizers such as the Tsallis-entropy regularizer $\sum_a \pi(a|s)\,\mathrm{tsallis}\big(\frac{1-\pi(a|s)}{2}\big)$, cosine-based regularizers, etc. Note: these regularizers, which are convex functions, are often used to promote stochasticity in the policy and thus improve exploration during learning. However, the above result shows another benefit of these regularizers: they can improve robustness, which in turn can lead to better generalization. In the literature, the above regularizers are scaled by an arbitrarily chosen constant; here we have a different constant $\alpha_s + \gamma \beta_s \kappa_q(v)$ for each state. This extra dependence makes policy improvement a more challenging task and thus presents a richer theory.
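As a sketch, one application of the operator of Theorem 2 can be written directly from the nominal model and the penalty term; it reuses `kappa` from the earlier sketch, and all names and shapes are our own illustrative choices.

```python
import numpy as np  # reuses kappa() from the earlier sketch

def s_rect_policy_backup(pi, P0, R0, v, alpha, beta, gamma, p):
    """One application of T^pi for the s-rectangular Lp set (Theorem 2).
    pi: (S, A) policy; P0: (S, A, S); R0: (S, A); alpha, beta: (S,)."""
    q = np.inf if p == 1 else (1.0 if p == np.inf else p / (p - 1))
    Q = R0 + gamma * (P0 @ v)                      # nominal Q-values
    # state-wise penalty: (alpha_s + gamma beta_s kappa_q(v)) * ||pi_s||_q
    penalty = (alpha + gamma * beta * kappa(v, q)) \
        * np.linalg.norm(pi, ord=q, axis=1)
    return (pi * Q).sum(axis=1) - penalty
```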
Theorem 3 (Policy Improvement). For any vector $v$ and state $s$, $(T^*_{U^s_p} v)(s)$ is the minimum value of $x$ that satisfies

$$\Big(\sum_a \big(Q(s, a) - x\big)^p \, \mathbb{1}\big(Q(s, a) \geq x\big)\Big)^{1/p} = \sigma, \qquad (8)$$

where $Q(s, a) = R_0(s, a) + \gamma \sum_{s'} P_0(s'|s, a)v(s')$ and $\sigma = \alpha_s + \gamma \beta_s \kappa_q(v)$.

Proof. The proof is in the appendix; the main steps are:

$$(T^*_{U^s_p} v)(s) = \max_\pi (T^\pi_{U^s_p} v)(s) \qquad \text{(from the definition)}$$
$$= \max_\pi \Big[(T^\pi_{(P_0,R_0)} v)(s) - \big(\alpha_s + \gamma \beta_s \kappa_q(v)\big)\|\pi_s\|_q\Big] \qquad \text{(using policy evaluation, Theorem 2)}$$
$$= \max_{\pi_s \in \Delta_A} \langle \pi_s, Q_s \rangle - \sigma \|\pi_s\|_q \qquad \text{(where } Q_s = Q(s, \cdot)\text{)}.$$

The solution to the above optimization problem is technically complex. Specifically, for $p = 2$, the solution is known as the water filling/pouring lemma [1]; we generalize it to the $L_p$ case in the appendix.

Algorithm 1 s-rectangular $L_2$ robust Bellman operator (see Algorithm 1 of [1])
Input: $v$, $s$, $x = Q(s, \cdot)$, and $\sigma = \alpha_s + \gamma \beta_s \kappa_q(v)$
Output: $(T^*_{U^s_2} v)(s)$
1: Sort $x$ such that $x_1 \geq x_2 \geq \cdots \geq x_A$.
2: Set $k = 0$ and $\lambda = x_1 - \sigma$.
3: while $k \leq A - 1$ and $\lambda \leq x_{k+1}$ do
4:   $k = k + 1$
5:   $\lambda = \frac{1}{k}\Big(\sum_{i=1}^{k} x_i - \sqrt{k\sigma^2 - k\sum_{i=1}^{k} x_i^2 + \big(\sum_{i=1}^{k} x_i\big)^2}\Big)$
6: end while
7: return $\lambda$

To better understand the nature of (8), let us look at the "sub-optimality distance" function $g$,

$$g(x) := \Big(\sum_a \big(Q(s, a) - x\big)^p \, \mathbb{1}\big(Q(s, a) \geq x\big)\Big)^{1/p}.$$

The quantity $g(x)$ is the cumulative difference between $x$ and the Q-values of those actions whose Q-value is greater than $x$. The function is monotonically decreasing; it is at least $\sigma$ at $x = \max_a Q(s, a) - \sigma$ and equal to zero for all $x \geq \max_a Q(s, a)$. Since $(T^*_{U^s_p} v)(s)$ is the value of $x$ at which the "sub-optimality distance" $g(x)$ equals the "uncertainty penalty" $\sigma$, (8) can be approximately solved using a binary search on the interval $[\max_a Q(s, a) - \sigma, \max_a Q(s, a)]$.
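The binary search just described can be sketched as follows: `g` implements the sub-optimality distance and the bracketing interval follows the monotonicity argument above. The function names and iteration count are our own.

```python
import numpy as np

def g(x, Qs, p):
    """Sub-optimality distance: Lp size of the Q-value mass above level x."""
    gap = np.maximum(Qs - x, 0.0)
    return (gap ** p).sum() ** (1.0 / p)

def s_rect_optimal_backup(Qs, sigma, p, iters=60):
    """Minimal x with g(x) = sigma, i.e. eq. (8): binary search on
    [max Q - sigma, max Q], where g decreases from >= sigma to 0."""
    lo, hi = Qs.max() - sigma, Qs.max()
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid, Qs, p) > sigma:
            lo = mid          # g too large: the root lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```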
We invite the reader to consider the dependence of $(T^*_{U^s_p} v)(s)$ on $p$, $\alpha_s$, and $\beta_s$; specifically:

1. If $\alpha_s = \beta_s = 0$ then $\sigma = 0$, which implies $(T^*_{U^s_p} v)(s) = \max_a Q(s, a)$, the same as the non-robust case.
2. If $p = \infty$ then $(T^*_{U^s_p} v)(s) = \max_a Q(s, a) - \sigma$, as in the sa-rectangular case.
3. For $p = 1, 2$, (8) becomes a linear and a quadratic equation, respectively, and hence can be solved exactly.
4. As $\alpha_s$ and $\beta_s$ increase, $\sigma$ increases, resulting in a decrease in $(T^*_{U^s_p} v)(s)$ at a rate that becomes smaller as $\sigma$ increases. When $\sigma$ is sufficiently small, $(T^*_{U^s_p} v)(s) = \max_a Q(s, a) - \sigma$.

The solution to (8) can be obtained in closed form for $p = 1, \infty$, exactly by Algorithm 1 for $p = 2$, and approximately by binary search for general $p$, as summarized in Table 2. In this section, we have demonstrated that robust Bellman operators can be efficiently evaluated for both sa- and s-rectangular $L_p$ robust MDPs, thus enabling efficient robust value iteration. In the following sections, we discuss the nature of optimal policies and the time complexity of robust value iteration. Finally, we present experiments validating the time complexity of robust value iteration.

4 Optimal Policies

In the previous sections, we discussed how to efficiently obtain the optimal robust value functions.
This section focuses on utilizing these optimal robust value functions to derive the optimal robust policy, using $\pi^*_U \in \arg\max_\pi T^\pi_U v^*_U$. This implies that the robust optimal policy $\pi^*_U(\cdot|s)$ at state $s$ is the policy $\pi$ that maximizes

$$\sum_a \pi(a|s) \min_{(P,R) \in U} \Big[R(s, a) + \gamma \sum_{s'} P(s'|s, a)v^*_U(s')\Big].$$

Table 3: Optimal Policy

| $U$ | $\pi^*_U(a \mid s) \propto$ | Remark |
|---|---|---|
| $U^s_p$ | $A(s, a)^{p-1}\,\mathbb{1}(A(s, a) \geq 0)$ | Top actions, proportional to the $(p-1)$-th power of the advantage |
| $U^s_1$ | $\mathbb{1}(A(s, a) \geq 0)$ | Top actions with uniform probability |
| $U^s_2$ | $A(s, a)\,\mathbb{1}(A(s, a) \geq 0)$ | Top actions, proportional to the advantage |
| $U^s_\infty$ | $\mathbb{1}(A(s, a) = 0)$ | Best action |
| $U^{sa}_p$ | $\mathbb{1}(A(s, a) = \max_{a'} A(s, a'))$ | Best regularized action |
| $(P_0, R_0)$ | $\mathbb{1}(A(s, a) = 0)$ | Non-robust MDP: best action |

where $Q(s, a) = R_0(s, a) + \gamma \sum_{s'} P_0(s'|s, a)v^*_U(s')$ and $A(s, a) = Q(s, a) - v^*_U(s)$.
A non-robust MDP admits a deterministic optimal policy that maximizes the optimal Q-value $Q(s, a) := R(s, a) + \gamma \sum_{s'} P(s'|s, a)v^*_{(P,R)}(s')$. Sa-rectangular robust MDPs are known to admit a deterministic optimal robust policy [18, 24]. Moreover, from Theorem 1, it is clear that an sa-rectangular $L_p$ robust MDP has a deterministic optimal robust policy that maximizes the regularized Q-value

$$Q(s, a) = -\alpha_{s,a} - \gamma \beta_{s,a} \kappa_q(v) + R_0(s, a) + \gamma \sum_{s'} P_0(s'|s, a)v^*_{U^{sa}_p}(s').$$

S-rectangular robust MDPs: For this case, it is known that all the optimal robust policies can be stochastic [32]; however, it was not previously known what the nature of this stochasticity was. The result below provides the first explicit characterization of robust optimal policies.

Theorem 4. The optimal robust policy $\pi^*_{U^s_p}$ can be computed from the optimal robust value function as

$$\pi^*_{U^s_p}(a|s) \propto \big[Q(s, a) - v^*_{U^s_p}(s)\big]^{p-1}\,\mathbb{1}\big(Q(s, a) \geq v^*_{U^s_p}(s)\big),$$

where $Q(s, a) = R_0(s, a) + \gamma \sum_{s'} P_0(s'|s, a)v^*_{U^s_p}(s')$.

The above policy is a threshold policy that plays only actions with positive advantage, with probability proportional to a power of the advantage function, giving more weight to actions with higher advantages and avoiding actions that are not useful. This policy is different from the optimal policy in soft Q-learning with entropy regularization, which is a softmax policy of the form $\pi(a|s) \propto e^{\eta(Q(s,a) - v(s))}$ [14, 21, 27].
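For finite $p$, Theorem 4 turns into a short normalization step; the following is a hedged sketch (names are our own), where the $p = 1$ case recovers the uniform rule of Table 3 via the convention $0^0 = 1$, and the $p = \infty$ (best-action) case is not covered.

```python
import numpy as np

def optimal_robust_policy(Q, v_star, p):
    """Threshold policy of Theorem 4 for s-rectangular Lp sets:
    pi(a|s) proportional to advantage^(p-1) on positive-advantage actions.
    Q: (S, A) optimal robust Q-values; v_star: (S,) optimal robust values."""
    adv = Q - v_star[:, None]                    # advantage A(s, a)
    w = np.where(adv >= 0, np.maximum(adv, 0.0) ** (p - 1), 0.0)
    return w / w.sum(axis=1, keepdims=True)      # normalize per state
```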
Algorithm 2 Online s-rectangular $L_p$ robust value iteration
Input: Initialize $Q$, $v$ randomly, $s_0 \sim \mu$, and $n = 0$.
Output: $v = v^*_{U^s_p}$.
1: while not converged; $n = n + 1$ do
2:   Estimate $\kappa_q(v)$ using Table 1.
3:   Approximate $(T^*_{U^s_p} v)(s_n)$ using Table 2 and update $v(s_n) = v(s_n) + \eta_n\big[(T^*_{U^s_p} v)(s_n) - v(s_n)\big]$.
4:   Play action $a_n = a$ with probability proportional to $[Q(s_n, a) - v(s_n)]^{p-1}\,\mathbb{1}(Q(s_n, a) \geq v(s_n))$, and get the next state $s_{n+1}$ from the environment.
5:   Update the Q-value: $Q(s_n, a_n) = Q(s_n, a_n) + \eta'_n\big[R(s_n, a_n) + \gamma v(s_{n+1}) - Q(s_n, a_n)\big]$.
6: end while

Table 4: Relative running cost (time) of value iteration

| $S$ | $A$ | nr | $U^{sa}_1$ LP | $U^s_1$ LP | $U^{sa}_1$ | $U^{sa}_2$ | $U^{sa}_\infty$ | $U^s_1$ | $U^s_2$ | $U^s_\infty$ | $U^{sa}_{10}$ | $U^s_{10}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10 | 10 | 1 | 1438 | 72625 | 1.7 | 1.5 | 1.5 | 1.4 | 2.6 | 1.4 | 5.5 | 33 |
| 30 | 10 | 1 | 6616 | 629890 | 1.3 | 1.4 | 1.4 | 1.5 | 2.8 | 3.0 | 5.2 | 78 |
| 50 | 10 | 1 | 6622 | 4904004 | 1.5 | 1.9 | 1.3 | 1.2 | 2.4 | 2.2 | 4.1 | 41 |
| 100 | 20 | 1 | 16714 | NA | 1.4 | 1.5 | 1.5 | 1.1 | 2.1 | 1.5 | 3.2 | 41 |

where nr stands for non-robust MDP.
To the best of our knowledge, this type of policy has not been presented in the literature before. The special cases of the above theorem for $p = 1, 2, \infty$, along with others, are summarized in Table 3.

5 Time complexity

In this section, we examine the time complexity of robust value iteration, $v_{n+1} := T^*_U v_n$, for different $L_p$ robust MDPs, assuming knowledge of the nominal values $(P_0, R_0)$. Since the optimal robust Bellman operator $T^*_U$ is a $\gamma$-contraction operator [32], it requires only $O(\log(\frac{1}{\epsilon}))$ iterations to obtain an $\epsilon$-close approximation of the optimal robust value. The main challenge is to calculate the cost of one iteration. The evaluation of the optimal robust Bellman operators in Theorem 1 and Theorem 3 has three main components: A) computing $\kappa_q(v)$, which can be done differently depending on the value of $p$, as shown in Table 1;

Table 5: Time complexity

| $U$ | Total cost $O(\cdot)$ |
|---|---|
| Non-robust MDP | $\log(1/\epsilon)\,S^2A$ |
| $U^{sa}_1$ | $\log(1/\epsilon)\,S^2A$ |
| $U^{sa}_2$ | $\log(1/\epsilon)\,S^2A$ |
| $U^{sa}_\infty$ | $\log(1/\epsilon)\,S^2A$ |
| $U^s_1$ | $\log(1/\epsilon)\,(S^2A + SA\log(A))$ |
| $U^s_2$ | $\log(1/\epsilon)\,(S^2A + SA\log(A))$ |
| $U^s_\infty$ | $\log(1/\epsilon)\,S^2A$ |
| $U^{sa}_p$ | $\log(1/\epsilon)\,(S^2A + S\log(S/\epsilon))$ |
| $U^s_p$ | $\log(1/\epsilon)\,(S^2A + SA\log(A/\epsilon))$ |
| Convex $U$ | Strongly NP-hard |
The evaluation of the optimal robust Bellman operators in Theorem 1 and Theorem 3 has three main components: A) computing κ_p(v), which is done differently depending on the value of p, as shown in Table 1; B) computing the Q-value from v, which requires O(S²A) operations in all cases; and C) evaluating the optimal robust Bellman operator from the Q-values, which requires operations such as sorting the Q-values, computing the best action, or performing a binary search, as shown in Table 2. The overall complexity of the evaluation is presented in Table 5, with the proofs provided in Appendix L.

Table 5: Time complexity

| Uncertainty set | Total cost O(·) |
|-----------------|-----------------|
| Non-robust MDP  | log(1/ϵ) S²A |
| U^sa_1          | log(1/ϵ) S²A |
| U^sa_2          | log(1/ϵ) S²A |
| U^sa_∞          | log(1/ϵ) S²A |
| U^s_1           | log(1/ϵ) (S²A + SA log A) |
| U^s_2           | log(1/ϵ) (S²A + SA log A) |
| U^s_∞           | log(1/ϵ) S²A |
| U^sa_p          | log(1/ϵ) (S²A + S log(S/ϵ)) |
| U^s_p           | log(1/ϵ) (S²A + SA log(A/ϵ)) |
| Convex U        | Strongly NP-hard |

We observe that when the state space S is large, the complexity of the robust MDPs matches that of non-robust MDPs: in the limit S → ∞ (with the action space A and tolerance ϵ held constant), every Lp robust MDP above has the same complexity as a non-robust MDP. This is verified by our experiments, and we conclude that Lp robust MDPs are as easy to solve as non-robust MDPs.

6 Experiments

In this section, we present numerical results that demonstrate the effectiveness of our methods and verify our theoretical claims. Table 4 and Figure 1 report the relative cost (time) of robust value iteration compared to non-robust value iteration, for randomly generated kernels and reward functions with varying numbers of states S and actions A. The results show that s- and sa-rectangular robust MDPs are indeed costly to solve with generic numerical methods such as Linear Programming (LP). Our methods, in contrast, perform similarly to non-robust MDPs, especially for p = 1, 2, ∞. For general p, a binary search is required to reach an acceptable tolerance, which takes 30–50 iterations and leads to slightly longer computation times. As our complexity analysis predicts, the relative cost of value iteration converges to 1 as the number of states increases with the number of actions fixed. This is confirmed by Figure 1.
The rate of convergence in all tested settings matched that of the non-robust setting, as predicted by the theory. The experiments were run several times, which introduces some stochasticity into the results, but the trend is clear. Further details can be found in Section G.

Figure 1: Relative cost of value iteration w.r.t. non-robust MDP at different S with fixed A = 10. [Plot: relative cost to non-robust MDPs (y-axis, roughly 1.0–1.5) against the number of states (x-axis, 0–1400), with curves for U^sa_1, U^s_1, and the non-robust baseline.]

7 Conclusion and future work

We present an efficient robust value iteration for s-rectangular Lp robust MDPs. Our method can easily be adapted to an online setting, as shown in Algorithm 2 for s-rectangular Lp robust MDPs. Algorithm 2 is a two-time-scale algorithm: the Q-values are approximated on a faster time scale, and the value function is approximated from the Q-values on a slower time scale. The p-variance function κ_p can be estimated in an online fashion using batches or other, more sophisticated methods. The convergence of the algorithm can be guaranteed from [8]; however, its analysis is left for future work.
Additionally, we introduce a novel value regularizer (κ_p) and a novel threshold policy, which may help in obtaining more robust and generalizable policies. Further research could focus on other types of uncertainty sets, potentially resulting in different kinds of regularizers and optimal policies.

References

[1] Oren Anava and Kfir Levy. k*-nearest neighbors: From global to local. Advances in Neural Information Processing Systems, 29, 2016.

[2] Mahsa Asadi, Mohammad Sadegh Talebi, Hippolyte Bourel, and Odalric-Ambrym Maillard. Model-based reinforcement learning exploiting state-action equivalence. CoRR, abs/1910.04077, 2019.

[3] Peter Auer, Thomas Jaksch, and Ronald Ortner. Near-optimal regret bounds for reinforcement learning. In Advances in Neural Information Processing Systems, volume 21. Curran Associates, Inc., 2008.
[4] Peter Auer and Ronald Ortner. Logarithmic online regret bounds for undiscounted reinforcement learning. In Advances in Neural Information Processing Systems, volume 19. MIT Press, 2006.

[5] Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 263–272. PMLR, 2017.

[6] J. Andrew Bagnell, Andrew Y. Ng, and Jeff G. Schneider. Solving uncertain Markov decision processes. Technical report, Carnegie Mellon University, 2001.

[7] Bahram Behzadian, Marek Petrik, and Chin Pang Ho. Fast algorithms for L∞-constrained s-rectangular robust MDPs. In Advances in Neural Information Processing Systems, volume 34, pages 25982–25992. Curran Associates, Inc., 2021.
[8] Vivek Borkar. Stochastic Approximation: A Dynamical Systems Viewpoint. 2008.

[9] Esther Derman, Matthieu Geist, and Shie Mannor. Twice regularized MDPs and the equivalence between robustness and regularization, 2021.

[10] Esther Derman and Shie Mannor. Distributional robustness and regularization in reinforcement learning, 2020.

[11] Benjamin Eysenbach and Sergey Levine. Maximum entropy RL (provably) solves some robust RL problems, 2021.

[12] Vineet Goyal and Julien Grand-Clément. Robust Markov decision process: Beyond rectangularity, 2018.

[13] Jean-Bastien Grill, Omar Darwiche Domingues, Pierre Menard, Remi Munos, and Michal Valko. Planning in entropy-regularized Markov decision processes and games. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[14] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies, 2017.

[15] Grani Adiwena Hanasusanto and Daniel Kuhn. Robust data-driven dynamic programming. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc., 2013.
[16] Chin Pang Ho, Marek Petrik, and Wolfram Wiesemann. Partial policy iteration for L1-robust Markov decision processes, 2020.

[17] Hisham Husain, Kamil Ciosek, and Ryota Tomioka. Regularized policies are reward robust, 2021.

[18] Garud N. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257–280, May 2005.

[19] David L. Kaufman and Andrew J. Schaefer. Robust modified policy iteration. INFORMS Journal on Computing, 25:396–410, 2013.

[20] Xiang Li, Wenhao Yang, and Zhihua Zhang. A regularized approach to sparse optimal policy in reinforcement learning. Curran Associates Inc., Red Hook, NY, USA, 2019.
[21] Tien Mai and Patrick Jaillet. Robust entropy-regularized Markov decision processes, 2021.

[22] Shie Mannor, Ofir Mebel, and Huan Xu. Robust MDPs with k-rectangular uncertainty. Mathematics of Operations Research, 41(4):1484–1509, November 2016.

[23] Shie Mannor, Duncan Simester, Peng Sun, and John N. Tsitsiklis. Bias and variance in value function estimation. In Proceedings of the Twenty-First International Conference on Machine Learning (ICML '04), page 72, New York, NY, USA, 2004. Association for Computing Machinery.

[24] Arnab Nilim and Laurent El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53:780–798, 2005.
[25] Charles Packer, Katelyn Gao, Jernej Kos, Philipp Krähenbühl, Vladlen Koltun, and Dawn Song. Assessing generalization in deep reinforcement learning, 2018.

[26] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley Series in Probability and Statistics, 1994.

[27] John Schulman, Xi Chen, and Pieter Abbeel. Equivalence between policy gradients and soft Q-learning, 2017.

[28] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018.

[29] Aviv Tamar, Shie Mannor, and Huan Xu. Scaling up robust MDPs using function approximation. In Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 181–189, Beijing, China, 2014. PMLR.

[30] Yue Wang and Shaofeng Zou. Online robust reinforcement learning with model uncertainty, 2021.
[31] Yue Wang and Shaofeng Zou. Policy gradient method for robust reinforcement learning, 2022.

[32] Wolfram Wiesemann, Daniel Kuhn, and Berç Rustem. Robust Markov decision processes. Mathematics of Operations Research, 38(1):153–183, 2013.

[33] Huan Xu and Shie Mannor. Robustness and generalization, 2010.

[34] Chenyang Zhao, Olivier Sigaud, Freek Stulp, and Timothy M. Hospedales. Investigating generalisation in continuous deep reinforcement learning, 2019.

How to read the appendix

1. Section A contains the related work.
2. Section B contains additional properties and results that could not be included in the main text for reasons of clarity and space. Many of the results in the main paper are special cases of the results in this section.
3. Section C contains the discussion of zero transition kernels (forbidden transitions).
4. Section D contains a possible connection of this work to UCRL.
5. Section G contains additional experimental results and a detailed discussion.
6. All proofs for the main body of the paper are presented in Sections K and L.
7. Section I contains helper results for Section K. In particular, it discusses the p-mean function ω_p and the p-variance function κ_p.
8. Section J contains helper results for Section K. In particular, it discusses the Lp water pouring lemma, which is necessary to evaluate the robust optimal Bellman operator for s-rectangular Lp robust MDPs.
9. Section L contains the time complexity proofs for the model-based algorithms.
10. Section E develops Q-learning machinery for (sa)-rectangular Lp robust MDPs based on the results in the main section. It is not used in the main body or anywhere else, but it provides a good understanding of the algorithms proposed in Section F for the (sa)-rectangular case.
11. Section F contains model-based algorithms for s- and (sa)-rectangular Lp robust MDPs. It also contains remarks on the special cases p = 1, 2, ∞.
A Related Work

R-Contamination Uncertainty Robust MDPs. The paper [30] considers the following uncertainty set, for some fixed constant 0 ≤ R ≤ 1:

P_{s,a} = {(1 − R) P0(·|s, a) + R P | P ∈ ∆_S},  s ∈ S, a ∈ A,   (9)

with P = ⊗_{s,a} P_{s,a} and U = {R0} × P. The robust value function v^π_U is the fixed point of the robust Bellman operator defined as

(T^π_U v)(s) := min_{P∈P} Σ_a π(a|s) [R0(s, a) + γ Σ_{s′} P(s′|s, a) v(s′)]   (10)
             = Σ_a π(a|s) [R0(s, a) − γR max_{s′} v(s′) + γ(1 − R) Σ_{s′} P0(s′|s, a) v(s′)],   (11)

and the optimal robust value function v*_U is the fixed point of the optimal robust Bellman operator defined as

(T*_U v)(s) := max_π min_{P∈P} Σ_a π(a|s) [R0(s, a) + γ Σ_{s′} P(s′|s, a) v(s′)]   (12)
            = max_a [R0(s, a) − γR max_{s′} v(s′) + γ(1 − R) Σ_{s′} P0(s′|s, a) v(s′)].   (13)

Since the uncertainty set is sa-rectangular, the map is a contraction [24], so robust value iteration converges linearly, just as for non-robust MDPs. It is also possible to obtain a Q-learning scheme:

Q_{n+1}(s, a) = R0(s, a) − γR max_{s′,a′} Q_n(s′, a′) + γ(1 − R) Σ_{s′} P0(s′|s, a) max_{a′} Q_n(s′, a′).   (14)

Convergence of this Q-learning follows from the contraction of robust value iteration, and it is easy to see that a model-free variant can be obtained from the above. A follow-up work [31] proposes a policy gradient method for the same setting.
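A minimal sketch of the optimal robust Bellman update (13) as value iteration; the dense-array representation of the nominal model (P0, R0) and all names are our own assumptions. The Q-iteration (14) is recovered by returning Q instead of v.

```python
import numpy as np

def r_contamination_value_iteration(P0, R0, gamma, R, n_iters=1000):
    """Iterate eq. (13): v(s) = max_a [R0(s,a) - gamma*R*max_s' v(s')
                                       + gamma*(1-R)*sum_s' P0(s'|s,a) v(s')].
    P0: (S, A, S) nominal kernel, R0: (S, A) nominal reward, 0 <= R <= 1."""
    v = np.zeros(P0.shape[0])
    for _ in range(n_iters):
        # the regularizer gamma*R*max_s v(s) is policy-independent,
        # so the update remains a plain max over actions
        Q = R0 - gamma * R * v.max() + gamma * (1 - R) * P0 @ v   # shape (S, A)
        v = Q.max(axis=1)
    return v
```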
Proposition 1. (Theorem 3.3 of [31]) Consider a class of policies Π satisfying Assumption 3.2 of [31]. The gradient of the robust return is given by

∇ρ_{π_θ} = γR / ((1 − γ)(1 − γ + γR)) Σ_{s,a} d^{π_θ}_μ(s, a) ∇π_θ(a|s) Q^{π_θ}_U(s, a) + 1/(1 − γ + γR) Σ_{s,a} d^{π_θ}_{s_θ}(s, a) ∇π_θ(a|s) Q^{π_θ}_U(s, a),

where s_θ ∈ arg max_s v^{π_θ}_U(s) and Q^π_U(s, a) = R0(s, a) − γR max_{s′} v^π_U(s′) + γ(1 − R) Σ_{s′} P0(s′|s, a) v^π_U(s′).

The work shows that the proposed robust policy gradient method converges asymptotically to the global optimum under direct policy parameterization. The uncertainty set considered here is sa-rectangular, as the uncertainty in each state-action pair is independent; hence the regularizer term (γR max_s v(s)) is independent of the policy, and the optimal (and greedy) policy is deterministic. It is unclear how this uncertainty set could be generalized to the s-rectangular case. Observe that the above results closely resemble our sa-rectangular L1 robust MDP results.

Twice Regularized MDPs. The paper [9] converts robust MDPs into twice regularized MDPs and proposes a gradient-based policy iteration method for solving them.

Proposition 2. (Corollary 3.1 of [9]) (s-rectangular reward robust policy evaluation) Let the uncertainty set be U = (R0 + R) × {P0}, where R_s = {r_s ∈ R^A | ∥r_s∥ ≤ α_s} for all s ∈ S. Then the robust value function v^π_U is the optimal solution of the convex optimization problem

max_{v∈R^S} ⟨μ, v⟩  s.t.  v(s) ≤ (T^π_{R0,P0} v)(s) − α_s ∥π_s∥  ∀s ∈ S.

The paper then derives a policy gradient for reward robust MDPs to obtain the optimal robust policy π*_U.

Proposition 3. (Proposition 3.2 of [9]) (s-rectangular reward robust policy gradient) Let the uncertainty set be U = (R0 + R) × {P0}, where R_s = {r_s ∈ R^A | ∥r_s∥ ≤ α_s} for all s ∈ S.
Then the gradient of the reward robust objective ρ^π_U := ⟨μ, v^π_U⟩ is given by

∇ρ^π_U = E_{(s,a)∼d^π_{P0}} [ ∇ ln π(a|s) ( Q^π_U(s, a) − α_s π(a|s)/∥π_s∥ ) ],

where Q^π_U(s, a) := min_{(R,P)∈U} [R(s, a) + γ Σ_{s′} P(s′|s, a) v^π_U(s′)].

Proposition 4. (Corollary 4.1 of [9]) (s-rectangular general robust policy evaluation) Let the uncertainty set be U = (R0 + R) × (P0 + P), where R_s = {r_s ∈ R^A | ∥r_s∥ ≤ α_s} and P_s = {P_s ∈ R^{S×A} | ∥P_s∥ ≤ β_s} for all s ∈ S. Then the robust value function v^π_U is the optimal solution of the convex optimization problem

max_{v∈R^S} ⟨μ, v⟩  s.t.  v(s) ≤ (T^π_{R0,P0} v)(s) − α_s ∥π_s∥ − γβ_s ∥v∥ ∥π_s∥  ∀s ∈ S.

As in the reward robust case, the paper tries to find a policy gradient method that yields the optimal robust policy. Unfortunately, the dependence of the regularizer terms on the value function makes this a very difficult task. Hence the paper proposes the R2MPI algorithm (Algorithm 1 of [9]), which performs the greedy step via projection onto the simplex using a black-box solver. Note that the above proposition is not the same as our policy evaluation (although it looks similar): it requires extra assumptions (Assumption 5.1 of [9]) and considerable work to ensure that the R2 Bellman operator is a contraction. In our case, we directly evaluate the robust Bellman operator, which is already known to be a contraction, so we require neither extra assumptions nor the additional machinery of [9]. Our work improves on [9] by explicitly solving both policy evaluation and policy improvement in general robust MDPs. It also makes more realistic assumptions on the transition kernel uncertainty set.
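For comparison, here is a sketch of evaluating a fixed policy with the twice-regularized operator suggested by Proposition 4, iterating v ← T^π_{R0,P0} v − α_s∥π_s∥ − γβ_s∥v∥∥π_s∥. The concrete choice of the q-norm for both penalty terms and the plain fixed-point iteration (which [9] justifies only under their extra assumptions) are our own.

```python
import numpy as np

def r2_policy_evaluation(P0, R0, pi, gamma, alpha, beta, q=2.0, n_iters=1000):
    """Fixed-point iteration sketch for the twice-regularized evaluation
    of Proposition 4. pi: (S, A) policy; alpha, beta: (S,) radii."""
    v = np.zeros(P0.shape[0])
    pi_norm = np.linalg.norm(pi, ord=q, axis=1)       # ||pi_s|| per state
    for _ in range(n_iters):
        Q = R0 + gamma * P0 @ v                       # non-robust Q-values at v
        t_pi_v = (pi * Q).sum(axis=1)                 # (T^pi_{R0,P0} v)(s)
        v = t_pi_v - alpha * pi_norm - gamma * beta * np.linalg.norm(v, q) * pi_norm
    return v
```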
Regularizers solve Robust MDPs. The work [11] looks in the opposite direction from ours: it investigates the impact of the popular entropy regularizer on robustness. It finds that MaxEnt can be used to maximize a lower bound on a certain (reward) robust RL objective. We note that ∥π_s∥_q behaves like an entropy term in our regularization. Further, our work also deals with uncertainty in the transition kernel, in addition to uncertainty in the reward function.

Upper Confidence RL. The upper confidence setting of [4, 3] is very similar to our Lp robust setting; we refer to the discussion in Section D.

B S-rectangular: More Properties

Definition 1. We begin with the following notational definitions.

1. The Q-value at a value function v is defined as Q_v(s, a) := R0(s, a) + γ Σ_{s′} P0(s′|s, a) v(s′).
2. The optimal Q-value is defined as Q*_U(s, a) := R0(s, a) + γ Σ_{s′} P0(s′|s, a) v*_U(s′).
3. With a little abuse of notation, Q(s, a_i) denotes the i-th best value in state s, that is, Q(s, a_1) ≥ Q(s, a_2) ≥ ⋯ ≥ Q(s, a_A).
4. π^v_U denotes the greedy policy at value function v, that is, T*_U v = T^{π^v_U}_U v.
5. χ_p(s) denotes the number of active actions in state s in s-rectangular Lp robust MDPs, defined as χ_p(s) := |{a | π*_{U^s_p}(a|s) > 0}|.
6. χ_p(v, s) denotes the number of active actions in state s at value function v in s-rectangular Lp robust MDPs, defined as χ_p(v, s) := |{a | π^v_{U^s_p}(a|s) > 0}|.

We saw above that the optimal policy in s-rectangular robust MDPs may be stochastic. An action with a non-negative advantage is active, and the rest are inactive. Let χ_p(s) be the number of active actions in state s:

χ_p(s) := |{a | π*_{U^s_p}(a|s) > 0}| = |{a | Q*_{U^s_p}(s, a) ≥ v*_{U^s_p}(s)}|.   (15)

The last equality follows from Theorem 4. One direct relation between the Q-value and the value function is

v*_{U^s_p}(s) = Σ_a π*_{U^s_p}(a|s) [ −(α_s + γβ_s κ_q(v*_{U^s_p})) ∥π*_{U^s_p}(·|s)∥_q + Q*_{U^s_p}(s, a) ].   (16)

This relation is much more convoluted than in the non-robust and sa-rectangular robust cases. The property below illuminates a cleaner relation.

Property 1. (Optimal Value vs Q-value) v*_{U^s_p}(s) is sandwiched between the Q-values of the χ_p(s)-th and (χ_p(s)+1)-th best actions, that is,

Q*_{U^s_p}(s, a_{χ_p(s)+1}) < v*_{U^s_p}(s) ≤ Q*_{U^s_p}(s, a_{χ_p(s)}).

This is a special case of Property 2; similarly, Table 6 is a special case of Table 8.

Table 6: Optimal value function and Q-value

| v*(s) = max_a Q*(s, a) | Best value |
| v*_{U^sa_p}(s) = max_a [ −α_{s,a} − γβ_{s,a} κ_q(v*_{U^sa_p}) + Q*_{U^sa_p}(s, a) ] | Best regularized value |
| Q*_{U^s_p}(s, a_{χ_p(s)+1}) < v*_{U^s_p}(s) ≤ Q*_{U^s_p}(s, a_{χ_p(s)}) | Sandwich! |
where v* and Q* are the optimal value function and Q-value of the non-robust MDP, respectively. The same holds for the non-optimal Q-value and value function.

Theorem 5. (Greedy policy) The greedy policy π^v_{U^s_p} is a threshold policy proportional to the advantage function, that is,

π^v_{U^s_p}(a|s) ∝ ( Q_v(s, a) − (T*_{U^s_p} v)(s) )^{p−1} 1( Q_v(s, a) ≥ (T*_{U^s_p} v)(s) ).

The theorem is proved in the appendix; Theorem 4 is its special case, and Table 3 is likewise a special case of Table 7.

Table 7: Greedy policy at value function v

| U | π^v_U(a|s) ∝ | remark |
|---|--------------|--------|
| U^s_p | (Q_v(s, a) − (T*_U v)(s))^{p−1} 1(A^v_U(s, a) ≥ 0) | top actions, proportional to the (p−1)-th power of the advantage |
| U^s_1 | 1(A^v_U(s, a) ≥ 0) / Σ_a 1(A^v_U(s, a) ≥ 0) | top actions with uniform probability |
| U^s_2 | A^v_U(s, a) 1(A^v_U(s, a) ≥ 0) / Σ_a A^v_U(s, a) 1(A^v_U(s, a) ≥ 0) | top actions, proportional to the advantage |
| U^s_∞ | arg max_{a∈A} Q_v(s, a) | best action |
| U^sa_p | arg max_a [ −α_{s,a} − γβ_{s,a} κ_q(v) + Q_v(s, a) ] | best action |

where A^v_U(s, a) = Q_v(s, a) − (T*_U v)(s) and Q_v(s, a) = R0(s, a) + γ Σ_{s′} P0(s′|s, a) v(s′).
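A sketch of the greedy threshold policy of Theorem 5 (first three rows of Table 7), given the Q-values Q_v(s, ·) and the scalar λ = (T*_{U^s_p} v)(s), computed, e.g., by Algorithm 3 below; treating λ as a precomputed input is our own framing.

```python
import numpy as np

def threshold_policy(Q_s, lam, p):
    """pi(a|s) proportional to (Q_v(s,a) - lam)^(p-1) on actions with
    Q_v(s,a) >= lam. p=1 gives a uniform distribution over active
    actions, p=2 weights them by their advantage, and large p
    concentrates on the best action."""
    active = Q_s >= lam
    w = np.where(active, np.maximum(Q_s - lam, 0.0) ** (p - 1), 0.0)
    if w.sum() == 0.0:             # degenerate tie: lam equals the best Q-value
        w = active.astype(float)
    return w / w.sum()
```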
Theorem 5 states that the greedy policy plays exactly the actions with positive advantage, so we have
$$\chi_p(v,s) := \big|\{a \mid \pi^v_{\mathcal{U}^s_p}(a|s) > 0\}\big| = \big|\{a \mid Q^v(s,a) \geq (T^*_{\mathcal{U}^s_p}v)(s)\}\big|. \tag{17}$$

Property 2. (Greedy Value vs Q-value) $(T^*_{\mathcal{U}^s_p}v)(s)$ is bounded by the Q-values of the $\chi_p(v,s)$th and $(\chi_p(v,s)+1)$th actions, that is,
$$Q^v(s, a_{\chi_p(v,s)+1}) < (T^*_{\mathcal{U}^s_p}v)(s) \leq Q^v(s, a_{\chi_p(v,s)}).$$

Table 8: Greedy value function and Q-value
| $(T^*v)(s) = \max_a Q^v(s,a)$ | Best value |
| $(T^*_{\mathcal{U}^{sa}_p}v)(s) = \max_a\big[-\alpha_{s,a} - \gamma\beta_{s,a}\kappa_q(v) + Q^v(s,a)\big]$ | Best regularized value |
| $Q^v(s, a_{\chi_p(v,s)+1}) < (T^*_{\mathcal{U}^s_p}v)(s) \leq Q^v(s, a_{\chi_p(v,s)})$ | Sandwich! |

where $Q^v(s,a_1) \geq \cdots \geq Q^v(s,a_A)$.

The property below states that we can compute the number of active actions $\chi_p(v,s)$ (and $\chi_p(s)$) directly, without computing the greedy (optimal) policy.

Property 3. $\chi_p(v,s)$ is the number of actions with positive advantage, that is,
$$\chi_p(v,s) := \max\Big\{k \;\Big|\; \sum_{i=1}^{k}\big(Q^v(s,a_i) - Q^v(s,a_k)\big)^p \leq \sigma^p\Big\},$$
where $\sigma = \alpha_s + \gamma\beta_s\kappa_q(v)$ and $Q^v(s,a_1) \geq Q^v(s,a_2) \geq \cdots \geq Q^v(s,a_A)$.

When the uncertainty radii $(\alpha_s, \beta_s)$ are zero (so $\sigma = 0$), then $\chi_p(v,s) = 1$ for all $v, s$, which means the greedy policy takes the single best action. In other words, all the robust results reduce to the non-robust results of Section 2.2 as the uncertainty radius goes to zero.
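Property 3 can be turned into a direct computation. The following sketch (our own naming; `q_sa` holds $Q^v(s,\cdot)$ and `sigma` the budget $\alpha_s + \gamma\beta_s\kappa_q(v)$) counts the active actions exactly as the property prescribes.

```python
import numpy as np

def count_active_actions(q_sa, sigma, p):
    # Property 3: chi_p(v, s) = max{ k : sum_{i<=k} (Q_i - Q_k)^p <= sigma^p }
    # with the Q-values sorted in decreasing order.
    q = np.sort(q_sa)[::-1]
    chi = 1                                   # k = 1 always satisfies the bound
    for k in range(2, len(q) + 1):
        if np.sum((q[:k] - q[k - 1]) ** p) <= sigma ** p:
            chi = k
        else:
            break
    return chi
```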
Algorithm 3 Computing the s-rectangular $L_p$ robust optimal Bellman operator
1: Input: $\sigma = \alpha_s + \gamma\beta_s\kappa_q(v)$, $Q(s,a) = R_0(s,a) + \gamma\sum_{s'} P_0(s'|s,a)v(s')$.
2: Output: $(T^*_{\mathcal{U}^s_p}v)(s)$, $\chi_p(v,s)$.
3: Sort $Q(s,\cdot)$ and label actions such that $Q(s,a_1) \geq Q(s,a_2) \geq \cdots$.
4: Set the initial value guess $\lambda_1 = Q(s,a_1) - \sigma$ and the counter $k = 1$.
5: while $k \leq A-1$ and $\lambda_k \leq Q(s,a_k)$ do
6:   Increment the counter: $k = k+1$.
7:   Take $\lambda_k$ to be a solution of
$$\sum_{i=1}^{k}\big(Q(s,a_i) - x\big)^p = \sigma^p, \qquad x \leq Q(s,a_k). \tag{18}$$
8: end while
9: Return: $\lambda_k$, $k$.

C Revisiting the kernel noise assumption

Sa-Rectangular Uncertainty. Suppose that at state $s$, we know it is impossible to transition to some states (the forbidden states $F_{s,a}$) under some action. That is, we have a transition uncertainty set $\mathcal{P}$ and a nominal kernel $P_0$ such that
$$P_0(s'|s,a) = P(s'|s,a) = 0, \quad \forall P \in \mathcal{P},\ \forall s' \in F_{s,a}. \tag{19}$$
Then we define the kernel noise as
$$\mathcal{P}_{s,a} = \Big\{P \;\Big|\; \|P\|_p = \beta_{s,a},\ \sum_{s'} P(s') = 0,\ P(s'') = 0\ \forall s'' \in F_{s,a}\Big\}. \tag{20}$$
In this case, our p-variance function is redefined as
$$\kappa_p(v,s,a) = -\frac{1}{\beta_{s,a}}\min_{P \in \mathcal{P}_{s,a}} \langle P, v\rangle \tag{21}$$
$$= \min_{\omega\in\mathbb{R}} \|u - \omega\mathbb{1}\|_p, \quad \text{where } u(s) = v(s)\,\mathbb{1}(s \notin F_{s,a}), \tag{22}$$
$$= \kappa_p(u). \tag{23}$$
This basically says that only the states which are allowed (not forbidden) enter the calculation of the p-variance. For example, we have
$$\kappa_\infty(v,s,a) = \frac{\max_{s\notin F_{s,a}} v(s) - \min_{s\notin F_{s,a}} v(s)}{2}. \tag{24}$$
So Theorem 1 of the main paper can be restated as follows.

Theorem 6. (Restated) The (sa)-rectangular $L_p$ robust Bellman operator is equivalent to a reward-regularized (non-robust) Bellman operator. That is, using $\kappa_p$ above, we have
$$(T^\pi_{\mathcal{U}^{sa}_p}v)(s) = \sum_a \pi(a|s)\Big[-\alpha_{s,a} - \gamma\beta_{s,a}\kappa_q(v,s,a) + R_0(s,a) + \gamma\sum_{s'}P_0(s'|s,a)v(s')\Big],$$
$$(T^*_{\mathcal{U}^{sa}_p}v)(s) = \max_{a\in\mathcal{A}}\Big[-\alpha_{s,a} - \gamma\beta_{s,a}\kappa_q(v,s,a) + R_0(s,a) + \gamma\sum_{s'}P_0(s'|s,a)v(s')\Big].$$
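A sketch of the modified p-variance with forbidden transitions. We read eqs. (21)-(23) as restricting the minimization to the allowed states, which is what the $\kappa_\infty$ example (24) computes; `forbidden` is a hypothetical boolean mask over states, and scipy's bounded scalar minimizer stands in for the closed forms.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def kappa_p_forbidden(v, forbidden, p):
    # p-variance over the allowed states only: min_w || v_allowed - w 1 ||_p,
    # which is convex in the scalar w, so a bounded 1-D minimizer suffices.
    u = v[~forbidden]
    res = minimize_scalar(lambda w: np.linalg.norm(u - w, ord=p),
                          bounds=(u.min(), u.max()), method="bounded")
    return res.fun

# For p = inf this reduces to eq. (24):
# 0.5 * (v[~forbidden].max() - v[~forbidden].min())
```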
S-Rectangular Uncertainty. This notion can also be applied to s-rectangular uncertainty, but with a little caution. Here we define the forbidden states at state $s$ to be $F_s$ (state dependent) instead of state-action dependent as in the sa-rectangular case, and we define the p-variance as
$$\kappa_p(v,s) = \kappa_p(u), \quad \text{where } u(s) = v(s)\,\mathbb{1}(s \notin F_s). \tag{26}$$
So Theorem 2 can be restated as follows.

Theorem 7. (Restated) (Policy Evaluation) The s-rectangular $L_p$ robust Bellman operator is equivalent to a reward-regularized (non-robust) Bellman operator, that is,
$$(T^\pi_{\mathcal{U}^s_p}v)(s) = -\big(\alpha_s + \gamma\beta_s\kappa_q(v,s)\big)\,\|\pi(\cdot|s)\|_q + \sum_a \pi(a|s)\Big[R_0(s,a) + \gamma\sum_{s'}P_0(s'|s,a)v(s')\Big],$$
where $\kappa_p$ is defined above and $\|\pi(\cdot|s)\|_q$ is the q-norm of the vector $\pi(\cdot|s) \in \Delta_{\mathcal{A}}$.

For all the other results (including Theorem 4), we just need to replace the old p-variance function with the new one appropriately.

D Application to UCRL

In robust MDPs, we minimize over the uncertainty set to avoid risk. When we instead want to discover the underlying kernel by exploration, we seek an optimistic policy and maximize over the uncertainty set [4, 3, 2]. We refer the reader to step 3 of the UCRL algorithm [4], which seeks
$$\arg\max_\pi \max_{R,P \in \mathcal{U}} \langle \mu, v^\pi_{P,R}\rangle, \tag{27}$$
where $\mathcal{U} = \{(R,P) \mid |R(s,a) - R_0(s,a)| \leq \alpha_{s,a},\ |P(s'|s,a) - P_0(s'|s,a)| \leq \beta_{s,a,s'},\ P \in (\Delta_{\mathcal{S}})^{\mathcal{S}\times\mathcal{A}}\}$ for the current estimated kernel $P_0$ and reward function $R_0$. We also refer to Section 3.1.1 and step 4 of the UCRL2 algorithm of [3], which seeks
$$\arg\max_\pi \max_{R,P \in \mathcal{U}} \langle \mu, v^\pi_{P,R}\rangle, \tag{28}$$
where $\mathcal{U} = \{(R,P) \mid |R(s,a) - R_0(s,a)| \leq \alpha_{s,a},\ \|P(\cdot|s,a) - P_0(\cdot|s,a)\|_1 \leq \beta_{s,a},\ P \in (\Delta_{\mathcal{S}})^{\mathcal{S}\times\mathcal{A}}\}$. The uncertainty radii $\alpha, \beta$ depend on the number of observed transitions and rewards. The paper [4] does not explain any method to solve the above problem.
The UCRL2 algorithm [3] suggests solving it by linear programming, which can be very slow. We show that it can be solved by our methods. The above problem can be tackled as
$$\max_\pi \max_{R,P \in \mathcal{U}^{sa}_p} \langle \mu, v^\pi_{P,R}\rangle. \tag{29}$$
We can define optimistic Bellman operators as
$$\hat{T}^\pi_{\mathcal{U}} v := \max_{R,P\in\mathcal{U}} v^\pi_{P,R}, \qquad \hat{T}^*_{\mathcal{U}} v := \max_\pi \max_{R,P\in\mathcal{U}} v^\pi_{P,R}. \tag{30}$$
The well-definedness and contraction of these optimistic operators follow directly from their pessimistic (robust) counterparts. We can evaluate them as
$$(\hat{T}^\pi_{\mathcal{U}^{sa}_p} v)(s) = \sum_a \pi(a|s)\Big[R_0(s,a) + \alpha_{s,a} + \gamma\beta_{s,a}\kappa_q(v) + \gamma\sum_{s'}P_0(s'|s,a)v(s')\Big], \tag{31}$$
$$(\hat{T}^*_{\mathcal{U}^{sa}_p} v)(s) = \max_a \Big[R_0(s,a) + \alpha_{s,a} + \gamma\beta_{s,a}\kappa_q(v) + \gamma\sum_{s'}P_0(s'|s,a)v(s')\Big]. \tag{32}$$
The uncertainty radii $\alpha, \beta$ and the nominal values $P_0, R_0$ can be found by an analysis similar to [4, 3]. We can get a Q-learning scheme from the above results as
$$Q(s,a) \to R_0(s,a) + \alpha_{s,a} + \gamma\beta_{s,a}\kappa_q(v) + \gamma\sum_{s'}P_0(s'|s,a)\max_{a'} Q(s',a'), \tag{33}$$
where $v(s) = \max_a Q(s,a)$. From the law of large numbers, we know that the uncertainty radii $\alpha_{s,a}, \beta_{s,a}$ behave as $O(\frac{1}{\sqrt{n}})$ asymptotically in the number of samples $n$. This resembles the UCB-VI algorithm [5] very closely. We emphasize that similar optimistic operators can be defined and evaluated for s-rectangular uncertainty sets too.
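A minimal sketch of one sweep of the optimistic update (32)-(33), under assumed array shapes: `R0`, `alpha`, `beta` of shape (S, A), `P0` of shape (S, A, S), and `kappa_q` a callable returning the q-variance of a value vector.

```python
import numpy as np

def optimistic_q_sweep(Q, R0, P0, alpha, beta, gamma, kappa_q):
    # Optimism: the uncertainty radii enter as an exploration bonus
    # rather than a penalty, cf. eqs. (31)-(33).
    v = Q.max(axis=1)                       # v(s) = max_a Q(s, a)
    bonus = alpha + gamma * beta * kappa_q(v)
    return R0 + bonus + gamma * (P0 @ v)    # (S, A, S) @ (S,) -> (S, A)

# e.g. kappa_q = lambda v: 0.5 * (v.max() - v.min())   # half peak-to-peak
```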
E Q-Learning for sa-rectangular MDPs

In view of Theorem 1, we can define $Q^\pi_{\mathcal{U}^{sa}_p}$, the robust Q-values under policy $\pi$ for the (sa)-rectangular $L_p$ constrained uncertainty set $\mathcal{U}^{sa}_p$, as
$$Q^\pi_{\mathcal{U}^{sa}_p}(s,a) := -\alpha_{s,a} - \gamma\beta_{s,a}\kappa_q(v^\pi_{\mathcal{U}^{sa}_p}) + R_0(s,a) + \gamma\sum_{s'}P_0(s'|s,a)v^\pi_{\mathcal{U}^{sa}_p}(s'). \tag{34}$$
This implies the following relation between robust Q-values and the robust value function, the same as its non-robust counterpart:
$$v^\pi_{\mathcal{U}^{sa}_p}(s) = \sum_a \pi(a|s)\, Q^\pi_{\mathcal{U}^{sa}_p}(s,a). \tag{35}$$
Let $Q^*_{\mathcal{U}^{sa}_p}$ denote the optimal robust Q-values associated with the optimal robust value $v^*_{\mathcal{U}^{sa}_p}$, given as
$$Q^*_{\mathcal{U}^{sa}_p}(s,a) := -\alpha_{s,a} - \gamma\beta_{s,a}\kappa_q(v^*_{\mathcal{U}^{sa}_p}) + R_0(s,a) + \gamma\sum_{s'}P_0(s'|s,a)v^*_{\mathcal{U}^{sa}_p}(s'). \tag{36}$$
It is evident from Theorem 1 that the optimal robust value and the optimal robust Q-values satisfy the following relation, the same as their non-robust counterparts:
$$v^*_{\mathcal{U}^{sa}_p}(s) = \max_{a\in\mathcal{A}} Q^*_{\mathcal{U}^{sa}_p}(s,a). \tag{37}$$
Combining (37) and (36), we have the optimal robust Q-value recursion
$$Q^*_{\mathcal{U}^{sa}_p}(s,a) = -\alpha_{s,a} - \gamma\beta_{s,a}\kappa_q(v^*_{\mathcal{U}^{sa}_p}) + R_0(s,a) + \gamma\sum_{s'}P_0(s'|s,a)\max_{a'\in\mathcal{A}} Q^*_{\mathcal{U}^{sa}_p}(s',a'). \tag{38}$$
The above robust Q-value recursion enjoys properties similar to its non-robust counterpart.

Corollary 1. ((sa)-rectangular $L_p$ regularized Q-learning) Let
$$Q_{n+1}(s,a) = R_0(s,a) - \alpha_{s,a} - \gamma\beta_{s,a}\kappa_q(v_n) + \gamma\sum_{s'}P_0(s'|s,a)\max_{a'\in\mathcal{A}} Q_n(s',a'),$$
where $v_n(s) = \max_{a\in\mathcal{A}} Q_n(s,a)$; then $Q_n$ converges to $Q^*_{\mathcal{U}^{sa}_p}$ linearly.

Observe that the above Q-learning update is exactly the same as in a non-robust MDP, except for the reward penalty. Recall that $\kappa_1(v) = \frac{1}{2}(\max_s v(s) - \min_s v(s))$ is half the peak-to-peak difference and $\kappa_2(v)$ is the variance of $v$, both of which can be estimated easily. Hence, model-free algorithms for (sa)-rectangular $L_p$ robust MDPs for $p = 1, 2$ can be derived directly from the above results. This implies that (sa)-rectangular $L_1$ and $L_2$ robust MDPs are as easy as non-robust MDPs.
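Corollary 1 is straightforward to implement once the nominal model is given; a minimal sketch with assumed shapes (`R0`, `alpha`, `beta` of shape (S, A), `P0` of shape (S, A, S)) and a plug-in callable for $\kappa_q$:

```python
import numpy as np

def sa_regularized_q_iteration(R0, P0, alpha, beta, gamma, kappa_q, iters=100):
    # Corollary 1: Q_{n+1} = R0 - alpha - gamma*beta*kappa_q(v_n) + gamma*P0 v_n,
    # with v_n(s) = max_a Q_n(s, a); converges linearly at rate gamma.
    Q = np.zeros_like(R0)
    for _ in range(iters):
        v = Q.max(axis=1)
        Q = R0 - alpha - beta * gamma * kappa_q(v) + gamma * (P0 @ v)
    return Q
```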
F Model Based Algorithms

In this section, we assume that we know the nominal transition kernel and the nominal reward function. Algorithm 4 and Algorithm 5 are model-based algorithms for (sa)-rectangular and s-rectangular $L_p$ robust MDPs, respectively. The algorithms explain how to deal with the special cases ($p = 1, 2, \infty$) in an easy way.

Algorithm 4 Model-Based Q-Learning Algorithm for SA-Rectangular $L_p$ Robust MDP
1: Input: uncertainty radii $\alpha_{s,a}, \beta_{s,a}$ for the reward and the transition kernel in state $s$ and action $a$; transition kernel $P$ and reward vector $R$. Take initial Q-values $Q_0$ randomly and $v_0(s) = \max_a Q_0(s,a)$.
2: while not converged do
3:   Do a binary search in $[\min_s v_n(s), \max_s v_n(s)]$ for the q-mean $\omega_n$ such that
$$\sum_s \frac{v_n(s) - \omega_n}{|v_n(s) - \omega_n|}\,|v_n(s) - \omega_n|^{\frac{1}{p-1}} = 0. \tag{39}$$
4:   Compute the q-variance: $\kappa_n = \|v_n - \omega_n\mathbb{1}\|_q$.
5:   Note: for $p = 1, 2, \infty$, we can compute $\kappa_n$ exactly in closed form; see Table 1.
6:   for $s \in \mathcal{S}$ do
7:     for $a \in \mathcal{A}$ do
8:       Update the Q-value:
$$Q_{n+1}(s,a) = R_0(s,a) - \alpha_{s,a} - \gamma\beta_{s,a}\kappa_n + \gamma\sum_{s'}P_0(s'|s,a)\max_a Q_n(s',a).$$
9:     end for
10:    Update the value: $v_{n+1}(s) = \max_a Q_{n+1}(s,a)$.
11:  end for
     $n \to n+1$
12: end while
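Step 3 of Algorithm 4 above (and of Algorithm 5 below) is a one-dimensional root-finding problem; a sketch of the bisection follows. The exponent $\frac{1}{p-1}$ is the one in eqs. (39)-(40), so this assumes $p > 1$; $p = 1, 2, \infty$ have the closed forms noted in step 5.

```python
import numpy as np

def q_mean(v, p, tol=1e-5):
    # Bisection on [min v, max v] for the q-mean omega solving eq. (39):
    # sum_s sign(v(s) - omega) |v(s) - omega|^{1/(p-1)} = 0.
    def h(w):
        d = v - w
        return np.sum(np.sign(d) * np.abs(d) ** (1.0 / (p - 1.0)))
    lo, hi = v.min(), v.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:          # h decreases in omega, so the root is above mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Step 4 then takes kappa_n = ||v - omega 1||_q with q = p / (p - 1):
# kappa_n = np.linalg.norm(v - q_mean(v, p), ord=p / (p - 1))
```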
Algorithm 5 Model-Based Algorithm for S-Rectangular $L_p$ Robust MDP
1: Take initial Q-values $Q_0$ and value function $v_0$ randomly.
2: Input: uncertainty radii $\alpha_s, \beta_s$ for the reward and the transition kernel in state $s$.
3: while not converged do
4:   Do a binary search in $[\min_s v_n(s), \max_s v_n(s)]$ for the q-mean $\omega_n$ such that
$$\sum_s \frac{v_n(s) - \omega_n}{|v_n(s) - \omega_n|}\,|v_n(s) - \omega_n|^{\frac{1}{p-1}} = 0. \tag{40}$$
5:   Compute the q-variance: $\kappa_n = \|v_n - \omega_n\mathbb{1}\|_q$.
6:   Note: for $p = 1, 2, \infty$, we can compute $\kappa_n$ exactly in closed form; see Table 1.
7:   for $s \in \mathcal{S}$ do
8:     for $a \in \mathcal{A}$ do
9:       Update the Q-value:
$$Q_{n+1}(s,a) = R_0(s,a) + \gamma\sum_{s'}P_0(s'|s,a)v_n(s'). \tag{41}$$
10:    end for
11:    Sort actions in decreasing order of the Q-value, that is,
$$Q_{n+1}(s,a_i) \geq Q_{n+1}(s,a_{i+1}). \tag{42}$$
12:    Value evaluation: $v_{n+1}(s) = x$ such that
$$(\alpha_s + \gamma\beta_s\kappa_n)^p = \sum_{Q_{n+1}(s,a_i) \geq x} |Q_{n+1}(s,a_i) - x|^p. \tag{43}$$
13:    Note: we can compute $v_{n+1}(s)$ exactly in closed form for $p = \infty$, and for $p = 1, 2$ we can do the same using Algorithms 8 and 7, respectively; see Table 2.
14:  end for
     $n \to n+1$
15: end while

Algorithm 6 Model-based algorithm for s-rectangular $L_1$ robust MDPs
1: Take an initial value function $v_0$ randomly and start the counter $n = 0$.
2: while not converged do
3:   Calculate the q-variance: $\kappa_n = \frac{1}{2}\big(\max_s v_n(s) - \min_s v_n(s)\big)$.
4:   for $s \in \mathcal{S}$ do
5:     for $a \in \mathcal{A}$ do
6:       Update the Q-value:
$$Q_n(s,a) = R_0(s,a) + \gamma\sum_{s'}P_0(s'|s,a)v_n(s'). \tag{44}$$
7:     end for
8:     Sort actions in state $s$ in decreasing order of the Q-value, that is,
$$Q_n(s,a_1) \geq Q_n(s,a_2) \geq \cdots \geq Q_n(s,a_A). \tag{45}$$
9:     Value evaluation:
$$v_{n+1}(s) = \max_m \frac{\sum_{i=1}^{m} Q_n(s,a_i) - \alpha_s - \gamma\beta_s\kappa_n}{m}. \tag{46}$$
10:    Value evaluation can also be done using Algorithm 8.
11:  end for
     $n \to n+1$
12: end while
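Step 9 of Algorithm 6 has a compact vectorized form; a sketch (our naming), where `q_sa` holds $Q_n(s,\cdot)$ and `budget` is $\alpha_s + \gamma\beta_s\kappa_n$:

```python
import numpy as np

def l1_robust_value(q_sa, budget):
    # Eq. (46): best over m of (sum of the top-m Q-values minus the budget) / m.
    q = np.sort(q_sa)[::-1]
    m = np.arange(1, len(q) + 1)
    return np.max((np.cumsum(q) - budget) / m)
```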
G Experiments

Table 9 reports the relative cost (time) of robust value iteration w.r.t. a non-robust MDP, for a randomly generated kernel and reward function with $S$ states and $A$ actions.

Notations:
S: number of states; A: number of actions.
$\mathcal{U}^{sa}_p$ LP: sa-rectangular $L_p$ robust MDPs by linear programming.
$\mathcal{U}^s_p$ LP: s-rectangular $L_p$ robust MDPs by linear programming and other numerical methods.
$\mathcal{U}^{sa/s}_{p=1,2,\infty}$: sa/s-rectangular $L_1/L_2/L_\infty$ robust MDPs by the closed-form method (see Table 2, Theorem 3).
$\mathcal{U}^{sa/s}_{p=5,10}$: sa/s-rectangular $L_5/L_{10}$ robust MDPs by binary search (see Table 2, Theorem 3 of the paper).

Table 9: Relative running cost (time) for value iteration
| U | S=10, A=10 | S=30, A=10 | S=50, A=10 | S=100, A=20 | remark |
| non-robust | 1 | 1 | 1 | 1 | |
| $\mathcal{U}^{sa}_\infty$ by LP | 1374 | 2282 | 2848 | 6930 | lp |
| $\mathcal{U}^{sa}_1$ by LP | 1438 | 6616 | 6622 | 16714 | lp |
| $\mathcal{U}^s_1$ by LP | 72625 | 629890 | 4904004 | NA | lp/minimize |
| $\mathcal{U}^{sa}_1$ | 1.77 | 1.38 | 1.54 | 1.45 | closed form |
| $\mathcal{U}^{sa}_2$ | 1.51 | 1.43 | 1.91 | 1.59 | closed form |
| $\mathcal{U}^{sa}_\infty$ | 1.58 | 1.48 | 1.37 | 1.58 | closed form |
| $\mathcal{U}^s_1$ | 1.41 | 1.58 | 1.20 | 1.16 | closed form |
| $\mathcal{U}^s_2$ | 2.63 | 2.82 | 2.49 | 2.18 | closed form |
| $\mathcal{U}^s_\infty$ | 1.41 | 3.04 | 2.25 | 1.50 | closed form |
| $\mathcal{U}^{sa}_5$ | 5.4 | 4.91 | 4.14 | 4.06 | binary search |
| $\mathcal{U}^{sa}_{10}$ | 5.56 | 5.29 | 4.15 | 3.26 | binary search |
| $\mathcal{U}^s_5$ | 33.30 | 89.23 | 40.22 | 41.22 | binary search |
| $\mathcal{U}^s_{10}$ | 33.59 | 78.17 | 41.07 | 41.10 | binary search |

Here, "lp" stands for scipy.optimize.linprog.
Table 10: Relative rate of convergence for value iteration
| U | S=10, A=10 | S=100, A=20 | remark |
| non-robust | 1 | 1 | |
| $\mathcal{U}^{sa}_1$ | 0.999 | 0.999 | closed form |
| $\mathcal{U}^{sa}_2$ | 0.999 | 0.999 | closed form |
| $\mathcal{U}^{sa}_\infty$ | 1.000 | 0.998 | closed form |
| $\mathcal{U}^s_1$ | 0.999 | 0.999 | closed form |
| $\mathcal{U}^s_2$ | 0.999 | 0.999 | closed form |
| $\mathcal{U}^s_\infty$ | 1.000 | 0.998 | closed form |
| $\mathcal{U}^{sa}_5$ | 0.999 | 0.995 | binary search |
| $\mathcal{U}^{sa}_{10}$ | 1.000 | 0.999 | binary search |
| $\mathcal{U}^s_5$ | 1.000 | 0.999 | binary search |
| $\mathcal{U}^s_{10}$ | 1.000 | 0.995 | binary search |

Observations:
1. Our method for s/sa-rectangular $L_1/L_2/L_\infty$ robust MDPs takes almost the same time (1-3 times) as a non-robust MDP for one iteration of value iteration. This confirms our complexity analysis (see Table 4 of the paper).
2. Our binary search method for sa-rectangular $L_5/L_{10}$ robust MDPs takes around 4-6 times more time than the non-robust counterpart. This is due to the extra iterations required to find the p-variance function $\kappa_p(v)$ through binary search.
3. Our binary search method for s-rectangular $L_5/L_{10}$ robust MDPs takes around 30-100 times more time than the non-robust counterpart. This is due to the extra iterations required to find the p-variance function $\kappa_p(v)$ and to evaluate the Bellman operator, both through binary search.
4. A common feature of our methods is that the time complexity scales moderately, as guaranteed by our complexity analysis.
5. Linear programming methods for sa-rectangular $L_1/L_\infty$ robust MDPs take at least 1000 times longer than our methods even for small state-action spaces, and the gap grows very fast.
6. Numerical methods (linear programming for the minimization over the uncertainty set and scipy.optimize.minimize for the maximization over the policy) for s-rectangular $L_1$ robust MDPs take 4-5 orders of magnitude more time than our methods (and than non-robust MDPs) even for very small state-action spaces, and they scale up too fast. The reason is clear: they must solve two optimization problems, a minimization over the uncertainty set and a maximization over the policy, whereas the sa-rectangular case requires only the minimization over the uncertainty set. This confirms that the s-rectangular uncertainty set is much more challenging.
Rate of convergence. The rate of convergence was approximately the same for all methods, namely $0.9 = \gamma$, as predicted by theory. This is well illustrated by the relative rate of convergence w.r.t. the non-robust baseline in Table 10. In the above experiments, Bellman updates for sa/s-rectangular $L_1/L_2/L_\infty$ were done in closed form, and for $L_5/L_{10}$ by binary search, as suggested by Table 2 and Theorem 3.

Note: the above results are for a few runs, hence they contain some stochasticity, but the general trend is clear. In the final version, we will average over many runs to reduce the stochastic noise. Results for many different runs can be found at https://github.com/******.

Note that the above experiments were done without much parallelization; there is ample scope to fine-tune and improve the performance of robust MDPs. The experiments confirm the theoretical complexity given in Table 4 of the paper. The code and results can be found at https://github.com/******.

Experiment parameters: number of states $S$ (variable), number of actions $A$ (variable), transition kernel and reward function generated randomly, discount factor $0.9$, uncertainty radii $= 0.1$ (for all states and actions, for convenience), number of iterations $= 100$, tolerance for binary search $= 0.00001$.
Hardware: the experiments were run on an Intel(R) Core(TM) i5-4300U CPU @ 1.90GHz, 64-bit, with 7862 MiB of memory.

Software: the experiments were done in Python, using numpy, scipy.optimize.linprog (linear programming) for policy evaluation in s/sa-rectangular robust MDPs, and scipy.optimize.minimize with scipy.optimize.LinearConstraint for policy improvement in s-rectangular $L_1$ robust MDPs.

H Extension to Model Free Settings

The extension of Q-learning (Section E) for sa-rectangular MDPs to the model-free setting can easily be done similarly to [30], and a policy gradient method can be obtained as in [31]. The only thing we need is to be able to compute/estimate $\kappa_q$ online. It can be estimated using an ensemble (samples). Further, $\kappa_2$ can be estimated from the estimated mean and the estimated second moment, and $\kappa_\infty$ can be estimated by tracking the maximum and minimum values. For the s-rectangular case too, we can obtain model-free algorithms easily by estimating $\kappa_q$ online and keeping track of Q-values and the value function. The convergence analysis may be similar to [30], especially for the sa-rectangular case; for the other case it would be a two-time-scale analysis, which can be handled with the techniques in [8]. We leave this for future work. It would also be interesting to obtain policy gradient methods for this case, which we believe can be derived from the policy evaluation theorem.
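As a sketch of the online estimation discussed above (the class name and interface are ours), running moments give the $\kappa_2$-type quantity and running extremes give the half-peak-to-peak quantity:

```python
import numpy as np

class OnlineKappa:
    # Streaming estimates for the model-free setting: kappa_2 from the running
    # first/second moments, half peak-to-peak from the running max/min.
    def __init__(self):
        self.n, self.s1, self.s2 = 0, 0.0, 0.0
        self.vmax, self.vmin = -np.inf, np.inf

    def update(self, x):
        self.n += 1
        self.s1 += x
        self.s2 += x * x
        self.vmax = max(self.vmax, x)
        self.vmin = min(self.vmin, x)

    def kappa_2(self):
        mean = self.s1 / self.n
        # ||v - mean 1||_2 over the n observed values, cf. eq. (50)
        return np.sqrt(max(self.s2 - self.n * mean ** 2, 0.0))

    def half_peak_to_peak(self):
        return 0.5 * (self.vmax - self.vmin)
```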
I p-variance

Recall that $\kappa_p$ is defined as
$$\kappa_p(v) = \min_\omega \|v - \omega\mathbb{1}\|_p = \|v - \omega_p(v)\mathbb{1}\|_p.$$
Now, observe that
$$\frac{\partial \|v - \omega\mathbb{1}\|_p}{\partial \omega} = 0 \implies \sum_s \mathrm{sign}\big(v(s) - \omega\big)\,|v(s) - \omega|^{p-1} = 0 \implies \sum_s \mathrm{sign}\big(v(s) - \omega_p(v)\big)\,|v(s) - \omega_p(v)|^{p-1} = 0. \tag{47}$$
For $p = \infty$, assume $\max_s |v(s) - \omega_\infty(v)| \neq 0$ (otherwise $\omega_\infty = v(s) = v(s')$ for all $s, s'$) and, to avoid technical complications, that the maximum and minimum of $v$ are attained away from the other values. Taking the limit $p \to \infty$ of the stationarity condition, only the states attaining the largest deviation survive, so the positive and negative peak deviations must balance:
$$\max_s v(s) - \omega_\infty(v) = -\big(\min_s v(s) - \omega_\infty(v)\big) \implies \omega_\infty(v) = \frac{\max_s v(s) + \min_s v(s)}{2}. \tag{48}$$
Hence
$$\kappa_\infty(v) = \|v - \omega_\infty\mathbb{1}\|_\infty = \Big\|v - \frac{\max_s v(s) + \min_s v(s)}{2}\mathbb{1}\Big\|_\infty = \frac{\max_s v(s) - \min_s v(s)}{2}. \tag{49}$$
For $p = 2$, we have
$$\kappa_2(v) = \|v - \omega_2\mathbb{1}\|_2 = \Big\|v - \frac{\sum_s v(s)}{S}\mathbb{1}\Big\|_2 = \sqrt{\sum_s \Big(v(s) - \frac{\sum_s v(s)}{S}\Big)^2}. \tag{50}$$
For $p = 1$, we have
$$\sum_{s\in\mathcal{S}} \mathrm{sign}\big(v(s) - \omega_1(v)\big) = 0. \tag{51}$$
Note that there may be more than one value of $\omega_1(v)$ satisfying the above equation, and each solution does an equally good job (as we will see later). So we pick one (a median of $v$) according to our convenience, as
$$\omega_1(v) = \frac{v(s_{\lfloor (S+1)/2\rfloor}) + v(s_{\lceil (S+1)/2\rceil})}{2}, \quad \text{where } v(s_i) \geq v(s_{i+1})\ \forall i.$$

Table 11: p-mean, where $v(s_i) \geq v(s_{i+1})\ \forall i$
| x | $\omega_x(v)$ | remark |
| $p$ | $\sum_s \mathrm{sign}(v(s) - \omega_p(v))\,|v(s) - \omega_p(v)|^{p-1} = 0$ | solve by binary search |
| $1$ | $\frac{v(s_{\lfloor (S+1)/2\rfloor}) + v(s_{\lceil (S+1)/2\rceil})}{2}$ | median |
| $2$ | $\frac{\sum_s v(s)}{S}$ | mean |
| $\infty$ | $\frac{\max_s v(s) + \min_s v(s)}{2}$ | average of peaks |
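The closed forms in Table 11 translate directly into code; a sketch (our naming) for $p \in \{1, 2, \infty\}$:

```python
import numpy as np

def p_mean(v, p):
    # Table 11: median for p = 1, mean for p = 2, midpoint of peaks for p = inf.
    if p == 1:
        vs = np.sort(v)[::-1]                     # v(s_1) >= v(s_2) >= ...
        S = len(vs)
        lo, hi = (S + 1) // 2, -(-(S + 1) // 2)   # floor and ceil of (S+1)/2
        return 0.5 * (vs[lo - 1] + vs[hi - 1])
    if p == 2:
        return v.mean()
    if p == np.inf:
        return 0.5 * (v.max() + v.min())
    raise ValueError("no closed form for this p; use binary search")

def kappa(v, p):
    # kappa_p(v) = ||v - omega_p(v) 1||_p, cf. eqs. (49), (50), (52)
    return np.linalg.norm(v - p_mean(v, p), ord=p)
```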
Then
$$\kappa_1(v) = \|v - \omega_1\mathbb{1}\|_1 = \|v - \mathrm{med}(v)\mathbb{1}\|_1 = \sum_s |v(s) - \mathrm{med}(v)| = \sum_{i=1}^{\lfloor (S+1)/2\rfloor}\big(v(s_i) - \mathrm{med}(v)\big) + \sum_{i=\lceil (S+1)/2\rceil}^{S}\big(\mathrm{med}(v) - v(s_i)\big) = \sum_{i=1}^{\lfloor (S+1)/2\rfloor} v(s_i) - \sum_{i=\lceil (S+1)/2\rceil}^{S} v(s_i), \tag{52}$$
where $\mathrm{med}(v) := \frac{v(s_{\lfloor (S+1)/2\rfloor}) + v(s_{\lceil (S+1)/2\rceil})}{2}$ with $v(s_i) \geq v(s_{i+1})\ \forall i$ is a median of $v$. The results are summarized in Tables 1 and 11.

I.1 p-variance function and kernel noise

Lemma 1. The q-variance function $\kappa_q$ is the value of the following optimization problem (kernel noise):
$$\kappa_q(v) = -\frac{1}{\epsilon}\min_c \langle c, v\rangle, \qquad \|c\|_p \leq \epsilon, \quad \sum_s c(s) = 0.$$

Proof. Write the Lagrangian $L$ as
$$L := \sum_s c(s)v(s) + \lambda\sum_s c(s) + \mu\Big(\sum_s |c(s)|^p - \epsilon^p\Big),$$
where $\lambda \in \mathbb{R}$ is the multiplier for the constraint $\sum_s c(s) = 0$ and $\mu \geq 0$ is the multiplier for the inequality constraint $\|c\|_p \leq \epsilon$. Taking its derivative, we have
$$\frac{\partial L}{\partial c(s)} = v(s) + \lambda + \mu p\,|c(s)|^{p-1}\frac{c(s)}{|c(s)|}. \tag{53}$$
From the KKT (stationarity) condition, the solution $c^*$ has zero derivative, that is,
$$v(s) + \lambda + \mu p\,|c^*(s)|^{p-1}\frac{c^*(s)}{|c^*(s)|} = 0, \quad \forall s \in \mathcal{S}. \tag{54}$$
Multiplying (54) by $c^*(s)$ and summing over $s$ gives
$$\sum_s c^*(s)v(s) + \lambda\sum_s c^*(s) + \mu p\sum_s |c^*(s)|^{p-1}\frac{(c^*(s))^2}{|c^*(s)|} = 0 \implies \langle c^*, v\rangle + \mu p\sum_s |c^*(s)|^p = 0 \implies \langle c^*, v\rangle = -\mu p\,\epsilon^p, \tag{55}$$
using $\sum_s c^*(s) = 0$, $(c^*(s))^2 = |c^*(s)|^2$, and $\sum_s |c^*(s)|^p = \epsilon^p$. It is easy to see that $\mu \geq 0$, as the minimum value of the objective cannot be positive (at $c = 0$ the objective value is zero).
Again using the Lagrangian derivative (54), we express the objective value $(-\mu p\,\epsilon^p)$ in terms of $\lambda$:
$$|c^*(s)|^{p-2}c^*(s) = -\frac{v(s) + \lambda}{\mu p} \implies \sum_s \big||c^*(s)|^{p-2}c^*(s)\big|^{\frac{p}{p-1}} = \sum_s \Big|\frac{v(s) + \lambda}{\mu p}\Big|^{\frac{p}{p-1}} \implies \|c^*\|_p^p = \sum_s \Big|\frac{v(s)+\lambda}{\mu p}\Big|^q = \frac{\|v + \lambda\mathbb{1}\|_q^q}{|\mu p|^q}.$$
Re-arranging and using $\sum_s |c^*(s)|^p = \epsilon^p$ gives $|\mu p|^q\epsilon^p = \|v + \lambda\mathbb{1}\|_q^q$; taking the $\frac{1}{q}$th power and multiplying by $\epsilon$ yields
$$\mu p\,\epsilon^p = \epsilon\,\|v + \lambda\mathbb{1}\|_q. \tag{56}$$
Again using (54) to solve for $\lambda$, we look at the absolute value and the sign separately (note $\mu, p \geq 0$):
$$|c^*(s)| = \Big|\frac{v(s)+\lambda}{\mu p}\Big|^{\frac{1}{p-1}}, \qquad \frac{c^*(s)}{|c^*(s)|} = -\frac{v(s)+\lambda}{|v(s)+\lambda|}.$$
Putting these back together and summing over $s$, the constraint $\sum_s c^*(s) = 0$ gives
$$\sum_s \mathrm{sign}\big(v(s)+\lambda\big)\,|v(s)+\lambda|^{\frac{1}{p-1}} = 0. \tag{57}$$
Combining everything (and substituting $\lambda \to -\lambda$ in (56) and (57)), we have
$$-\frac{1}{\epsilon}\min\Big\{\langle c, v\rangle \;\Big|\; \|c\|_p \leq \epsilon,\ \sum_s c(s) = 0\Big\} = \|v - \lambda\mathbb{1}\|_q, \quad \text{where } \sum_s \mathrm{sign}(v(s) - \lambda)\,|v(s) - \lambda|^{\frac{1}{p-1}} = 0. \tag{58}$$
Now, observe that
$$\frac{\partial \|v - \lambda\mathbb{1}\|_q}{\partial \lambda} = 0 \implies \sum_s \mathrm{sign}(v(s) - \lambda)\,|v(s) - \lambda|^{\frac{1}{p-1}} = 0 \implies \kappa_q(v) = \|v - \lambda\mathbb{1}\|_q \ \text{ with } \sum_s \mathrm{sign}(v(s) - \lambda)\,|v(s) - \lambda|^{\frac{1}{p-1}} = 0. \tag{59}$$
The last equality follows from the convexity of the norm $\|\cdot\|_q$, for which every local minimum is a global minimum.

As a sanity check, we re-derive the $p = 1$ case from scratch. For $p = 1$, we have
$$-\frac{1}{\epsilon}\min\Big\{\langle c, v\rangle \;\Big|\; \|c\|_1 \leq \epsilon,\ \sum_s c(s) = 0\Big\} = -\frac{1}{2}\Big(\min_s v(s) - \max_s v(s)\Big) = \kappa_1(v). \tag{60}$$
The above result is easy to see just by inspection: the minimizing noise puts mass $\epsilon/2$ on the state with the smallest value and $-\epsilon/2$ on the state with the largest value.
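This inspection can be checked numerically with the same tooling as the experiments; a sketch using scipy's linprog on hypothetical values, splitting $c = c^+ - c^-$ so that the $L_1$ ball becomes a linear constraint:

```python
import numpy as np
from scipy.optimize import linprog

v = np.array([1.0, 3.0, -2.0, 0.5])      # hypothetical value vector
eps = 0.1                                 # noise radius
S = len(v)

cost = np.concatenate([v, -v])            # <c, v> with x = [c_plus, c_minus]
A_ub = np.ones((1, 2 * S))                # sum(c_plus + c_minus) <= eps, i.e. ||c||_1 <= eps
b_ub = [eps]
A_eq = np.concatenate([np.ones(S), -np.ones(S)]).reshape(1, -1)   # sum_s c(s) = 0
b_eq = [0.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(-res.fun / eps)                     # 2.5
print((v.max() - v.min()) / 2)            # 2.5, matching eq. (60)
```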
The bracket always contains x∗ and its width halves at each step, so it is easy to observe that |x_n − x∗| ≤ B(1/2)^n. This proves the above claim. This observation will be referred to many times. Now we move to the main claims of the section.

Proposition 5. The function h_p(λ) := Σ_s sign(v(s) − λ)|v(s) − λ|^p is monotonically strictly decreasing and has a root in the range [min_s v(s), max_s v(s)].

Proof. Writing h_p(λ) = Σ_s ((v(s) − λ)/|v(s) − λ|)|v(s) − λ|^p, we get

dh_p/dλ (λ) = −p Σ_s |v(s) − λ|^{p−1} ≤ 0, ∀p ≥ 0.   (61)

Now observe that h_p(max_s v(s)) ≤ 0 and h_p(min_s v(s)) ≥ 0; hence h_p must have a root in the range [min_s v(s), max_s v(s)], as the function is continuous.

The above proposition ensures that a root ω_p(v) can easily be found by binary search on [min_s v(s), max_s v(s)]. Precisely, an ϵ-approximation of ω_p(v) can be found in O(log((max_s v(s) − min_s v(s))/ϵ)) iterations of binary search, and each evaluation of the function h_p requires O(S) operations. Since we have a finite state-action space and bounded rewards, WLOG we may assume |max_s v(s)| and |min_s v(s)| are bounded by a constant. Hence the complexity of approximating ω_p is O(S log(1/ϵ)).

Let ω̂_p(v) be an ϵ-approximation of ω_p(v), that is, |ω_p(v) − ω̂_p(v)| ≤ ϵ. And let κ̂_p(v) be the approximation of κ_p(v) obtained from the approximate mean, that is, κ̂_p(v) := ∥v − ω̂_p(v)1∥_p.
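As a concrete illustration, the following is a minimal numerical sketch of this binary search (assuming NumPy; the names p_mean and p_variance are ours, not from the paper):

import numpy as np

def p_mean(v, exponent, tol=1e-8):
    # Bisection for the root of h(lam) = sum_s sign(v(s)-lam)|v(s)-lam|^exponent,
    # which is monotonically decreasing in lam (Proposition 5).
    lo, hi = v.min(), v.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        h = np.sum(np.sign(v - mid) * np.abs(v - mid) ** exponent)
        if h > 0:
            lo = mid  # root lies to the right of mid
        else:
            hi = mid  # root lies to the left of mid (or at it)
    return 0.5 * (lo + hi)

def p_variance(v, q, tol=1e-8):
    # kappa_q(v) = ||v - omega 1||_q, with omega the root of h for exponent
    # q - 1 = 1/(p - 1), cf. (59); valid for finite q >= 1.
    omega = p_mean(v, q - 1.0, tol)
    return np.linalg.norm(v - omega, ord=q)

Each bisection step costs one O(S) evaluation of h_p, matching the complexity count above.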
Now we show that an ϵ error in the calculation of the p-mean ω_p induces only an O(ϵ) error in the estimation of the p-variance κ_p. Precisely,

|κ_p(v) − κ̂_p(v)| = | ∥v − ω_p(v)1∥_p − ∥v − ω̂_p(v)1∥_p |
≤ ∥ω_p(v)1 − ω̂_p(v)1∥_p   (reverse triangle inequality)
= ∥1∥_p |ω_p(v) − ω̂_p(v)|
≤ ∥1∥_p ϵ = S^{1/p} ϵ ≤ Sϵ.   (62)

For general p, an ϵ-approximation of κ_p(v) can therefore be calculated in O(S log(S/ϵ)) iterations. Why? We estimate the mean ω_p to a tolerance of ϵ/S (at cost O(S log(S/ϵ))) and then approximate κ_p using this approximate mean (at cost O(S)).

J Lp Water Filling/Pouring lemma

In this section, we discuss the following optimization problem, referred to as the Lp water-pouring problem:

max_c −α∥c∥_q + ⟨c, b⟩ such that Σ_{i=1}^A c_i = 1, c_i ≥ 0 ∀i,

where α ≥ 0. We assume WLOG that b is sorted component-wise, that is, b_1 ≥ b_2 ≥ · · · ≥ b_A. The above problem for p = 2 is studied in [1]. Our approach to the problem is as follows: a) write the Lagrangian; b) since the problem is convex, any solution of the KKT conditions is a global maximum; c) obtain the claimed properties from the KKT conditions.

Lemma 2. Let b ∈ R^A be such that its components are in decreasing order (i.e., b_i ≥ b_{i+1}), let α ≥ 0 be any non-negative constant, and let

ζ_p := max_c −α∥c∥_q + ⟨c, b⟩ such that Σ_{i=1}^A c_i = 1, c_i ≥ 0 ∀i,   (63)

and let c∗ be a solution to the above problem. Then:

1. Higher components of b get higher weight in c∗. In other words, c∗ is also sorted component-wise in descending order, that is, c∗_1 ≥ c∗_2 ≥ · · · ≥ c∗_A.
2. The value ζ_p satisfies the equation
α^p = Σ_{b_i ≥ ζ_p} (b_i − ζ_p)^p.

3. The solution c∗ of (63) is related to ζ_p as
c∗_i = (b_i − ζ_p)^{p−1} 1(b_i ≥ ζ_p) / Σ_j (b_j − ζ_p)^{p−1} 1(b_j ≥ ζ_p).

4. The top χ_p := max{i | b_i ≥ ζ_p} actions are active and the rest are passive. The set of active actions can be calculated as
{k | α^p ≥ Σ_{i=1}^k (b_i − b_k)^p} = {1, 2, · · · , χ_p}.

5. The above can be re-written as
c∗_i ∝ (b_i − ζ_p)^{p−1} if i ≤ χ_p and c∗_i = 0 otherwise, with α^p = Σ_{i=1}^{χ_p} (b_i − ζ_p)^p.

6. The function Σ_{b_i ≥ x} (b_i − x)^p is monotonically decreasing in x; hence the root ζ_p can be calculated efficiently by binary search on [b_1 − α, b_1].

7. The solution is sandwiched as follows: b_{χ_p+1} ≤ ζ_p ≤ b_{χ_p}.

8. k ≤ χ_p if and only if there exists a solution of
Σ_{i=1}^k (b_i − x)^p = α^p with x ≤ b_k.

9. If action k is active and there is greedy-increment hope, then action k + 1 is also active. That is,
k ≤ χ_p and λ_k ≤ b_{k+1} =⇒ k + 1 ≤ χ_p,
where Σ_{i=1}^k (b_i − λ_k)^p = α^p and λ_k ≤ b_k.

10. If action k is active and there is no greedy hope, then action k + 1 is not active. That is,
k ≤ χ_p and λ_k > b_{k+1} =⇒ k + 1 > χ_p,
where Σ_{i=1}^k (b_i − λ_k)^p = α^p and λ_k ≤ b_k. This implies k = χ_p.
Proof. 1. Let f(c) := −α∥c∥_q + ⟨b, c⟩. Let c be any feasible vector, and let c′ be the rearrangement of c in descending order; precisely, c′_k := c_{i_k}, where c_{i_1} ≥ c_{i_2} ≥ · · · ≥ c_{i_A}. Since the q-norm is permutation-invariant and ⟨c′, b⟩ ≥ ⟨c, b⟩ (as b is sorted in descending order), it is easy to see that f(c′) ≥ f(c), and the claim follows.

2. Writing the Lagrangian of the optimization problem and its derivative, we have

L = −α∥c∥_q + ⟨c, b⟩ + λ(Σ_i c_i − 1) + Σ_i θ_i c_i,
∂L/∂c_i = −α∥c∥_q^{1−q} |c_i|^{q−2} c_i + b_i + λ + θ_i,   (64)

where λ ∈ R is the multiplier for the equality constraint Σ_i c_i = 1 and θ_1, · · · , θ_A ≥ 0 are the multipliers for the inequality constraints c_i ≥ 0, ∀i ∈ [A].
Using the KKT (stationarity) condition, we have

−α∥c∗∥_q^{1−q} |c∗_i|^{q−2} c∗_i + b_i + λ + θ_i = 0.   (65)

Let B := {i | c∗_i > 0}; then, since θ_i c∗_i = 0 for all i (complementary slackness),

Σ_{i∈B} c∗_i [−α∥c∗∥_q^{1−q} |c∗_i|^{q−2} c∗_i + b_i + λ] = 0
=⇒ −α∥c∗∥_q^{1−q} ∥c∗∥_q^q + ⟨c∗, b⟩ + λ = 0   (using Σ_i c∗_i = 1 and (c∗_i)^2 = |c∗_i|^2)
=⇒ −α∥c∗∥_q + ⟨c∗, b⟩ + λ = 0
=⇒ −α∥c∗∥_q + ⟨c∗, b⟩ = −λ.   (re-arranging)   (66)

Now, again using (65), we have

−α∥c∗∥_q^{1−q} |c∗_i|^{q−2} c∗_i + b_i + λ + θ_i = 0
=⇒ α∥c∗∥_q^{1−q} |c∗_i|^{q−2} c∗_i = b_i + λ + θ_i, ∀i.   (re-arranging)   (67)

Now, if i ∈ B then θ_i = 0 by complementary slackness, so we have α∥c∗∥_q^{1−q} |c∗_i|^{q−2} c∗_i = b_i + λ > 0 for all i ∈ B, by the definition of B. Conversely, if b_i + λ > 0 for some i, then b_i + λ + θ_i > 0 as θ_i ≥ 0, which implies

α∥c∗∥_q^{1−q} |c∗_i|^{q−2} c∗_i = b_i + λ + θ_i > 0 =⇒ c∗_i > 0 =⇒ i ∈ B.

So we have i ∈ B ⟺ b_i + λ > 0. To summarize, we have

α∥c∗∥_q^{1−q} |c∗_i|^{q−2} c∗_i = (b_i + λ)1(b_i ≥ −λ), ∀i,   (68)
=⇒ Σ_i α^{q/(q−1)} ∥c∗∥_q^{−q} (c∗_i)^q = Σ_i (b_i + λ)^{q/(q−1)} 1(b_i ≥ −λ)   (taking the q/(q−1)-th power and summing)
=⇒ α^p = Σ_{i=1}^A (b_i + λ)^p 1(b_i ≥ −λ).   (using q/(q−1) = p and Σ_i (c∗_i)^q = ∥c∗∥_q^q)   (69)

So we have ζ_p = −λ (by (66), the optimal value equals −λ), where α^p = Σ_{b_i ≥ −λ} (b_i + λ)^p,
=⇒ α^p = Σ_{b_i ≥ ζ_p} (b_i − ζ_p)^p.   (70)

3. Furthermore, using (68), we have

α∥c∗∥_q^{1−q} |c∗_i|^{q−2} c∗_i = (b_i + λ)1(b_i ≥ −λ) = (b_i − ζ_p)1(b_i ≥ ζ_p), ∀i,
=⇒ c∗_i ∝ (b_i − ζ_p)^{1/(q−1)} 1(b_i ≥ ζ_p) = (b_i − ζ_p)^{p−1} 1(b_i ≥ ζ_p) / Σ_j (b_j − ζ_p)^{p−1} 1(b_j ≥ ζ_p).   (using 1/(q−1) = p − 1 and Σ_i c∗_i = 1)   (71)

4. Now we move on to calculating the number of active actions χ_p. Observe that the function

f(λ) := Σ_{i=1}^A (b_i − λ)^p 1(b_i ≥ λ) − α^p   (72)

is monotonically decreasing in λ, and ζ_p is a root of f. This implies

f(x) ≤ 0 ⟺ x ≥ ζ_p
=⇒ f(b_i) ≤ 0 ⟺ b_i ≥ ζ_p
=⇒ {i | b_i ≥ ζ_p} = {i | f(b_i) ≤ 0}
=⇒ χ_p = max{i | b_i ≥ ζ_p} = max{i | f(b_i) ≤ 0}.   (73)

The claim follows by substituting back the definition of f.

5. We have α^p = Σ_{i=1}^A (b_i − ζ_p)^p 1(b_i ≥ ζ_p) and χ_p = max{i | b_i ≥ ζ_p}. Combining both, we have α^p = Σ_{i=1}^{χ_p} (b_i − ζ_p)^p. The other part follows directly.

6. Continuity and monotonicity of the function Σ_{b_i ≥ x} (b_i − x)^p are immediate. Now observe that Σ_{b_i ≥ b_1} (b_i − b_1)^p = 0 and Σ_{b_i ≥ b_1−α} (b_i − (b_1 − α))^p ≥ α^p, so the function attains the value α^p in the range [b_1 − α, b_1].

7. Recall that ζ_p is the solution of α^p = Σ_{b_i ≥ x} (b_i − x)^p. From the definition of χ_p, we have

α^p < Σ_{i=1}^{χ_p+1} (b_i − b_{χ_p+1})^p = Σ_{b_i ≥ b_{χ_p+1}} (b_i − b_{χ_p+1})^p, and
α^p ≥ Σ_{i=1}^{χ_p} (b_i − b_{χ_p})^p = Σ_{b_i ≥ b_{χ_p}} (b_i − b_{χ_p})^p.
So from continuity, we infer that the root ζ_p must lie in [b_{χ_p+1}, b_{χ_p}].

8. We first prove the forward direction, assuming

k ≤ χ_p =⇒ Σ_{i=1}^k (b_i − b_k)^p ≤ α^p   (from the definition of χ_p).   (74)

Observe that the function f(x) := Σ_{i=1}^k (b_i − x)^p is monotonically decreasing on the range (−∞, b_k]. Further, f(b_k) ≤ α^p and lim_{x→−∞} f(x) = ∞, so by continuity there must exist a value y ∈ (−∞, b_k] such that f(y) = α^p. That is,

Σ_{i=1}^k (b_i − y)^p = α^p and y ≤ b_k,

which explicitly exhibits the existence of a solution. Now we move to the backward direction, and assume there exists x such that

Σ_{i=1}^k (b_i − x)^p = α^p and x ≤ b_k
=⇒ Σ_{i=1}^k (b_i − b_k)^p ≤ α^p   (as x ≤ b_k ≤ b_{k−1} ≤ · · · ≤ b_1)
=⇒ k ≤ χ_p.

9. We have k ≤ χ_p and λ_k such that

α^p = Σ_{i=1}^k (b_i − λ_k)^p with λ_k ≤ b_k   (from the item above)
≥ Σ_{i=1}^k (b_i − b_{k+1})^p   (as λ_k ≤ b_{k+1} ≤ b_k)
= Σ_{i=1}^{k+1} (b_i − b_{k+1})^p.   (adding a zero term)   (75)

From the definition of χ_p, we get k + 1 ≤ χ_p.

10. We are given Σ_{i=1}^k (b_i − λ_k)^p = α^p

=⇒ Σ_{i=1}^k (b_i − b_{k+1})^p > α^p   (as λ_k > b_{k+1})
=⇒ Σ_{i=1}^{k+1} (b_i − b_{k+1})^p > α^p   (adding a zero term)
=⇒ k + 1 > χ_p.
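Points 2, 3, and 6 of Lemma 2 translate directly into a numerical routine: bisect for ζ_p on [b_1 − α, b_1], then read off c∗. A minimal sketch for 1 < p < ∞ (assuming NumPy; water_filling is our illustrative name, not from the paper):

import numpy as np

def water_filling(b, alpha, p, tol=1e-10):
    # Solves max_c -alpha*||c||_q + <c, b> over the simplex, with q = p/(p-1).
    # Returns zeta_p (optimal value, Lemma 2 point 2) and c* (point 3).
    g = lambda x: np.sum(np.maximum(b - x, 0.0) ** p) - alpha ** p
    lo, hi = b.max() - alpha, b.max()  # bracket for the root, point 6
    while hi - lo > tol:               # g is decreasing in x: bisect
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    zeta = 0.5 * (lo + hi)
    w = np.maximum(b - zeta, 0.0) ** (p - 1)  # c_i proportional to (b_i - zeta)_+^{p-1}
    return zeta, w / w.sum()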
J.0.1 Special case: L1

For p = 1, by definition, we have

ζ_1 = max_c −α∥c∥_∞ + ⟨c, b⟩ such that Σ_{a∈A} c_a = 1, c ⪰ 0,   (76)

and χ_1 is the optimal number of actions, that is,

α = Σ_{i=1}^{χ_1} (b_i − ζ_1) =⇒ ζ_1 = (Σ_{i=1}^{χ_1} b_i − α)/χ_1.

Let λ_k be such that

α = Σ_{i=1}^k (b_i − λ_k) =⇒ λ_k = (Σ_{i=1}^k b_i − α)/k.

Proposition 6. ζ_1 = max_k λ_k.

Proof. From Lemma 2, we have λ_1 ≤ λ_2 ≤ · · · ≤ λ_{χ_1}. Now, we have

λ_k − λ_{k+m} = (Σ_{i=1}^k b_i − α)/k − (Σ_{i=1}^{k+m} b_i − α)/(k + m)
= (Σ_{i=1}^k b_i − α)/k − (Σ_{i=1}^k b_i − α)/(k + m) − (Σ_{i=1}^m b_{k+i})/(k + m)
= m(Σ_{i=1}^k b_i − α)/(k(k + m)) − (Σ_{i=1}^m b_{k+i})/(k + m)
= (m/(k + m)) ((Σ_{i=1}^k b_i − α)/k − (Σ_{i=1}^m b_{k+i})/m)
= (m/(k + m)) (λ_k − (Σ_{i=1}^m b_{k+i})/m).   (77)

From Lemma 2, we also know the stopping criterion for χ_1, that is,

λ_{χ_1} > b_{χ_1+1} =⇒ λ_{χ_1} > b_{χ_1+i}, ∀i ≥ 1   (as the b_i are in descending order)
=⇒ λ_{χ_1} > (Σ_{i=1}^m b_{χ_1+i})/m, ∀m ≥ 1.

Combining this with (77), for all m ≥ 0 we get

λ_{χ_1} − λ_{χ_1+m} = (m/(χ_1 + m)) (λ_{χ_1} − (Σ_{i=1}^m b_{χ_1+i})/m) ≥ 0
=⇒ λ_{χ_1} ≥ λ_{χ_1+m}.   (78)

Hence we get the desired result: ζ_1 = λ_{χ_1} = max_k λ_k.
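Proposition 6 can be checked numerically against the general routine above (a p slightly above 1 approximates the L1 objective; the snippet assumes the water_filling sketch from the beginning of this appendix):

import numpy as np

b = np.sort(np.random.randn(6))[::-1]  # b in descending order
alpha = 0.3
k = np.arange(1, len(b) + 1)
lambdas = (np.cumsum(b) - alpha) / k   # lambda_k = (sum_{i<=k} b_i - alpha) / k
print(lambdas.max())                   # zeta_1 = max_k lambda_k (Proposition 6)
print(water_filling(b, alpha, p=1.0 + 1e-6)[0])  # should roughly agree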
J.0.2 Special case: max norm

For p = ∞, by definition, we have

ζ_∞(b) = max_c −α∥c∥_1 + ⟨c, b⟩ such that Σ_{a∈A} c_a = 1, c ⪰ 0
= max_c −α + ⟨c, b⟩ such that Σ_{a∈A} c_a = 1, c ⪰ 0   (since ∥c∥_1 = 1 on the simplex)
= −α + max_i b_i.   (79)

J.0.3 Special case: L2

The problem is discussed in great detail in [1]; here we outline the proof. For p = 2, we have

ζ_2 = max_c −α∥c∥_2 + ⟨c, b⟩ such that Σ_{a∈A} c_a = 1, c ⪰ 0.   (80)

Let λ_k be the solution of the equation

α^2 = Σ_{i=1}^k (b_i − λ)^2 = kλ^2 − 2λ Σ_{i=1}^k b_i + Σ_{i=1}^k b_i^2, λ ≤ b_k
=⇒ λ_k = (Σ_{i=1}^k b_i ± √((Σ_{i=1}^k b_i)^2 − k(Σ_{i=1}^k b_i^2 − α^2)))/k, with λ_k ≤ b_k
= (Σ_{i=1}^k b_i − √((Σ_{i=1}^k b_i)^2 − k(Σ_{i=1}^k b_i^2 − α^2)))/k
= (Σ_{i=1}^k b_i)/k − √((α^2 − Σ_{i=1}^k (b_i − (Σ_{j=1}^k b_j)/k)^2)/k).   (81)

From Lemma 2, we know λ_1 ≤ λ_2 ≤ · · · ≤ λ_{χ_2} = ζ_2, where χ_2 can be calculated in two ways:
a) χ_2 = max_m {m | Σ_{i=1}^m (b_i − b_m)^2 ≤ α^2};
b) χ_2 = min_m {m | λ_m > b_{m+1}}.
We proceed greedily until the stopping condition of Lemma 2 is met. Concretely, this is illustrated in Algorithm 7.

J.1 L1 Water Pouring lemma

In this section, we re-derive the above water-pouring lemma for p = 1 from scratch, just as a sanity check. In the above proof there is a possibility of some breakdown, since we took the limit q → ∞; we will see that all the above results hold for p = 1 too. Let b ∈ R^A be such that its components are in decreasing order, i.e., b_i ≥ b_{i+1}, and

ζ_1 := max_c −α∥c∥_∞ + ⟨c, b⟩ such that Σ_{i=1}^A c_i = 1, c_i ≥ 0 ∀i.   (82)

Let us fix any feasible vector c ∈ R^A, let k_1 := ⌊1/max_i c_i⌋, and define

c^1_i := max_i c_i if i ≤ k_1; 1 − k_1 max_i c_i if i = k_1 + 1; 0 otherwise.

Then we have

−α∥c∥_∞ + ⟨c, b⟩ = −α max_i c_i + Σ_{i=1}^A c_i b_i
≤ −α max_i c_i + Σ_{i=1}^A c^1_i b_i   (recall b_i is in decreasing order)
= −α∥c^1∥_∞ + ⟨c^1, b⟩.   (83)

Now let us define c^2 ∈ R^A. Let

k_2 := k_1 + 1 if (Σ_{i=1}^{k_1} b_i − α)/k_1 ≤ b_{k_1+1}, and k_2 := k_1 otherwise,

and let c^2_i := 1(i ≤ k_2)/k_2.
Then we have

−α∥c^1∥_∞ + ⟨c^1, b⟩ = −α max_i c_i + Σ_{i=1}^A c^1_i b_i
= −α max_i c_i + Σ_{i=1}^{k_1} (max_i c_i) b_i + (1 − k_1 max_i c_i) b_{k_1+1}   (definition of c^1)
= ((−α + Σ_{i=1}^{k_1} b_i)/k_1) · k_1 max_i c_i + b_{k_1+1} (1 − k_1 max_i c_i)   (re-arranging)
≤ (−α + Σ_{i=1}^{k_2} b_i)/k_2
= −α∥c^2∥_∞ + ⟨c^2, b⟩.   (84)

The last inequality comes from the definition of k_2 and c^2. So we conclude that an optimal solution is uniform over some top set of actions, that is,

ζ_1 = max_{c∈C} −α∥c∥_∞ + ⟨c, b⟩ = max_k ((−α + Σ_{i=1}^k b_i)/k),   (85)

where C := {c^k ∈ R^A | c^k_i = 1(i ≤ k)/k} is the set of uniform distributions over the top actions. All the remaining properties follow exactly as in the Lp water-pouring lemma.

K Robust Value Iteration (Main)

In this section, we discuss the main results of the paper, except for the time complexity results. It contains the proofs of the results presented in the main body, along with some further corollaries/special cases.

K.1 sa-rectangular robust policy evaluation and improvement

Theorem 8. The (sa)-rectangular Lp robust Bellman operator is equivalent to a reward-regularized (non-robust) Bellman operator, that is,

(T^π_{U^{sa}_p} v)(s) = Σ_a π(a|s)[−α_{s,a} − γβ_{s,a} κ_q(v) + R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)], and
(T^∗_{U^{sa}_p} v)(s) = max_{a∈A} [−α_{s,a} − γβ_{s,a} κ_q(v) + R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)],

where κ_p is defined in (7).

Proof. From the definition of the robust Bellman operator and U^{sa}_p = (R_0 + R) × (P_0 + P), we have
(T^π_{U^{sa}_p} v)(s) = min_{R,P ∈ U^{sa}_p} Σ_a π(a|s)[R(s, a) + γ Σ_{s′} P(s′|s, a) v(s′)]
= Σ_a π(a|s)[R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)] + min_{p∈P, r∈R} Σ_a π(a|s)[r(s, a) + γ Σ_{s′} p(s′|s, a) v(s′)]   (from (sa)-rectangularity)
= Σ_a π(a|s)[R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)] + Σ_a π(a|s) min_{p_{s,a}∈P_{s,a}, r_{s,a}∈R_{s,a}} [r_{s,a} + γ Σ_{s′} p_{s,a}(s′) v(s′)],   (86)

where the inner minimum is denoted Ω_{s,a}(v). Now we focus on the regularizer function Ω, as follows:

Ω_{s,a}(v) = min_{p_{s,a}∈P_{s,a}, r_{s,a}∈R_{s,a}} [r_{s,a} + γ Σ_{s′} p_{s,a}(s′) v(s′)]
= min_{r_{s,a}∈R_{s,a}} r_{s,a} + γ min_{p_{s,a}∈P_{s,a}} Σ_{s′} p_{s,a}(s′) v(s′)
= −α_{s,a} + γ min_{∥p_{s,a}∥_p ≤ β_{s,a}, Σ_{s′} p_{s,a}(s′)=0} ⟨p_{s,a}, v⟩
= −α_{s,a} − γβ_{s,a} κ_q(v).   (from Lemma 1)   (87)
Putting this back, we have

(T^π_{U^{sa}_p} v)(s) = Σ_a π(a|s)[−α_{s,a} − γβ_{s,a} κ_q(v) + R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)].

Again, reusing the above result in the optimal robust operator, we have

(T^∗_{U^{sa}_p} v)(s) = max_{π_s∈Δ_A} min_{R,P ∈ U^{sa}_p} Σ_a π_s(a)[R(s, a) + γ Σ_{s′} P(s′|s, a) v(s′)]
= max_{π_s∈Δ_A} Σ_a π_s(a)[−α_{s,a} − γβ_{s,a} κ_q(v) + R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)]
= max_{a∈A} [−α_{s,a} − γβ_{s,a} κ_q(v) + R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)],   (88)

where the last equality holds because a linear function over the simplex is maximized at a vertex. The claim is proved.
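Theorem 8 says the sa-rectangular robust backup is just a regularized non-robust backup, so it vectorizes immediately. A minimal sketch (assuming NumPy arrays R0[s,a], P0[s,a,s′], alpha[s,a], beta[s,a], and the p_variance sketch from Appendix I):

import numpy as np

def sa_robust_optimal_bellman(v, R0, P0, alpha, beta, gamma, q):
    # One application of T* for U^{sa}_p, cf. Theorem 8.
    kappa = p_variance(v, q)   # scalar penalty kappa_q(v)
    Q = R0 + gamma * P0 @ v    # nominal Q-values, shape (S, A)
    return np.max(Q - alpha - gamma * beta * kappa, axis=1)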
K.2 S-rectangular robust policy evaluation

Theorem 9. The s-rectangular Lp robust Bellman operator is equivalent to a reward-regularized (non-robust) Bellman operator, that is,

(T^π_{U^s_p} v)(s) = −(α_s + γβ_s κ_q(v)) ∥π(·|s)∥_q + Σ_a π(a|s)[R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)],

where κ_p is defined in (7) and ∥π(·|s)∥_q is the q-norm of the vector π(·|s) ∈ Δ_A.

Proof. From the definition of the robust Bellman operator and U^s_p = (R_0 + R) × (P_0 + P), we have

(T^π_{U^s_p} v)(s) = min_{R,P ∈ U^s_p} Σ_a π(a|s)[R(s, a) + γ Σ_{s′} P(s′|s, a) v(s′)]
= Σ_a π(a|s)[R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)]   (nominal values)
+ min_{p∈P, r∈R} Σ_a π(a|s)[r(s, a) + γ Σ_{s′} p(s′|s, a) v(s′)]   (from s-rectangularity)
= Σ_a π(a|s)[R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)] + min_{p_s∈P_s, r_s∈R_s} Σ_a π(a|s)[r_s(a) + γ Σ_{s′} p_s(s′|a) v(s′)],   (89)

where the last minimum is denoted Ω_s(π_s, v) and we
write π_s(a) := π(a|s) as shorthand. Now we calculate the regularizer function as follows:

Ω_s(π_s, v) := min_{r_s∈R_s, p_s∈P_s} ⟨r_s + γ v^T p_s, π_s⟩
= min_{r_s∈R_s} ⟨r_s, π_s⟩ + γ min_{p_s∈P_s} v^T p_s π_s
= −α_s ∥π_s∥_q + γ min_{p_s∈P_s} Σ_a π_s(a) ⟨p_{s,a}, v⟩   (using 1/p + 1/q = 1)
= −α_s ∥π_s∥_q + γ min_{Σ_a (β_{s,a})^p ≤ (β_s)^p} Σ_a π_s(a) min_{∥p_{s,a}∥_p ≤ β_{s,a}, Σ_{s′} p_{s,a}(s′)=0} ⟨p_{s,a}, v⟩
= −α_s ∥π_s∥_q + γ min_{Σ_a (β_{s,a})^p ≤ (β_s)^p} Σ_a π_s(a)(−β_{s,a} κ_q(v))   (from Lemma 1)
= −α_s ∥π_s∥_q − γ κ_q(v) max_{Σ_a (β_{s,a})^p ≤ (β_s)^p} Σ_a π_s(a) β_{s,a}
= −α_s ∥π_s∥_q − γ κ_q(v) β_s ∥π_s∥_q   (using Hölder's inequality)
= −(α_s + γβ_s κ_q(v)) ∥π_s∥_q.   (90)

Now, putting the above value into the robust operator, we have

(T^π_{U^s_p} v)(s) = −(α_s + γβ_s κ_q(v)) ∥π(·|s)∥_q + Σ_a π(a|s)[R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)].
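A matching sketch of the s-rectangular policy evaluation of Theorem 9, under the same assumed arrays plus a policy matrix pi[s,a]:

import numpy as np

def s_robust_policy_bellman(v, pi, R0, P0, alpha_s, beta_s, gamma, q):
    # One application of T^pi for U^s_p, cf. Theorem 9.
    kappa = p_variance(v, q)
    Q = R0 + gamma * P0 @ v  # nominal Q-values, shape (S, A)
    penalty = (alpha_s + gamma * beta_s * kappa) * np.linalg.norm(pi, ord=q, axis=1)
    return np.sum(pi * Q, axis=1) - penalty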
K.3 s-rectangular robust policy improvement

Reusing the robust policy evaluation results of Section K.2, we have

(T^∗_{U^s_p} v)(s) = max_{π_s∈Δ_A} min_{R,P ∈ U^s_p} Σ_a π_s(a)[R(s, a) + γ Σ_{s′} P(s′|s, a) v(s′)]
= max_{π_s∈Δ_A} [−(α_s + γβ_s κ_q(v)) ∥π_s∥_q + Σ_a π_s(a)(R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′))].   (91)

Observe that this has the following form:

(T^∗_{U^s_p} v)(s) = max_c −α∥c∥_q + ⟨c, b⟩ such that Σ_{i=1}^A c_i = 1, c ⪰ 0,   (92)

where α = α_s + γβ_s κ_q(v) and b_i = R_0(s, a_i) + γ Σ_{s′} P_0(s′|s, a_i) v(s′). Now all the results below follow from the water-pouring lemma (Lemma 2).

Theorem 10. (Policy improvement) The optimal robust Bellman operator can be evaluated in the following ways.

1. (T^∗_{U^s_p} v)(s) is the solution of the following equation, which can be found by binary search on [max_a Q(s, a) − σ, max_a Q(s, a)]:

Σ_a (Q(s, a) − x)^p 1(Q(s, a) ≥ x) = σ^p.   (93)

2. (T^∗_{U^s_p} v)(s) and χ_p(v, s) can also be computed through Algorithm 3.

Here σ = α_s + γβ_s κ_q(v) and Q(s, a) = R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′).

Proof. The first part follows from Lemma 2, point 2. The second part follows from Lemma 2, point 9 (greedy inclusion) and point 10 (stopping condition).
Theorem 11. (Go To Policy) The greedy policy π w.r.t. a value function v, defined by T^∗_{U^s_p} v = T^π_{U^s_p} v, is a threshold policy. It takes only those actions that have positive advantage, with probability proportional to the (p − 1)-th power of the advantage. That is,

π(a|s) ∝ (A(s, a))^{p−1} 1(A(s, a) ≥ 0),

where A(s, a) = R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′) − (T^∗_{U^s_p} v)(s).

Proof. Follows from Lemma 2, point 3.

Property 4. χ_p(v, s) is the number of actions that have positive advantage, that is,

χ_p(v, s) = |{a | (T^∗_{U^s_p} v)(s) ≤ R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′)}|.

Proof. Follows from Lemma 2, point 4.

Property 5. (Value vs Q-value) (T^∗_{U^s_p} v)(s) is bounded by the Q-values of the χ-th and (χ + 1)-th actions. That is,

Q(s, a_{χ+1}) < (T^∗_{U^s_p} v)(s) ≤ Q(s, a_χ),

where χ = χ_p(v, s), Q(s, a) = R_0(s, a) + γ Σ_{s′} P_0(s′|s, a) v(s′), and Q(s, a_1) ≥ Q(s, a_2) ≥ · · · ≥ Q(s, a_A).

Proof. Follows from Lemma 2, point 7.
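By (92), the improvement step at a state s is exactly the water-filling problem of Lemma 2 with b = Q(s, ·) and α = σ, so the water_filling sketch from Appendix J already computes both the value in Theorem 10 and the threshold policy of Theorem 11 (the numbers below are illustrative):

import numpy as np

Q_s = np.array([1.0, 0.7, 0.2, -0.5])  # Q(s, .) for one state
sigma_s, p = 0.4, 2.0                  # sigma = alpha_s + gamma*beta_s*kappa_q(v)
value_s, pi_s = water_filling(Q_s, sigma_s, p)  # sketch from Appendix J
# value_s is (T* v)(s); pi_s[a] proportional to max(Q_s[a] - value_s, 0)^(p-1).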
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' value function v and uncertainty set Us 1, can be computed directly using χ1(s) without calculating advantage function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' That is π1(as i|s) = 1(i ≤ χ1(s)) χ1(s) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Follows from Theorem 11 by putting p = 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Note that it can be directly obtained using L1 water pouring lemma (see section J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content='1) Corollary 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' (For p = ∞) The optimal policy π w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' value function v and uncertainty set Us ∞ (precisely T ∗ Us ∞v = T π Us ∞v), is to play the best response, that is π(a|s) = 1(a ∈ arg maxa Q(s, a)) �� arg maxa Q(s, a) �� .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' In case of tie in the best response, it is optimal to play any of the best responses with any probability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Follows from Theorem 11 by taking limit p → ∞.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Corollary 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' For p = ∞, T ∗ Us pv, the robust optimal Bellman operator evaluation can be obtained in closed form.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' That is (T ∗ Us ∞v)(s) = max a Q(s, a) − σ, where σ = αs + γβsκ1(v), Q(s, a) = R0(s, a) + γ � s′ P0(s′|s, a)v(s′).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Proof.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/DtFRT4oBgHgl3EQfxjhb/content/2301.13642v1.pdf'} +page_content=' Let π be such that T ∗ Us ∞v = T π Us ∞v.' 
This implies
$$(T^*_{\mathcal{U}^s_p}v)(s) = \min_{(R,P)\in\mathcal{U}^s_p}\sum_a \pi(a|s)\Big[R(s,a) + \gamma\sum_{s'}P(s'|s,a)v(s')\Big] = -\big(\alpha_s + \gamma\beta_s\kappa_q(v)\big)\,\|\pi(\cdot|s)\|_q + \sum_a \pi(a|s)\Big(R_0(s,a) + \gamma\sum_{s'}P_0(s'|s,a)v(s')\Big). \tag{94}$$
For $p = \infty$ we have $q = 1$, so $\|\pi(\cdot|s)\|_1 = 1$ and the penalty is exactly $\sigma = \alpha_s + \gamma\beta_s\kappa_1(v)$. From corollary 3, we know that $\pi$ is the deterministic best-response policy; substituting it, we get the desired result.

There is another way of proving this, using Theorem 3 and taking the limit $p \to \infty$ carefully:
$$\lim_{p\to\infty}\Big(\sum_a \big(Q(s,a) - (T^*_{\mathcal{U}^s_p}v)(s)\big)^p\,\mathbb{1}\big(Q(s,a) \geq (T^*_{\mathcal{U}^s_p}v)(s)\big)\Big)^{\frac{1}{p}} = \sigma, \tag{95}$$
where $\sigma = \alpha_s + \gamma\beta_s\kappa_1(v)$.

Corollary 5. For $p = 1$, the robust optimal Bellman operator $T^*_{\mathcal{U}^s_1}$ can be computed in closed form. That is,
$$(T^*_{\mathcal{U}^s_1}v)(s) = \max_k \frac{\sum_{i=1}^{k} Q(s,a_i) - \sigma}{k},$$
where $\sigma = \alpha_s + \gamma\beta_s\kappa_\infty(v)$, $Q(s,a) = R_0(s,a) + \gamma\sum_{s'} P_0(s'|s,a)v(s')$, and $Q(s,a_1) \geq Q(s,a_2) \geq \cdots \geq Q(s,a_A)$.

Proof. Follows from section J.0.1.

Corollary 6. The s-rectangular $L_p$ robust Bellman operator can be evaluated for $p = 1, 2$ by algorithm 8 and algorithm 7 respectively.

Proof. It follows from algorithm 3, where we solve a linear equation for $p = 1$ and a quadratic equation for $p = 2$. For $p = 2$, it can also be found in [1].
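Corollary 5's closed form is a one-liner once the Q-values are sorted. A small NumPy sketch (illustrative naming; `sigma` is assumed precomputed as $\alpha_s + \gamma\beta_s\kappa_\infty(v)$):

```python
import numpy as np

def l1_robust_bellman(q_values: np.ndarray, sigma: float) -> float:
    """(T*v)(s) for p = 1: best average of the top-k Q-values after paying sigma."""
    q_sorted = np.sort(q_values)[::-1]             # Q(s, a_1) >= Q(s, a_2) >= ...
    k = np.arange(1, len(q_sorted) + 1)
    return np.max((np.cumsum(q_sorted) - sigma) / k)

print(l1_robust_bellman(np.array([2.0, 1.5, 1.0, 0.5]), sigma=0.6))  # 1.45, at k = 2
```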
Algorithm 7: Compute the s-rectangular $L_2$ robust optimal Bellman operator
1: Input: $\sigma = \alpha_s + \gamma\beta_s\kappa_2(v)$, $Q(s,a) = R_0(s,a) + \gamma\sum_{s'} P_0(s'|s,a)v(s')$.
2: Output: $(T^*_{\mathcal{U}^s_2}v)(s)$, $\chi_2(v,s)$.
3: Sort $Q(s,\cdot)$ and label actions such that $Q(s,a_1) \geq Q(s,a_2) \geq \cdots$
4: Set initial value guess $\lambda_1 = Q(s,a_1) - \sigma$ and counter $k = 1$.
5: while $k \leq A-1$ and $\lambda_k \leq Q(s,a_{k+1})$ do
6:   Increment counter: $k = k + 1$.
7:   Update value estimate: $\lambda_k = \frac{1}{k}\Bigg[\sum_{i=1}^{k} Q(s,a_i) - \sqrt{k\sigma^2 + \Big(\sum_{i=1}^{k} Q(s,a_i)\Big)^2 - k\sum_{i=1}^{k} \big(Q(s,a_i)\big)^2}\Bigg]$
8: end while
9: Return: $\lambda_k$, $k$.

Algorithm 8: Compute the s-rectangular $L_1$ robust optimal Bellman operator
1: Input: $\sigma = \alpha_s + \gamma\beta_s\kappa_\infty(v)$, $Q(s,a) = R_0(s,a) + \gamma\sum_{s'} P_0(s'|s,a)v(s')$.
2: Output: $(T^*_{\mathcal{U}^s_1}v)(s)$, $\chi_1(v,s)$.
3: Sort $Q(s,\cdot)$ and label actions such that $Q(s,a_1) \geq Q(s,a_2) \geq \cdots$
4: Set initial value guess $\lambda_1 = Q(s,a_1) - \sigma$ and counter $k = 1$.
5: while $k \leq A-1$ and $\lambda_k \leq Q(s,a_{k+1})$ do
6:   Increment counter: $k = k + 1$.
7:   Update value estimate: $\lambda_k = \frac{1}{k}\Big(\sum_{i=1}^{k} Q(s,a_i) - \sigma\Big)$
8: end while
9: Return: $\lambda_k$, $k$.
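A direct NumPy port of Algorithm 8 might look as follows (our sketch, not the authors' code, with the $L_2$ variant of step 7 noted in a comment); it assumes $\sigma$ and the Q-row for the state are already computed. On the earlier example it stops at $k = 2$, consistent with Property 5: $\lambda_2 = 1.45 \leq Q(s,a_2) = 1.5$ while $\lambda_2 > Q(s,a_3) = 1.0$.

```python
import numpy as np

def s_rect_l1_bellman(q_row: np.ndarray, sigma: float):
    """Algorithm 8: returns ((T*v)(s), chi_1(v, s)) for one state."""
    q = np.sort(q_row)[::-1]               # step 3: Q(s,a_1) >= Q(s,a_2) >= ...
    A = len(q)
    k = 1
    lam = q[0] - sigma                      # step 4: initial guess
    while k <= A - 1 and lam <= q[k]:       # step 5: q[k] is Q(s, a_{k+1})
        k += 1                              # step 6
        lam = (q[:k].sum() - sigma) / k     # step 7 (L2 case: instead solve the
                                            # quadratic sum_{i<=k}(Q_i - lam)^2 = sigma^2)
    return lam, k

print(s_rect_l1_bellman(np.array([2.0, 1.5, 1.0, 0.5]), sigma=0.6))  # (1.45, 2)
```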
L Time Complexity

In this section, we discuss the time complexity of the various robust MDPs and compare it with that of non-robust MDPs. For robust MDPs, we assume knowledge of the nominal transition kernel and the nominal reward function; for non-robust MDPs, we assume knowledge of the transition kernel and the reward function. We divide the discussion into parts according to their similarity.

L.1 Exact Value Iteration: Best Response

In this section, we discuss non-robust MDPs, (sa)-rectangular $L_1/L_2/L_\infty$ robust MDPs, and s-rectangular $L_\infty$ robust MDPs. They all share a common template for value iteration: for the value function $v$, their Bellman operator $T$ is evaluated as
$$(Tv)(s) = \underbrace{\max_a}_{\text{action cost}}\Big[\underbrace{R(s,a) - \alpha_{s,a}\,\kappa(v)}_{\text{reward penalty/cost}} + \underbrace{\gamma\sum_{s'} P(s'|s,a)v(s')}_{\text{sweep}}\Big]. \tag{96}$$
The 'sweep' requires $O(S)$ iterations and the 'action cost' requires $O(A)$ iterations. Note that the reward penalty $\kappa(v)$ does not depend on the state or the action; it is computed only once per value iteration, for all states. The above value update has to be done for each state, so one full update requires
$$O\big(S \cdot (\text{action cost}) \cdot (\text{sweep cost}) + \text{reward cost}\big) = O\big(S^2A + \text{reward cost}\big).$$
Since value iteration is a contraction map, getting $\epsilon$-close to the optimal value requires $O(\log(\frac{1}{\epsilon}))$ full value updates, so the complexity is
$$O\Big(\log\big(\tfrac{1}{\epsilon}\big)\big(S^2A + \text{reward cost}\big)\Big).$$

1. Non-robust MDPs: The reward cost is zero, as there is no regularizer to compute. The total complexity is $O\big(\log(\frac{1}{\epsilon})(S^2A + 0)\big) = O\big(\log(\frac{1}{\epsilon})S^2A\big)$.

2. (sa)-rectangular $L_1/L_2/L_\infty$ and s-rectangular $L_\infty$ robust MDPs: We need to compute the reward penalty ($\kappa_1(v)/\kappa_2(v)/\kappa_\infty(v)$), which takes $O(S)$ iterations, since the mean, variance, and median can all be computed in linear time. Hence the complexity is $O\big(\log(\frac{1}{\epsilon})(S^2A + S)\big) = O\big(\log(\frac{1}{\epsilon})S^2A\big)$.
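As a concrete instance of the template (96), a best-response robust value iteration loop might be sketched as below. This is our illustration only: `kappa` stands in for the paper's penalty measure (here a half-span, used purely as a placeholder), and the radii `alpha` fold the whole penalty coefficient into one array.

```python
import numpy as np

def best_response_vi(R, P, gamma, kappa, alpha, n_iters=200):
    """Template (96): value iteration with a once-per-sweep reward penalty.
    R: (S, A) nominal rewards; P: (S, A, S) nominal kernel;
    kappa: callable v -> scalar penalty measure; alpha: (S, A) radii."""
    S, A = R.shape
    v = np.zeros(S)
    for _ in range(n_iters):                   # O(log(1/eps)) sweeps in theory
        k = kappa(v)                           # reward cost: computed once per sweep
        q = R - alpha * k + gamma * P @ v      # (S, A): penalty + sweep, O(S^2 A)
        v = q.max(axis=1)                      # action cost: best response
    return v

# Toy example: 2 states, 2 actions, half-span as a stand-in penalty measure.
R = np.array([[1.0, 0.5], [0.2, 0.8]])
P = np.full((2, 2, 2), 0.5)
v = best_response_vi(R, P, gamma=0.9, alpha=np.full((2, 2), 0.1),
                     kappa=lambda v: (v.max() - v.min()) / 2)
```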
L.2 Exact Value Iteration: Top-$k$ Response

In this section, we discuss the time complexity of s-rectangular $L_1/L_2$ robust MDPs as in algorithm 5. We need to compute the reward penalty ($\kappa_\infty(v)/\kappa_2(v)$ in (40)), which takes $O(S)$ iterations. Then for each state we do: sorting of the Q-values in (45), value evaluation in (46), and the Q-value update in (44), which take $O(A\log(A))$, $O(A)$, and $O(SA)$ iterations respectively. Hence the complexity is
$$\text{total iterations}\cdot\big(\text{reward cost (40)} + S(\text{sorting (45)} + \text{value evaluation (46)} + \text{Q-value (44)})\big) = \log\big(\tfrac{1}{\epsilon}\big)\big(S + S(A\log(A) + A + SA)\big) = O\Big(\log\big(\tfrac{1}{\epsilon}\big)\big(S^2A + SA\log(A)\big)\Big).$$

For general $p$, a little caution is needed, since the penalty $\kappa_q(v)$ cannot be computed exactly, only approximately by binary search. This is the subject of the next sections.

L.3 Inexact Value Iteration: sa-rectangular $L_p$ robust MDPs ($\mathcal{U}^{sa}_p$)

In this section, we study the time complexity of robust value iteration for (sa)-rectangular $L_p$ robust MDPs for general $p$. Recall that value iteration takes the best penalized action, which is easy to compute; but the reward penalization depends on the $q$-variance measure $\kappa_q(v)$ ($q$ the Hölder conjugate of $p$), which we estimate by $\hat{\kappa}_q(v)$ through binary search. We have the inexact value iteration
$$v_{n+1}(s) := \max_{a \in \mathcal{A}}\Big[-\alpha_{sa} - \gamma\beta_{sa}\hat{\kappa}_q(v_n) + R_0(s,a) + \gamma\sum_{s'} P_0(s'|s,a)v_n(s')\Big],$$
where $\hat{\kappa}_q(v_n)$ is an $\epsilon_1$-approximation of $\kappa_q(v_n)$, that is, $|\hat{\kappa}_q(v_n) - \kappa_q(v_n)| \leq \epsilon_1$. Then it is easy to see that the error in the robust value iteration is bounded:
$$\|v_{n+1} - T^*_{\mathcal{U}^{sa}_p}v_n\|_\infty \leq \gamma\beta_{\max}\epsilon_1, \quad \text{where } \beta_{\max} := \max_{s,a}\beta_{s,a}.$$

Proposition 7. Let $T^*_{\mathcal{U}}$ be a $\gamma$-contraction map and let $v^*$ be its fixed point. Let $\{v_n, n \geq 0\}$ be an approximate value iteration, that is, $\|v_{n+1} - T^*_{\mathcal{U}}v_n\|_\infty \leq \epsilon$. Then
$$\lim_{n\to\infty}\|v_n - v^*\|_\infty \leq \frac{\epsilon}{1-\gamma};$$
moreover, the iterates converge linearly to the ball of radius $\frac{\epsilon}{1-\gamma}$, that is,
$$\|v_n - v^*\|_\infty - \frac{\epsilon}{1-\gamma} \leq c\gamma^n, \quad \text{where } c = \frac{\epsilon}{1-\gamma} + \|v_0 - v^*\|_\infty.$$

Proof.
$$\begin{aligned} \|v_{n+1} - v^*\|_\infty &= \|v_{n+1} - T^*_{\mathcal{U}}v^*\|_\infty \\ &= \|v_{n+1} - T^*_{\mathcal{U}}v_n + T^*_{\mathcal{U}}v_n - T^*_{\mathcal{U}}v^*\|_\infty \\ &\leq \|v_{n+1} - T^*_{\mathcal{U}}v_n\|_\infty + \|T^*_{\mathcal{U}}v_n - T^*_{\mathcal{U}}v^*\|_\infty \\ &\leq \|v_{n+1} - T^*_{\mathcal{U}}v_n\|_\infty + \gamma\|v_n - v^*\|_\infty \quad \text{(contraction)} \\ &\leq \epsilon + \gamma\|v_n - v^*\|_\infty. \quad \text{(approximate value iteration)} \end{aligned}$$
Unrolling this recursion,
$$\|v_n - v^*\|_\infty \leq \sum_{k=0}^{n-1}\gamma^k\epsilon + \gamma^n\|v_0 - v^*\|_\infty = \frac{1-\gamma^n}{1-\gamma}\epsilon + \gamma^n\|v_0 - v^*\|_\infty \leq \gamma^n\Big[\frac{\epsilon}{1-\gamma} + \|v_0 - v^*\|_\infty\Big] + \frac{\epsilon}{1-\gamma}. \tag{97}$$
Taking the limit $n \to \infty$ on both sides, we get $\lim_{n\to\infty}\|v_n - v^*\|_\infty \leq \frac{\epsilon}{1-\gamma}$.
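Proposition 7 is easy to sanity-check numerically. The toy scalar contraction below (our sketch) injects noise of magnitude at most $\epsilon$ into each exact update and confirms that the iterates settle inside the $\frac{\epsilon}{1-\gamma}$ ball around the fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, eps = 0.9, 1e-3
T = lambda v: 1.0 + gamma * v          # toy gamma-contraction, fixed point 1/(1-gamma)
v_star = 1.0 / (1.0 - gamma)

v = 0.0
for n in range(200):
    v = T(v) + rng.uniform(-eps, eps)  # approximate value iteration, error <= eps

print(abs(v - v_star) <= eps / (1.0 - gamma))  # True: inside the eps/(1-gamma) ball
```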
Lemma 3. For $\mathcal{U}^{sa}_p$, the total iteration cost to get $\epsilon$-close to the optimal robust value function is
$$O\Big(\log\big(\tfrac{1}{\epsilon}\big)S^2A + S\big(\log\big(\tfrac{1}{\epsilon}\big)\big)^2\Big).$$

Proof. We compute $\kappa_q(v)$ to tolerance $\epsilon_1 = \frac{(1-\gamma)\epsilon}{3}$, which takes $O(S\log(\frac{S}{\epsilon_1}))$ iterations using binary search (see section I.2). Now we run the approximate value iteration for $n = \log(\frac{3\|v_0 - v^*\|_\infty}{\epsilon})$ steps. Using proposition 7, we have
$$\|v_n - v^*_{\mathcal{U}^{sa}_p}\|_\infty \leq \gamma^n\Big[\frac{\epsilon_1}{1-\gamma} + \|v_0 - v^*_{\mathcal{U}^{sa}_p}\|_\infty\Big] + \frac{\epsilon_1}{1-\gamma} \leq \gamma^n\Big[\frac{\epsilon}{3} + \|v_0 - v^*_{\mathcal{U}^{sa}_p}\|_\infty\Big] + \frac{\epsilon}{3} \leq \gamma^n\frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} \leq \epsilon. \tag{98}$$

In summary, we have action cost $O(A)$, reward cost $O(S\log(\frac{S}{\epsilon}))$, sweep cost $O(S)$, and a total of $O(\log(\frac{1}{\epsilon}))$ iterations. So the complexity is
$$(\text{number of iterations})\big(S(\text{action cost})(\text{sweep cost}) + \text{reward cost}\big) = \log\big(\tfrac{1}{\epsilon}\big)\Big(S^2A + S\log\big(\tfrac{S}{\epsilon}\big)\Big) = \log\big(\tfrac{1}{\epsilon}\big)\Big(S^2A + S\log\big(\tfrac{1}{\epsilon}\big) + S\log(S)\Big) = O\Big(\log\big(\tfrac{1}{\epsilon}\big)S^2A + S\big(\log\big(\tfrac{1}{\epsilon}\big)\big)^2\Big).$$
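The binary search for the penalty itself lives in section I.2 and is not reproduced here. Under the assumption that $\kappa_q(v) = \min_{\omega}\|v - \omega\mathbb{1}\|_q$ (the $q$-variance, minimized over a scalar shift $\omega$, which makes the objective convex in $\omega$), a generic sketch could look like this; the bracketing interval, tolerance handling, and names are ours.

```python
import numpy as np

def q_variance(v: np.ndarray, q: float, tol: float = 1e-6) -> float:
    """Approximate kappa_q(v) = min_w ||v - w*1||_q by ternary search,
    assuming that definition; a minimizer lies in [min(v), max(v)]."""
    f = lambda w: np.linalg.norm(v - w, ord=q)   # convex in the scalar w
    lo, hi = v.min(), v.max()
    while hi - lo > tol:                         # O(log(range/tol)) iterations,
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) <= f(m2):                       # each costing O(S) to evaluate f
            hi = m2
        else:
            lo = m1
    return f((lo + hi) / 2)

print(q_variance(np.array([0.0, 1.0, 4.0]), q=2.0))  # ~2.944, attained at the mean
```

Each evaluation of $f$ is $O(S)$ and there are $O(\log(\frac{1}{\epsilon_1}))$ search steps, matching the $O(S\log(\frac{S}{\epsilon_1}))$ reward cost claimed above.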
L.4 Inexact Value Iteration: s-rectangular $L_p$ robust MDPs

In this section, we study the time complexity of robust value iteration for s-rectangular $L_p$ robust MDPs for general $p$ (algorithm 4). Recall that value iteration takes regularized actions and a penalized reward. The reward penalization depends on the $q$-variance measure $\kappa_q(v)$, which we estimate by $\hat{\kappa}_q(v)$ through binary search; we then evaluate $T^*_{\mathcal{U}^s_p}$ by a second binary search using the approximated $\kappa_q(v)$. Hence there are two error sources here ((40), (46)), in contrast to the (sa)-rectangular case, where the only error source was the estimation of $\kappa_q$. First, we account for the error caused by the first source ($\kappa_q$); that is, we do value iteration with the approximated $q$-variance $\hat{\kappa}_q$ and the exact action regularizer. We have
$$v_{n+1}(s) := \lambda \quad \text{s.t.} \quad \alpha_s + \gamma\beta_s\hat{\kappa}_q(v_n) = \Big(\sum_{Q(s,a) \geq \lambda}\big(Q(s,a) - \lambda\big)^p\Big)^{\frac{1}{p}},$$
where $Q(s,a) = R_0(s,a) + \gamma\sum_{s'} P_0(s'|s,a)v_n(s')$ and $|\hat{\kappa}_q(v_n) - \kappa_q(v_n)| \leq \epsilon_1$. Then, from the next result (proposition 8), we get
$$\|v_{n+1} - T^*_{\mathcal{U}^s_p}v_n\|_\infty \leq \gamma\beta_{\max}\epsilon_1, \quad \text{where } \beta_{\max} := \max_s \beta_s.$$

Proposition 8. Let $\hat{\kappa}$ be an $\epsilon$-approximation of $\kappa$, that is, $|\hat{\kappa} - \kappa| \leq \epsilon$, and let $b \in \mathbb{R}^A$ be sorted component-wise, that is, $b_1 \geq \cdots \geq b_A$. Let $\lambda$ be the solution of the following equation with the exact parameter $\kappa$,
$$\alpha + \gamma\beta\kappa = \Big(\sum_{b_i \geq \lambda}|b_i - \lambda|^p\Big)^{\frac{1}{p}},$$
and let $\hat{\lambda}$ be the solution of the same equation with the approximated parameter $\hat{\kappa}$,
$$\alpha + \gamma\beta\hat{\kappa} = \Big(\sum_{b_i \geq \hat{\lambda}}|b_i - \hat{\lambda}|^p\Big)^{\frac{1}{p}}.$$
Then $\hat{\lambda}$ is an $O(\epsilon)$-approximation of $\lambda$, that is, $|\lambda - \hat{\lambda}| \leq \gamma\beta\epsilon$.

Proof. Define the function $f : [b_A, b_1] \to \mathbb{R}$ by $f(x) := \big(\sum_{b_i \geq x}|b_i - x|^p\big)^{\frac{1}{p}}$. We will show that the derivative of $f$ is bounded away from zero, implying that its inverse has bounded derivative and hence is Lipschitz, which proves the claim. We proceed:
$$\frac{df(x)}{dx} = -\Big(\sum_{b_i \geq x}|b_i - x|^p\Big)^{\frac{1}{p}-1}\sum_{b_i \geq x}|b_i - x|^{p-1} = -\frac{\sum_{b_i \geq x}|b_i - x|^{p-1}}{\big(\sum_{b_i \geq x}|b_i - x|^p\big)^{\frac{p-1}{p}}} = -\Bigg(\frac{\big(\sum_{b_i \geq x}|b_i - x|^{p-1}\big)^{\frac{1}{p-1}}}{\big(\sum_{b_i \geq x}|b_i - x|^p\big)^{\frac{1}{p}}}\Bigg)^{p-1} \leq -1. \tag{99}$$
The inequality follows from the relation between $L_p$ norms: $\|x\|_a \geq \|x\|_b$ for all $0 \leq a \leq b$. It is easy to see that $f$ is strictly monotone on $[b_A, b_1]$, so its inverse $f^{-1}$ is well defined on the corresponding range, and its derivative is bounded: $0 \geq \frac{d}{dx}f^{-1}(x) \geq -1$. Now observe that $\lambda = f^{-1}(\alpha + \gamma\beta\kappa)$ and $\hat{\lambda} = f^{-1}(\alpha + \gamma\beta\hat{\kappa})$; then, by Lipschitzness, we have
$$|\lambda - \hat{\lambda}| = \big|f^{-1}(\alpha + \gamma\beta\kappa) - f^{-1}(\alpha + \gamma\beta\hat{\kappa})\big| \leq \gamma\beta|\kappa - \hat{\kappa}| \leq \gamma\beta\epsilon.$$
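Since $f$ is strictly decreasing, the defining equation for $\lambda$ can be solved by plain binary search. A sketch under our naming (`budget` stands for $\alpha_s + \gamma\beta_s\hat{\kappa}_q(v)$); the bracket is chosen so that $f(\text{lo}) \geq \text{budget} \geq f(\text{hi})$, since the top action alone already contributes `budget` at the lower end:

```python
import numpy as np

def solve_lambda(q_row: np.ndarray, budget: float, p: float, tol: float = 1e-8) -> float:
    """Solve budget = (sum_{Q >= lam} (Q - lam)^p)^(1/p) for lam by binary search."""
    f = lambda lam: (np.clip(q_row - lam, 0.0, None) ** p).sum() ** (1.0 / p)
    hi = q_row.max()                      # f(hi) = 0
    lo = q_row.max() - budget             # f(lo) >= budget
    while hi - lo > tol:                  # O(log(1/tol)) iterations, O(A) each
        mid = 0.5 * (lo + hi)
        if f(mid) > budget:               # f decreasing: solution lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Matches Algorithm 8's output on the earlier example (p = 1, sigma = 0.6): ~1.45
print(solve_lambda(np.array([2.0, 1.5, 1.0, 0.5]), budget=0.6, p=1.0))
```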
Lemma 4. For $\mathcal{U}^s_p$, the total iteration cost to get $\epsilon$-close to the optimal robust value function is
$$O\Big(\log\big(\tfrac{1}{\epsilon}\big)\Big(S^2A + SA\log\big(\tfrac{A}{\epsilon}\big)\Big)\Big).$$

Proof. We compute $\kappa_q(v)$ in (40) to tolerance $\epsilon_1 = \frac{(1-\gamma)\epsilon}{6}$, which takes $O(S\log(\frac{S}{\epsilon_1}))$ iterations using binary search (see section I.2). Then, for every state, we sort the Q-values (as in (45)), which costs $O(A\log(A))$ iterations. In each state, to update the value, we again do binary search with the approximate $\kappa_q(v)$, up to tolerance $\epsilon_2 := \frac{(1-\gamma)\epsilon}{6}$, which takes $O(\log(\frac{1}{\epsilon_2}))$ search iterations, each costing $O(A)$; altogether $O(A\log(\frac{1}{\epsilon_2}))$ iterations. Sorting of actions and binary search add up to $O(A\log(\frac{A}{\epsilon}))$ iterations (action cost). So we have the (doubly) approximated value iteration
$$|v_{n+1}(s) - \hat{\lambda}| \leq \epsilon_1, \tag{100}$$
where
$$\big(\alpha_s + \gamma\beta_s\hat{\kappa}_q(v_n)\big)^p = \sum_{Q_n(s,a) \geq \hat{\lambda}}\big(Q_n(s,a) - \hat{\lambda}\big)^p, \quad Q_n(s,a) = R_0(s,a) + \gamma\sum_{s'} P_0(s'|s,a)v_n(s'), \quad |\hat{\kappa}_q(v_n) - \kappa_q(v_n)| \leq \epsilon_1.$$
We run this approximate value iteration for $n = \log(\frac{3\|v_0 - v^*\|_\infty}{\epsilon})$ steps. Now we do the error analysis. Accumulating the errors, we have
$$|v_{n+1}(s) - (T^*_{\mathcal{U}^s_p}v_n)(s)| \leq |v_{n+1}(s) - \hat{\lambda}| + |\hat{\lambda} - (T^*_{\mathcal{U}^s_p}v_n)(s)| \leq \epsilon_1 + |\hat{\lambda} - (T^*_{\mathcal{U}^s_p}v_n)(s)| \leq \epsilon_1 + \gamma\beta_{\max}\epsilon_1 \leq 2\epsilon_1, \tag{101}$$
where the second inequality is by definition, the third is from proposition 8, and the last uses $\beta_{\max} := \max_s \beta_s$ and $\gamma \leq 1$. From proposition 7, the approximate value iteration then satisfies
$$\|v_n - v^*_{\mathcal{U}^s_p}\|_\infty \leq \frac{2\epsilon_1}{1-\gamma} + \gamma^n\Big[\frac{2\epsilon_1}{1-\gamma} + \|v_0 - v^*_{\mathcal{U}^s_p}\|_\infty\Big]. \tag{102}$$
Substituting the value of $n$, we have
$$\|v_n - v^*_{\mathcal{U}^s_p}\|_\infty \leq \gamma^n\Big[\frac{2\epsilon_1}{1-\gamma} + \|v_0 - v^*_{\mathcal{U}^s_p}\|_\infty\Big] + \frac{2\epsilon_1}{1-\gamma} \leq \gamma^n\Big[\frac{\epsilon}{3} + \|v_0 - v^*_{\mathcal{U}^s_p}\|_\infty\Big] + \frac{\epsilon}{3} \leq \gamma^n\frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3} \leq \epsilon. \tag{103}$$
To summarize, we perform $O(\log(\frac{1}{\epsilon}))$ full value iterations. The cost of evaluating the reward penalty is $O(S\log(\frac{S}{\epsilon}))$. For each state: evaluating the Q-values from the value function requires $O(SA)$ iterations, sorting the actions according to their Q-values requires $O(A\log(A))$ iterations, and the binary search for the value evaluation requires $O(A\log(\frac{1}{\epsilon}))$ iterations. So the complexity is
$$\begin{aligned} &O\Big((\text{total iterations})\big(\text{reward cost} + S(\text{Q-value} + \text{sorting} + \text{binary search for value})\big)\Big) \\ &= O\Big(\log\big(\tfrac{1}{\epsilon}\big)\Big(S\log\big(\tfrac{S}{\epsilon}\big) + S\big(SA + A\log(A) + A\log\big(\tfrac{1}{\epsilon}\big)\big)\Big)\Big) \\ &= O\Big(\log\big(\tfrac{1}{\epsilon}\big)\Big(S\log\big(\tfrac{1}{\epsilon}\big) + S\log(S) + S^2A + SA\log(A) + SA\log\big(\tfrac{1}{\epsilon}\big)\Big)\Big) \\ &= O\Big(\log\big(\tfrac{1}{\epsilon}\big)\Big(S^2A + SA\log(A) + SA\log\big(\tfrac{1}{\epsilon}\big)\Big)\Big) \\ &= O\Big(\log\big(\tfrac{1}{\epsilon}\big)\Big(S^2A + SA\log\big(\tfrac{A}{\epsilon}\big)\Big)\Big). \end{aligned}$$