Dataset Viewer
Auto-converted to Parquet

Columns:
- chunk_id (int64): 0 – 448
- chunk_text (string): lengths 1 – 10.8k characters
- chunk_text_tokens (int64): 1 – 2.01k
- serialized_text (string): the chunk text prefixed with the paper title and section headings; lengths 1 – 11.1k characters
- serialized_text_tokens (int64): 1 – 2.02k

Preview rows from "Stochastic Mirror Descent for Large-Scale Sparse Recovery". Each row is listed as: chunk_id · section path (taken from serialized_text) · chunk_text_tokens / serialized_text_tokens, followed by the chunk text; truncated chunks end with an ellipsis.
chunk 0 · Abstract · 215 / 229 tokens
In this paper we discuss an application of Stochastic Approximation to statistical estimation of high-dimensional sparse parameters. The proposed solution reduces to resolving a penalized stochastic optimization problem on each stage of a multistage algorithm, each problem being solved to a prescribed accuracy by the n…
chunk 1 · 1 Introduction · 1,715 / 1,730 tokens
Our original motivation is the well-known problem of (generalized) linear high-dimensional regression with random design. Formally, consider a dataset of $N$ points $(\phi_i,\eta_i)$, $i\in\{1,\ldots,N\}$, where $\phi_i\in\mathbf{R}^n$ …
chunk 2 · 1 Introduction, Existing approaches and related works · 1,505 / 1,526 tokens
Sparse recovery by Lasso and Dantzig Selector has been extensively studied [11, 8, 5, 46, 10, 9]. The Lasso computes a solution $\widehat{x}_N$ to the $\ell_1$-penalized problem $\min_x \widehat{g}_N(x)+\lambda\|x\|_1$, where $\lambda$ …
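A minimal sketch of the $\ell_1$-penalized baseline described in chunk 2, on synthetic data. Using scikit-learn's `Lasso` and these particular parameter values is an illustrative assumption, not the paper's method:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, N, s, sigma = 500, 200, 10, 0.1            # dimension, samples, sparsity, noise level

x_star = np.zeros(n)                          # s-sparse ground truth
x_star[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

Phi = rng.standard_normal((N, n))             # random design
eta = Phi @ x_star + sigma * rng.standard_normal(N)

# sklearn minimizes (1/(2N)) * ||eta - Phi x||^2 + alpha * ||x||_1,
# with alpha playing the role of the penalty lambda above
lasso = Lasso(alpha=0.01).fit(Phi, eta)
x_hat = lasso.coef_

print("l1 error:", np.abs(x_hat - x_star).sum())
```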
chunk 3 · 1 Introduction, Principal contributions · 311 / 329 tokens
We provide a refined analysis of Composite Stochastic Mirror Descent (CSMD) algorithms for computing sparse solutions to the Stochastic Optimization problem, leveraging smoothness of the objective. This leads to a new "aggressive" choice of parameters in a multistage algorithm with significantly improved performance compar…
chunk 4 · 1 Introduction, Organization and notation · 583 / 602 tokens
The remainder of the paper is organized as follows. In Section 2, the general problem is set up, and the multistage optimization routine and the study of its basic properties are presented. Then, in Section 3, we discuss the properties of the method and the conditions under which it leads to "small error" solutions to sparse …
chunk 5 · 2 Multistage Stochastic Mirror Descent for Sparse Stochastic Optimization · 24 / 51 tokens
This section is dedicated to the formulation of the generic stochastic optimization problem and to the description and analysis of the generic algorithm.
chunk 6 · 2.1 Problem statement · 1,321 / 1,354 tokens
Let $X$ be a closed convex subset of a Euclidean space $E$ and $(\Omega,P)$ a probability space. We consider a mapping $G:X\times\Omega\to\mathbf{R}$ such that, for all $\omega\in\Omega$, $G(\cdot,\omega)$ is convex on $X$ and smooth, meaning that $\nabla G(\cdot,\omega)$ …
chunk 7 · 2.2 Composite Stochastic Mirror Descent algorithm · 317 / 355 tokens
As mentioned in the introduction, (stochastic) optimization over the set of sparse solutions can be done through "composite" techniques. We take a similar approach here, transforming the generic problem (5) into the following composite Stochastic Optimization problem, adapted to some norm $\|\cdot\|$ and parameteriz…
chunk 8 · 2.2, Proximal setup, Bregman divergences and Proximal mapping · 393 / 448 tokens
Let $B$ be the unit ball of the norm $\|\cdot\|$ and $\theta:B\to\mathbf{R}$ be a distance-generating function (d.-g.f.) of $B$, i.e., a continuously differentiable convex function which is strongly convex with respect to the norm $\|\cdot\|$:
$$\langle\nabla\theta(x)-\nabla\theta(x'),\,x-x'\rangle\;\geq\;\|x-x'\|^2,\quad\forall x,x'\in X\ldots$$
chunk 9 · 2.2, Definition 2.1 · 443 / 504 tokens
For any $x_0\in X$, let $X_R(x_0):=\{z\in X:\|z-x_0\|\leq R\}$ be the ball of radius $R$ around $x_0$. It is equipped with the d.-g.f. $\vartheta_{x_0}^R(z):=R^2\,\theta\big((z-x_0)/R\big)$ …
chunk 10 · 2.2, Definition 2.2 · 368 / 429 tokens
Given $x_0\in X$ and $R>0$, the Bregman divergence $V$ associated to $\vartheta$ is defined by
$$V_{x_0}(x,z)=\vartheta_{x_0}^R(z)-\vartheta_{x_0}^R(x)-\langle\nabla\vartheta_{x_0}^R(x),\,z-x\rangle,\quad x,z\in X\ldots$$
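A minimal numpy sketch of the constructions in chunks 9–10, under the simplest assumption $\theta(x)=\tfrac12\|x\|_2^2$ (Euclidean setup, chosen only for illustration), in which case the Bregman divergence reduces to half the squared Euclidean distance:

```python
import numpy as np

def theta(x):                       # assumed d.-g.f.: Euclidean, strongly convex w.r.t. ||.||_2
    return 0.5 * np.dot(x, x)

def grad_theta(x):
    return x

def vartheta(z, x0, R):             # rescaled d.-g.f.  R^2 * theta((z - x0)/R)
    return R**2 * theta((z - x0) / R)

def grad_vartheta(z, x0, R):        # chain rule: R * grad_theta((z - x0)/R)
    return R * grad_theta((z - x0) / R)

def bregman(x, z, x0, R):
    """V_{x0}(x, z) = vartheta(z) - vartheta(x) - <grad vartheta(x), z - x>."""
    return (vartheta(z, x0, R) - vartheta(x, x0, R)
            - np.dot(grad_vartheta(x, x0, R), z - x))

# with the Euclidean theta, V_{x0}(x, z) equals 0.5 * ||z - x||^2 for any R
x, z, x0 = np.ones(3), np.zeros(3), np.zeros(3)
print(bregman(x, z, x0, R=2.0), 0.5 * np.linalg.norm(z - x) ** 2)
```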
chunk 11 · 2.2, Definition 2.3 · 527 / 588 tokens
The composite proximal mapping with respect to $h$ and $x$ is defined by
$$\mathrm{Prox}_{h,x_0}(\zeta,x):=\operatorname*{arg\,min}_{z\in X_R(x_0)}\big\{\langle\zeta,z\rangle+h(z)+V_{x_0}(x,z)\big\}\ldots$$
chunk 12 · 2.2, Composite Stochastic Mirror Descent algorithm · 1,249 / 1,295 tokens
Given a sequence of positive step sizes $\gamma_i>0$, the Composite Stochastic Mirror Descent (CSMD) is defined by the recursion
$$x_i=\mathrm{Prox}_{\gamma_i h,\,x_0}\big(\gamma_{i-1}\nabla G(x_{i-1},\omega_i),\,x_{i-1}\big),\quad x_0\in X\ldots$$
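A minimal sketch of the CSMD recursion in chunks 11–12, under simplifying assumptions: constant step size (so the $\gamma_i$/$\gamma_{i-1}$ distinction disappears), Euclidean Bregman divergence, $h(x)=\kappa\|x\|_1$, and the ball constraint $\|z-x_0\|\leq R$ dropped, in which case the composite prox-mapping has the closed-form soft-thresholding solution:

```python
import numpy as np

def soft_threshold(v, tau):
    """arg min_z <zeta, z> + tau*||z||_1 + 0.5*||z - x||^2, evaluated at v = x - zeta."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def csmd(grad_G, x0, kappa, gamma, omegas):
    """CSMD iterations with Euclidean divergence and h = kappa*||.||_1 (unconstrained prox)."""
    x = x0.copy()
    iterates = [x]
    for omega in omegas:
        zeta = gamma * grad_G(x, omega)                # gamma * stochastic gradient
        x = soft_threshold(x - zeta, gamma * kappa)    # composite prox step
        iterates.append(x)
    return iterates

# toy oracle: G(x, omega) = 0.5*||x - omega||^2, so nu = 1 and gamma <= 1/(4*nu) holds
rng = np.random.default_rng(1)
target = np.array([1.0, 0.0, -0.5, 0.0])
omegas = target + 0.1 * rng.standard_normal((200, 4))
xs = csmd(lambda x, w: x - w, np.zeros(4), kappa=0.05, gamma=0.1, omegas=omegas)
print(xs[-1])    # close to the soft-thresholded target
```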
chunk 13 · 2.2, Proposition 2.1 · 902 / 955 tokens
If step sizes are constant, i.e., $\gamma_i\equiv\gamma\leq(4\nu)^{-1}$, $i=0,1,\ldots$, and the initial point $x_0\in X$ is such that $x_*\in X_R(x_0)$, then for any $t\gtrsim 1+\ln m$ …
chunk 14 · 2.3 Main contribution: a multistage adaptive algorithm · 117 / 157 tokens
Our approach to finding a sparse solution to the original stochastic optimization problem (7) consists in solving a sequence of auxiliary composite problems (7), with the sequence of parameters ($\kappa$, $x_0$, $R$) defined recursively. For the latter, we need to infer the quality of the approximate solution …
chunk 15 · 2.3, Assumption [RSC] · 1,104 / 1,150 tokens
There exist some $\delta>0$ and $\rho<\infty$ such that for any feasible solution $\widehat{x}\in X$ to the composite problem (7) satisfying, with probability at least $1-\varepsilon$,
$$F_\kappa(\widehat{x})-F_\kappa(x_*)\leq\upsilon,$$
…
chunk 16 · 2.3, Theorem 2.1 · 838 / 891 tokens
Assume that the total sample budget satisfies $N\geq m_0$, so that at least one stage of the preliminary phase of Algorithm 1 is completed; then for $t\gtrsim\sqrt{\ln N}$ the approximate solution $\widehat{x}_N$ of Algorithm 1 satisfies, with prob…
chunk 17 · 2.3, Remark 2.1 · 165 / 217 tokens
Along with the oracle computation, the proximal computation implemented at each iteration of the algorithm is an important part of the computational cost of the method. It becomes even more important during the asymptotic phase, when the number of iterations per stage increases exponentially fast with the stage count, and…
chunk 18 · 3 Sparse generalized linear regression by stochastic approximation, 3.1 Problem setting · 1,064 / 1,091 tokens
We now consider again the original problem of recovery of an $s$-sparse signal $x_*\in X\subset\mathbf{R}^n$ from random observations defined by
$$\eta_i=\mathfrak{r}(\phi_i^Tx_*)+\sigma\xi_i,\quad i=1,2,\ldots,N\ldots$$
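A sketch simulating the observation model of chunk 18, $\eta_i=\mathfrak{r}(\phi_i^Tx_*)+\sigma\xi_i$. The particular link $\mathfrak{r}(t)=t+\tfrac12\tanh t$ is an assumption chosen only because it is Lipschitz ($\overline{r}=3/2$) and strongly monotone ($\underline{r}=1$), as Proposition 3.1 below requires:

```python
import numpy as np

def simulate_glr(N, n, s, sigma, rng):
    """Draw N observations eta_i = r(phi_i^T x_*) + sigma * xi_i with Gaussian design."""
    x_star = np.zeros(n)
    x_star[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

    Phi = rng.standard_normal((N, n))          # random design phi_i ~ N(0, I_n)
    r = lambda t: t + 0.5 * np.tanh(t)         # assumed Lipschitz, strongly monotone link
    eta = r(Phi @ x_star) + sigma * rng.standard_normal(N)
    return Phi, eta, x_star

Phi, eta, x_star = simulate_glr(N=1000, n=200, s=5, sigma=0.1,
                                rng=np.random.default_rng(2))
```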
chunk 19 · 3.1, Proposition 3.1 · 698 / 732 tokens
Assume that $\mathfrak{r}$ is $\overline{r}$-Lipschitz continuous and $\underline{r}$-strongly monotone (i.e., $|\mathfrak{r}(t)-\mathfrak{r}(t')|\geq\underline{r}\,|t-t'|$), which implies that $\mathfrak{s}$ is $\underline{r}$…
chunk 20 · 3.1, Lemma 3.1 · 763 / 796 tokens
Let $\lambda>0$ and $0<\psi\leq 1$, and suppose that for all subsets $I\subset\{1,\ldots,n\}$ of cardinality smaller than $s$ the following property is verified:
$$\forall z\in\mathbf{R}^n\quad\|z_I\|_1\leq s\lambda\|z\|_\Sigma+\tfrac12(1-\psi)\|z\|_1\ldots$$
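The condition of Lemma 3.1 quantifies over all $z$ and all small supports $I$, so it cannot be checked exhaustively; a randomized falsification sketch is still easy to write (the sampling scheme and parameter values are illustrative assumptions):

```python
import numpy as np

def q_condition_holds(z, I, Sigma, s, lam, psi):
    """Check  ||z_I||_1 <= s*lam*||z||_Sigma + 0.5*(1 - psi)*||z||_1  for one pair (z, I)."""
    norm_Sigma = np.sqrt(z @ Sigma @ z)        # ||z||_Sigma = sqrt(z^T Sigma z)
    return (np.abs(z[I]).sum()
            <= s * lam * norm_Sigma + 0.5 * (1 - psi) * np.abs(z).sum())

# random search for a violating pair (z, I); supports of size s are the binding case,
# since the left-hand side only grows with |I|.  Finding no violation is only evidence.
rng = np.random.default_rng(3)
n, s, lam, psi = 50, 5, 0.5, 0.5
Sigma = np.eye(n)                              # with Sigma = I, ||z||_Sigma = ||z||_2
violated = any(
    not q_condition_holds(rng.standard_normal(n),
                          rng.choice(n, s, replace=False), Sigma, s, lam, psi)
    for _ in range(10_000)
)
print("violation found:", violated)
```

With $\Sigma=I$ the condition holds for every $z$ as soon as $s\lambda\geq\sqrt{s}$ (here $s\lambda=2.5>\sqrt{5}$), so the search above should report no violation.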
chunk 21 · 3.1, Remark 3.1 · 680 / 713 tokens
Condition $\mathbf{Q}(\lambda,\psi)$ generalizes the classical Restricted Eigenvalue (RE) property [5] and the Compatibility Condition [46], and is the most relaxed condition under which classical bounds for the error of $\ell_1$-recovery routines have been established. Validity of $\mathbf{Q}(\lambda,\psi)$…
chunk 22 · 3.1, Remarks · 1,886 / 1,915 tokens
In the case of linear regression, where $\mathfrak{r}(t)=t$, it holds that
$$g(x)=\mathbf{E}\Big\{\tfrac12(\phi^Tx)^2-x^T\phi\,\eta\Big\}=\tfrac12\,\mathbf{E}\Big\{\big(\phi^T(x_*-x)\big)^2-\big(\phi^Tx_*\big)^2\Big\}\ldots$$
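The identity in chunk 22 follows because $\eta=\phi^Tx_*+\sigma\xi$ with $\xi$ zero-mean and independent of $\phi$, so the cross term reduces as $\mathbf{E}\{x^T\phi\,\eta\}=\mathbf{E}\{(\phi^Tx)(\phi^Tx_*)\}$. A quick Monte Carlo sanity check (sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, M, sigma = 5, 400_000, 0.3
x_star = rng.standard_normal(n)
x = rng.standard_normal(n)

Phi = rng.standard_normal((M, n))
eta = Phi @ x_star + sigma * rng.standard_normal(M)

# empirical versions of both sides of the identity for g(x)
lhs = np.mean(0.5 * (Phi @ x) ** 2 - (Phi @ x) * eta)
rhs = 0.5 * np.mean((Phi @ (x_star - x)) ** 2 - (Phi @ x_star) ** 2)
print(lhs, rhs)    # agree up to Monte Carlo error
```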
chunk 23 · 3.1, Remarks (continued) · 296 / 325 tokens
where $\varsigma\sim\mathcal{N}(0,1)$. Thus, $H(x)$ is proportional to $\Sigma^{1/2}x/\|x\|_\Sigma$ with coefficient
$$h\big(\|x\|_\Sigma\big)=\mathbf{E}\big\{\varsigma\,\mathfrak{r}\big(\varsigma\|x\|_\Sigma\big)\big\}\ldots$$
chunk 24 · 3.2 Stochastic Mirror Descent algorithm · 409 / 440 tokens
In this section, we describe the statistical properties of approximate solutions of Algorithm 1 when applied to the sparse recovery problem. We shall use the following distance-generating function of the $\ell_1$-ball of $\mathbf{R}^n$ (cf. [27, Section 5.7.1]):
$$\theta(x)=c_p\|x\|_p^p,\quad p=\{2\,\ldots$$
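The chunk truncates before the case definition of $p$; a standard choice in this $\ell_1$ setup, assumed here, is $p=1+1/\ln n$ for large $n$. A sketch of $\theta$ and its gradient, which is what the proximal mapping consumes:

```python
import numpy as np

def dgf(x, p, c_p=1.0):
    """theta(x) = c_p * ||x||_p^p, a d.-g.f. of the l1 ball for p in (1, 2]."""
    return c_p * np.sum(np.abs(x) ** p)

def grad_dgf(x, p, c_p=1.0):
    """Coordinatewise gradient: c_p * p * |x_j|^(p-1) * sign(x_j)."""
    return c_p * p * np.abs(x) ** (p - 1) * np.sign(x)

n = 1000
p = 1 + 1.0 / np.log(n)          # assumed standard choice, not taken from the chunk
x = np.random.default_rng(5).standard_normal(n)
x /= np.abs(x).sum()             # place x inside the unit l1 ball
print(dgf(x, p), np.linalg.norm(grad_dgf(x, p)))
```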
chunk 25 · 3.2, Proposition 3.2 · 841 / 879 tokens
For $t\gtrsim\sqrt{\ln N}$, assuming the sample budget is large enough, i.e., $N\geq m_0$ (so that at least one stage of the preliminary phase of Algorithm 1 is completed), the approximate solution $\widehat{x}_N$ output satisfies, with probability…
chunk 26 · 3.2, Remark 3.2 · 1,564 / 1,601 tokens
Bounds for the $\ell_1$-norm of the error $\widehat{x}_N-x_*$ (or $\widehat{x}_N^{(b)}-x_*$) established in Proposition 3.2 allow us to quantify the prediction error $g(\widehat{x}_N)-g(x_*)$…
chunk 27 · 3.2, Remark 3.3 · 711 / 748 tokens
The proposed approach also allows us to address the situation in which the regressors are not a.s. bounded. For instance, consider the case of random regressors with i.i.d. sub-Gaussian entries such that
$$\forall j\leq n,\quad\mathbf{E}\Big[\exp\Big(\frac{[\phi_i]_j^2}{\varkappa^2}\Big)\Big]\leq 1\ldots$$
chunk 28 · 3.3 Numerical experiments · 1,254 / 1,282 tokens
In this section, we present the results of a small simulation study illustrating the theoretical part of the previous section. (The reader is invited to check Section C of the supplementary material for more experimental results.) We consider the GLR model (15) with activation function (21), where $\alpha=1/2$. In ou…
chunk 29 · Appendix A Proofs · 73 / 91 tokens
We use the notation $\mathbf{E}_i$ for the conditional expectation given $x_0$ and $\omega_1,\ldots,\omega_i$.
chunk 30 · A.1 Proof of Proposition 2.1 · 17 / 46 tokens
The result of Proposition 2.1 is an immediate consequence of the following statement.
chunk 31 · A.1, Proposition A.1 · 1,580 / 1,615 tokens
Let
$$f(x)=\tfrac12 g(x)+h(x),\quad x\in X.$$
In the situation of Section 2.2, let $\gamma_i\leq(4\nu)^{-1}$ for all $i=0,1,\ldots$, and let $\widehat{x}_m$ be defined in …
chunk 32 · A.1, Proof · 220 / 251 tokens
Denote $H_i=\nabla G(x_{i-1},\omega_i)$. In the sequel, we use the shortcut notation $\vartheta(z)$ and $V(x,z)$ for $\vartheta_{x_0}^R(z)$ and $V_{x_0}(x,z)$…
chunk 33 · A.1, step 1° · 1,389 / 1,421 tokens
From the definition of $x_i$ and of the composite prox-mapping (8) (cf. Lemma A.1 of [40]), we conclude that there is $\eta_i\in\partial h(x_i)$ such that
$$\langle\gamma_{i-1}H_i+\gamma_i\eta_i+\nabla\vartheta(x_i)-\nabla\vartheta(x_{i-1}),\,z-x_i\rangle\geq 0,\quad\forall z\in\mathcal{X}\ldots$$
chunk 34 · A.1, step 1° (continued) · 1,970 / 2,002 tokens
$$\|\nabla G(x,\omega)\|_*^2\leq 2\|\nabla G(x,\omega)-\nabla G(x_*,\omega)\|_*^2+2\|\nabla G(x_*,\omega)\|_*^2\ldots$$
chunk 35 · A.1, step 1° (continued) · 668 / 700 tokens
$$\sum_{i=1}^m\gamma_{i-1}\Big(\tfrac34\langle\nabla g(x_{i-1}),x_{i-1}-x_*\rangle+\big[h(x_{i-1})-h(x_*)\big]\Big)\ldots$$
chunk 36 · A.1, step 2° · 1,951 / 1,983 tokens
We have
$$\gamma_{i-1}\langle\xi_i,x_{i-1}-x_*\rangle=\overbrace{\gamma_{i-1}\big\langle[\nabla G(x_{i-1},\omega_i)-\nabla G(x_*,\omega_i)]-\nabla g(x_{i-1}),\;x_{i-1}-x_*\big\rangle}^{\upsilon_i}\ldots$$
chunk 37 · A.1, step 2° (continued) · 1,900 / 1,932 tokens
$$r_m^{(3)}\leq 2\sqrt{3tR^2\sigma_*^2\textstyle\sum_{i=0}^{m-1}\gamma_i^2}\leq 3tR^2+3\sigma_*^2\sum_{i=0}^{m-1}\gamma_i^2.$$
…
chunk 38 · A.1, step 2° (continued) · 1,943 / 1,975 tokens
$$21t\sum_{i=0}^{m-1}\gamma_i^4\leq 21t\,\overline{\gamma}^2\sum_{i=0}^{m-1}\gamma_i^2\leq\big(21t\,\overline{\gamma}^2\big)^2\ldots$$
chunk 39 · A.1, step 2° (continued) · 755 / 787 tokens
for all $\omega^m\in\Omega_m^{(2)}$ such that $\mathrm{Prob}(\Omega_m^{(2)})\geq 1-2e^{-t}$. Note that
$$\Delta_m^{(2)}\leq 2tR^2+\tfrac14\nu\sum_{i=0}^{m-1}\gamma_i^2\langle\nabla g(x_i),x_i-x_*\rangle,\ldots$$
chunk 40 · A.1, step 3° · 1,808 / 1,840 tokens
When substituting bounds (33)–(35) into (32), we obtain
$$R_m\leq\tfrac14\sum_{i=0}^{m-1}\gamma_i\langle\nabla g(x_i),x_i-x_*\rangle+12tR^2+\sigma_*^2\Big[4\sum_{i=0}^{m-1}\gamma_i^2+24t\,\overline{\gamma}^2\Big]+2\sqrt{3tR^2\sigma_*^2\textstyle\sum_{i=0}^{m-1}\gamma_i^2}\ldots$$
chunk 41 · A.1, step 3° (continued) · 351 / 383 tokens
$$\tfrac12\big[g(\widehat{x}_m)-g(x_*)\big]+\big[h(\widehat{x}_m)-h(x_*)\big]\leq\frac{V(x_0,x_*)+15tR^2}{\gamma m}+\frac{h(x_0)-h(x_m)}{m}+\gamma\sigma_*^2\Big(7+\frac{24t}{m}\Big).$$
chunk 42 · A.1, step 5° · 392 / 424 tokens
To prove the bound for the minibatch solution $\widehat{x}_m^{(L)}=\big(\sum_{i=0}^{m-1}\gamma_i\big)^{-1}\sum_{i=0}^{m-1}\gamma_i\,x_i^{(L)}$ …
chunk 43 · A.2 Deviation inequalities · 787 / 812 tokens
Let us assume that $(\xi_i,\mathcal{F}_i)_{i=1,2,\ldots}$ is a sequence of sub-Gaussian random variables satisfying (here, as above, $\mathbf{E}_{i-1}$ denotes the expectation conditional on $\mathcal{F}_{i-1}$)
$$\mathbf{E}_{i-1}\big\{e^{t\xi_i}\big\}\leq\ldots$$
chunk 44 · A.2, Lemma A.1 · 304 / 334 tokens
For all $x>0$ one has
$$\mathrm{Prob}\big\{S_n\geq\sqrt{2xr_n}\big\}\leq e^{-x},\quad(37a)$$
$$\mathrm{Prob}\big\{M_n\geq\sqrt{2x(v_n+h_n)}+2x\,\overline{s}^2\big\}\ldots$$