researchpilot-data/chunks/1806.05049_semantic.json
[
{
"chunk_id": "3f0aa911-62b5-4fb5-b8b7-015f0f836c81",
"text": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm Paul Swoboda∗ Vladimir Kolmogorov\nMPI for Informatics, Germany IST Austria\npswoboda@mpi-inf.mpg.de vnk@ist.ac.at Abstract and no information is given to judge it.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 0,
"total_chunks": 23,
"char_count": 219,
"word_count": 27,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5f4d5dd7-0ff2-4182-9d04-bb3a0601ac31",
"text": "Moreover, algorithms\nfrom the first two paradigms are usually developed ad-hoc for\nWe present a new proximal bundle method for Maximum- specific optimization problems and cannot be easily extended2019\nA-Posteriori (MAP) inference in structured energy minimiza- to other ones. Lagrangean decomposition based algorithms\ntion problems. The method optimizes a Lagrangean relax- are a good middle ground, since they optimize a dual lower\nation of the original energy minimization problem using a bound, hence can output a gap that shows the distance toApr\nmulti plane block-coordinate Frank-Wolfe method that takes optimum, yet use techniques that scale to large problem sizes.\n5 advantage of the specific structure of the Lagrangean decom- Generalization to new problems is also usually much easier,\nposition. We show empirically that our method outperforms since subproblems can be readily combined.\nstate-of-the-art Lagrangean decomposition based algorithms A large number of algorithmic techniques have been proon some challenging Markov Random Field, multi-label dis- posed for optimizing a Lagrangean decomposition for MRFs,\ncrete tomography and graph matching problems. including (i) Message passing [16, 41, 37, 6] (a.k.a. block\ncoordinate ascent, belief propagation), (ii) first order prox-[cs.LG]\nimal splitting methods [26, 31] and (iii) Smoothing based\n1. Introduction methods [9, 29], (iv) Nesterov schemes [28, 10], (v) mirror\nMaximum-A-Posteriori (MAP) inference, that is mini- descent [22], (vi) subgradient based algorithms [30, 36, 18].\nmizing an energy function f : X →R over a discrete In the case of MAP inference in MRFs, the study [11] has\nset of labelings X is a central tool in computer vision and shown that message passing techniques outperform competmachine learning. Many solvers have been proposed for ing Lagrangean decomposition based methods by a large\nvarious special forms of energy f and labeling space X, margin. 
However, there are two main practical shortcomings\nsee [11] for an overview of solvers and applications for the of message passing algorithms: (i) they need not converge\nprominent special case of Markov Random Fields (MRF). to the optimum of the relaxation corresponding to the LaSolvers can roughly be categorized into three categories: grangean decomposition: while well-designed algorithms\n(i) Exact solvers that use search techniques (e.g. branch- monotonically improve a dual lower bound, they may get\nand-bound) and possibly rely on solving lower bounds with stuck in suboptimal fixed points. (ii) So called min-marginals\nLP-solvers to speed up search, (ii) primal heuristics that must be computable fast for all the subproblems in the given\nposition (a.k.a. dual decomposition) based algorithms that MRFs, for other problems they are. In such cases, alternative\ndecompose the original problem into smaller efficiently opti- techniques must be used. Subgradient based methods can\nmizable subproblems and exchange Lagrangean multipliers help here, since they do not possess the above shortcomings:\nbetween subproblems until consensus between subproblems They converge to the optimum of the Lagrangean relaxation\nis achieved. and only require finding solutions to the subproblems of\nExcept when the energy fulfills special assumptions, exact the decomposition, which is easier than their min-marginals\nsolvers are usually not applicable, since problem sizes in (as needed for (i)), proximal steps (as needed for (ii)) or\ncomputer vision are too large. On the other hand, primal smoothed solutions (as needed for (iii) and (iv)).\nheuristics can be fast but solution quality need not be good The simplest subgradient based algorithm is subgradient\nascent. However, its convergence is typically slow. Bundle\n∗The work was performed while the first author was at IST Austria. 
methods, which store a series of subgradients to build a local\nThe work was supported by the European Research Council under the\nEuropean Unions Seventh Framework Programme (FP7/2007-2013)/ERC approximation of the function to be optimized, empirically\ngrant agreement no 616160. converge faster. Contribution & Organization into much larger subproblems.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 1,
"total_chunks": 23,
"char_count": 4174,
"word_count": 618,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8cca61b7-5b1e-4299-ae9e-3430fe83f9ec",
"text": "In general, our decomposition results in fewer dual variables to optimize over, and each\nWe propose a multi plane block-coordinate version of\nFrank-Wolfe update can be expected to give a much larger\nthe Frank-Wolfe method to find minimizing directions in\ngain. Frank-Wolfe was also used in [1] for MAP inference\na proximal bundle framework, see Section 2. Our method\nin dense MRFs with Potts interactions and Gaussian weights.\nexploits the structure of the problem's Lagrangean decompoAs we do, they use Frank-Wolfe to optimize proximal steps\nsition and is inspired by [34]. Applications of our approach\nfor MAP-inference. In constrast to our work, they do not\nto MRFs, discrete tomography and graph matching are preoptimize Lagrangean multipliers, but the original variables\nsented in Section 3. An experimental evaluation on these\ndirectly.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 2,
"total_chunks": 23,
"char_count": 842,
"word_count": 130,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d7d6bcf8-a9e3-4283-b709-b3f0f9ce0f09",
"text": "In other words, they work in the primal, while we\nproblems is given in Section 4 and suggests that our method\nwork in the dual. We remark that our formulation is appliis superior to comparable established methods.\ncable to more general integer optimization problems than\nAll proofs are given in the Appendix A. A C++-\neither [33, 23, 1] and it does not seem straightforward to\nimplementation of our Frank-Wolfe method is availapply these approaches to our more general setting while\nable at http://pub.ist.ac.at/~vnk/papers/\nonly requiring access to MAP-oracles of subproblems. The MRF, discrete tomography and graph Proximal bundle methods were introduced in [13, 21] to\nmatching solvers built on top of our method can be obtained accelerate subgradient descent algorithms. They work by\nat https://github.com/LPMP/LPMP. locally building an approximation (bundle) to the function\nto be optimized and use this bundle to find a descent direc-1.2. For stability, a quadratic (proximal) term is added [14]. To our knowledge, the Frank-Wolfe method has not yet While not theoretically guaranteed, proximal bundle methbeen used in our general setting, i.e. underlying a proximal ods are often faster than subgradient methods.\nbundle solver for a general class of structured energy minimization problems. Hence, we subdivide related work into 2.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 3,
"total_chunks": 23,
"char_count": 1338,
"word_count": 205,
"chunking_strategy": "semantic"
},
{
"chunk_id": "33d1c30c-e82a-43a7-9fb9-91e1811d8494",
"text": "Method\n(i) subgradient/bundle methods for energy minimization, (ii)\nOriginal problem We consider the problem of minimizingad-hoc approaches that use Frank-Wolfe for specific tasks\na function of Boolean variables represented as a sum ofand (iii) proximal bundle methods.\nindividual terms:Subgradient based solvers have first been proposed\nby [36] for MAP-inference in MRFs and were later popmin f(x), f(x) := X ft(xAt) (1)ularized by [18]. These works rely on a decomposition of x∈{0,1}d\nt∈T\nMRFs into trees. New decompositions for certain MRFs\nwere introduced and optimized in [25] for max-flow sub- Here term t ∈T is specified by a subset of variables\nproblems and in [32] for perfect matching subproblems. The At ⊆[d] and a function ft : {0, 1}At →R ∪{+∞} of\nwork [43] used a covering of the graph by a tree and opti- |At| variables. Vector xAt ∈RAt is the restriction of vecmized additional equality constraints on duplicated nodes tor x ∈Rd to At. The arity |At| of function ft can be\nvia subgradient ascent. Usage of bundle methods that store arbitrarily large, however we assume the existence of an effia series of subgradients to build a local approximation of cient min-oracle that for a given vector λ ∈RAt computes\nthe objective function was proposed by [12, 32] for MAP x ∈arg min [ft(x) + ⟨λ, x⟩] together with the cost ft(x),\ninference in MRFs. x∈dom ft\nThe Frank-Wolfe algorithm was developed in the 50s [4] where dom ft = {x ∈{0, 1}At | ft(x) < +∞} ̸= ∅is the\nand was popularized recently by [8]. In [20] a block coor- effective domain of ft. It will be convenient to denote\ndinate version of Frank-Wolfe was proposed and applied to\nmintraining structural SVMs. Further improvements were given ht(λ)= [ft(x) + ⟨λ, x⟩]= y∈Yt⟨y,min [λ 1]⟩= y∈Yt⟨y,min [λ 1]⟩ x∈dom ft\nin [34, 24] where, among other things, caching of planes\nwas proposed for the Frank-Wolfe method. 
Several works where subsets Yt, Yt ⊆[0, 1]At ⊗R are defined as follows:\nhave applied Frank-Wolfe to the MAP-MRF inference problem: (1) [33] used Frank-Wolfe to compute an approximated Yt = {[x ft(x)] : x ∈dom ft} Yt = conv(Yt)\nsteepest-descent direction in the local polytope relaxation for\nMRFs. (2) [23] used Frank-Wolfe to solve a modified prob- The assumption means that we can efficiently compute a\nlem obtained by adding a strictly convex quadratic function supergradient of concave function ht(λ) at a given λ ∈RAt.\nto the original objective (either primal or dual). In contrast to Since (1) is in general an NP-hard problem, our goal will\nthese works, we use Frank-Wolfe inside a proximal method. be to solve a certain convex relaxation of (1), which will\nFurthermore, the papers above use a fine decomposition into turn out to be equivalent to the Basic LP relaxation (BLP)\nmany small subproblems (corresponding to pairwise terms of (1) [17]. This relaxation has been widely studied in the\nof the energy function), while we decompose the problem literature, especially for the MAP-MRF problem (in which case it is usually called the local polytope relaxation [40, 41]). for a center point µ ∈Λ. The proximal quadratic terms\nWe emphasize, however, that our methodology is different act as a trust-region term in the vicinity of µ and make the\nfrom most previous works: before applying the BLP relax- function strongly concave, hence smoothing the dual.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 4,
"total_chunks": 23,
"char_count": 3338,
"word_count": 558,
"chunking_strategy": "semantic"
},
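The min-oracle assumed for each term in Eq. (1) above can be sketched for a small table-represented term. This is a minimal illustration with names and data layout of our own choosing (a dict over the effective domain), not the paper's implementation:

```python
# Hypothetical sketch of the min-oracle of Eq. (1): f_t is given as a table over
# its effective domain dom f_t (tuples in {0,1}^|A_t| with finite cost); for a
# vector lam, return x in argmin [f_t(x) + <lam, x>] together with the cost f_t(x).

def min_oracle(f_t, lam):
    """f_t: dict mapping tuples x in {0,1}^n to finite costs; lam: list of length n."""
    best_x, best_val, best_cost = None, float("inf"), None
    for x, cost in f_t.items():
        val = cost + sum(l * xi for l, xi in zip(lam, x))  # f_t(x) + <lam, x>
        if val < best_val:
            best_x, best_val, best_cost = x, val, cost
    return best_x, best_cost

# Example: a two-variable "equality" term with dom f_t = {(0,0), (1,1)}.
x, cost = min_oracle({(0, 0): 0.0, (1, 1): 0.5}, [0.2, 0.2])
```

The returned pair corresponds to a plane [x ft(x)] ∈ Yt, and ht(λ) equals ft(x) + ⟨λ, x⟩ at the returned x.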
{
"chunk_id": "bc011d4b-f2f4-422a-a213-9e584ee03808",
"text": "A sucation, we represent the objective as a function of Boolean cessively refined polyhedral approximation [21] is typically\nindicator variables. This allows expressing complicated com- used for solving (6). We develop a proximal method that will\nbinatorial constraints such as those in multi-label discrete alternate between minimizing (6) with the help of a multitomography and graph matching problems (see Section 3). plane block coordinate Frank-Wolfe method and updating\nLagrangean relaxation For a vector y ∈Yt let us denote the proximal center µ.\nthe first |At| components as y⋆∈[0, 1]At and the last one as\ny◦∈R (so that y = [y⋆y◦]). We also denote Y = N t∈T Yt 2.1. Maximizing hµ,c(λ): BCFW algorithm\nand Y = N t∈T Yt = conv(Y). The t-th component of Objectives similar to (6) (without the summation convector y ∈Y will be denoted as yt ∈Yt. Problem (1) can straint on the λ-variables) are used for training structural\nnow be equivalently written as Support Vector Machines (SSVMs). Following [20, 34, 24],\nwe use a block-coordinate Frank-Wolfe algorithm (BCFW)\nmin X yt◦ (2) y∈Y , x∈{0,1}d applied to the dual of (6), more specifically its multi-plane\nt∈T\nyt⋆=xAt ∀t∈T version MP-BCFW [34]. The dual of (6) is formulated below. We form the relaxation of (2) by removing the non-convex\nconstraint x ∈{0, 1}d: Proposition 2. The dual problem to maxλ∈Λ hµ,c(λ) is min X yt◦ (3) min fµ,c(y),\ny∈Y , x∈Rd y∈Y\nt∈T\nyt⋆=xAt ∀t∈T\nfµ,c(y) := max X ⟨yt, [λt 1]⟩−1 −µt∥2 (7)It can be shown that problem (3) is equivalent to the BLP λ∈Λ 2c∥λt\nt∈T\nrelaxation of (1), see [?]. We will not directly optimize this relaxation, but its La- Define ν ∈Rd by νi = |Ti|1 Pt∈Ti(c · yti + µti) for i ∈[d].\ngrangean dual [35]. For each equality constraint yt⋆= xAt Then the optimal λ in (7) is λt = c · yt⋆+ µt −νAt and\nwe introduce Lagrange multipliers λt ∈RAt. The collection\nof these multipliers will be denoted as λ ∈N t∈T RAt.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 5,
"total_chunks": 23,
"char_count": 1914,
"word_count": 335,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a5d15932-5310-4be4-a374-e87fdafad2e6",
"text": "The c d |Ti|\nidual will be optimized over the set fµ,c(y) = X 2∥yt⋆∥2 + ⟨yt, [µt 1]⟩ − X 2c ν2\nt∈T i=1\n( )\nΛ = λ : X λti = 0 ∀i ∈[d] (4) ∇tfµ,c(y) = [λt 1] (8)\nt∈Ti where ∇t denotes the derivative w.r.t. variables yt.\nwhere we denoted\nNext, we review and adapt to our setting BCFW and\nTi = {t ∈T : i ∈At} . MP-BCFW algorithms for minimizing function fµ,c(y) over\ny ∈Y. We will also describe a practical improvement to the\nProposition 1.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 6,
"total_chunks": 23,
"char_count": 436,
"word_count": 93,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e73035c9-e1ba-4cdb-8145-d69a96516233",
"text": "The dual of (3) w.r.t. the equality constraints implementation, namely compact representation of planes,\nyt⋆= xAt is and discuss the problem of estimating the duality gap. BCFW [20] The algorithm maintains feasible vectors\nmax h(λ), h(λ) := X ht(λt) (5) y ∈Y. At each step BCFW tries to decrease the objec- λ∈Λ\nt∈T tive fµ,c(y) by updating component yt for a chosen term t\nFurthermore, the optimal values of problems (3) and (5) coin- (while maintaining feasibility). To achieve this, it first lincide. (This value can be +∞, meaning that (3) is infeasible earizes the objective by using the Taylor expansion around\nand (5) is unbounded). the current point: Next, we describe how we maximize function h(λ). fµ,c(z) ≈⟨∇fµ,c(y), z⟩+ const . (9)\nProximal term We have a non-smooth concave maximization problem in λ, hence algorithms that require a differen- The optimal solution zt ∈ Yt of the linearized obtiable objective functions will not work. In proximal bundle jective is computed by calling the t-th oracle: zt ←\nmethods [14] an additional proximal term is added. This arg min⟨∇tfµ,c(y), zt⟩. The new vector y is obtained as the\nzt∈Yt\nresults in the new objective best interpolation of yt and zt with all other components s ̸=\nys, s ̸= t max hµ,c(λ), hµ,c(λ) := h(λ) −1 −µ∥2 (6) t fixed to ys, i.e. ys(γ) ← . λ∈Λ 2c∥λ (1 −γ)yt + γzt, s = t Algorithm 1 One pass of BCFW. Input: vectors y ∈Y, structure of Xt allows storing and manipulating these planes\nµ ∈Λ and ν ∈Rd computed as in Prop. 2. more efficiently. For example, in MAP-MRF inference prob-\n1: for each t ∈T do in a random order lems a variable with k possible values can be represented by\n2: set λt = c · yt⋆+ µt −νAt a single integer, rather than k indicator variables.\n3: call t-th oracle for λt: zt ←arg min⟨zt, [λt 1]⟩ In our implementation we assume that each vector x ∈\nzt∈Yt\ndom ft can be represented by an object s in a possibly more i.e. 
let x ←arg min[ft(x) + ⟨λt, x⟩] and zt = [x ft(x)]\nx∈Xt compact space Xt. To specify term ft, the user must provide\nys, s ̸= t an array mapt : [|At|] →[d] that determines set At ⊆[d] 4: interpolate y(γ)s ←\n(1 −γ)yt + γzt, s = t in a natural way, specify the size of objects s ∈Xt, and\n5: compute γ ←arg minγ∈[0,1] fµ,c(y(γ)): implement the following functions:\nset γ ←⟨[λt 1],zt−yt⟩ and clip γ to [0, 1] c∥yt⋆−zt⋆∥2 F1. A bijection σt : Xt →Xt. 6: set νi ←νi + |Ti|(y(γ)tc i −yti) for i ∈At and yt ←y(γ)t\n7: end for\nF2. Min-oracle that for a given λt computes x ∈\narg min [ft(x) + ⟨λt, x⟩] and returns its compact reprex∈dom ft\nThe step size γ ∈[0, 1] is chosen to minimize the objective. sentation s (i.e. σt(s)=x) together with the cost ft(x). The optimal γ can be easily computed in closed form (see\nLemma 1 in [?]).",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 7,
"total_chunks": 23,
"char_count": 2730,
"word_count": 511,
"chunking_strategy": "semantic"
},
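Lines 4-5 of Algorithm 1 in the chunk above admit a compact sketch: interpolate the block yt towards the oracle's plane zt with the closed-form step size γ = ⟨[λt 1], yt − zt⟩ / (c∥yt⋆ − zt⋆∥²), clipped to [0, 1]. The list-based layout and function name here are our own illustration, not the paper's code:

```python
# Sketch of one BCFW block update (Algorithm 1, lines 4-5), assuming the
# closed-form step size gamma = <[lam_t 1], y_t - z_t> / (c * ||y*_t - z*_t||^2)
# clipped to [0, 1].

def bcfw_step(y_t, z_t, lam_t, c):
    """y_t, z_t: lists [y* components..., y_circ]; lam_t: list of len(y_t)-1
    multipliers. Returns the interpolated block (1-gamma)*y_t + gamma*z_t."""
    lam_ext = list(lam_t) + [1.0]                       # the vector [lam_t 1]
    num = sum(l * (y - z) for l, y, z in zip(lam_ext, y_t, z_t))
    den = c * sum((y - z) ** 2 for y, z in zip(y_t[:-1], z_t[:-1]))
    gamma = 0.0 if den == 0 else min(1.0, max(0.0, num / den))
    return [(1 - gamma) * y + gamma * z for y, z in zip(y_t, z_t)]
```

Note the clipping: since zt minimizes ⟨[λt 1], ·⟩ over Yt, the numerator is non-negative, so only the upper clip at 1 is active in practice.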
{
"chunk_id": "7ee5bc71-c41a-4c9f-89ee-fa980c605c57",
"text": "A function that computes inner product ⟨λt, σt(s)⟩for\nAlgorithm 1. To avoid the expensive recomputation of the a given vector λt and object s ∈Xt.\nsum ν in Prop. 2 needed for computing the gradient and the\nNote, calling the approximate oracle in line 3 involvesstep size, it is held and updated explicitly in Algorithm 1.\ncalling the function in (F3) |˜Yt| times; it typically takesMP-BCFW [34] In this paper we use the multi-plane verO(|˜Yt| · sizet) time where sizet is the length of the arraysion of BCFW. This method caches planes zt returned by\nfor storing s ∈Xt.min-oracles for terms ht(λt) = minzt∈Yt⟨zt, [λt 1]⟩. Let\n˜Yt ⊂Yt be the set of planes currently stored in mem- Remark 1. The efficient plane storage mechanism gives\nory for the t-th subproblem. It defines an approximation\nroughly a 25% speedup of a single MP-BCFW pass on the\n˜ht(λt) = minzt∈˜Yt⟨zt, [λt 1]⟩of term ht(λt). Note that protein-folding MRF dataset (see Sections 3 and 4).",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 9,
"total_chunks": 23,
"char_count": 952,
"word_count": 163,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ea81c9f9-7afc-4f69-bb67-14708e9a01b4",
"text": "More-\n˜ht(λt) ≥ht(λt) for any λt. over, it typically results in a slightly better objective value\nMP-BCFW uses exact passes (that call the \"exact\" or- obtained after each MP-BCFW pass, since more approxiacle for ht(λt) in line 3 of Algorithm 1) and approximate mate passes can be done before an exact pass is called (since\npasses (that call the \"approximate\" oracle for ˜ht(λt)). One approximate passes are accelerated by the compact plane\nMP-BCFW iteration consists of one exact pass followed by storage, their objective decrease per unit of time is higher,\nseveral approximate passes. The number of approximate hence they are called more often).\npasses is determined automatically by monitoring how fast\nthe objective decreases per unit of time. Namely, the method 2.2.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 10,
"total_chunks": 23,
"char_count": 771,
"word_count": 123,
"chunking_strategy": "semantic"
},
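The pass-scheduling rule described in this chunk (run approximate passes while the objective decrease per unit of time keeps improving, otherwise start a new iteration with an exact pass) can be sketched as follows. The callables and timing details are hypothetical stand-ins, not the paper's implementation:

```python
# Sketch of one MP-BCFW iteration: an exact pass, then approximate passes while
# the ratio (objective decrease) / (elapsed time) keeps improving.
import time

def mp_bcfw_iteration(exact_pass, approx_pass, objective):
    """exact_pass/approx_pass: callables that update solver state in place.
    objective: callable returning the current value of f_{mu,c}(y)."""
    f0 = objective()
    t0 = time.perf_counter()
    exact_pass()
    best_ratio = (f0 - objective()) / (time.perf_counter() - t0 + 1e-12)
    while True:
        approx_pass()
        ratio = (f0 - objective()) / (time.perf_counter() - t0 + 1e-12)
        if ratio <= best_ratio:   # decrease rate dropped: end this iteration
            return
        best_ratio = ratio
```

Once the objective stops decreasing, the numerator is fixed while elapsed time grows, so the ratio must drop and the iteration terminates.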
{
"chunk_id": "4655134b-ea93-409b-b4ef-d6251d2a3c8f",
"text": "Algorithm's summary\nfµ,c(y◦)−fµ,c(y)\ncomputes the ratio ∆t where y◦is the vector at Recall that our goal is to maximize function h(λ) over\nthe beginning of the MP-BCFW iteration, y is the current λ ∈Λ; this gives a lower bound on the original discrete\nvector and ∆t is the time passed since the beginning of the optimization problem minx∈X f(x). We now summarize\nMP-BCFW iteration. If this ratio drops after an approximate our algorithm for solving maxλ∈Λ h(λ), and describe our\npass then the iteration terminates, and a new MP-BCFW choices of parameters. To initialize, we set µt = 0, yt ←\niteration is started with an exact pass. The number of ap- arg maxyt∈Yt⟨yt, [µt 1]⟩and ˜Y = {yt} for each t ∈T.\nproximate passes will thus depend on the relative speed of Then we start the main algorithm. After every 10 iterations\nexact and approximate oracles. of MP-BCFW we update µ ←λ∗(keeping vectors yt and\nNote that the time for an approximate oracle call is propor- sets ˜Yt unchanged), where λ∗is the vector with the largest\ntional to the size of the working set |˜Yt|. The method in [34] value of objective h(λ∗) seen so far. Since evaluating h(λ)\nuses a common strategy for controlling this size: planes that is an expensive operation, we do it for the current vector λ\nare not used during the last K iterations are removed from only after every 5 iterations of MP-BCFW. We use K = 10, which is the default parameter in [34].",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 11,
"total_chunks": 23,
"char_count": 1426,
"word_count": 252,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c1449210-6241-4fa3-82cd-e7a790c2b992",
"text": "Compact representation of planes Recall that planes in Remark 2 (Convergence). If we evaluated the inner iterathe set ˜Yt have the form [x ft(x)] for some x ∈dom ft ⊆ tions in MP-BCFW exactly, our method would amount to the\n{0, 1}At. A naive approach (used, in particular, in previous proximal point algorithm which is convergent [27]. Even\nworks [34, 24]) is to store them explicitly as vectors of size with non-exact evaluation, convergence can be proved when\n|At| + 1. We observe that in some applications a special the error in the evaluation of the proximal is shrinking fast However, we have use a more ag- This problem is NP-hard for general graphs G, but can be\ngressive scheme that updates the proximal point every 10 solved efficiently for trees. Hence, we choose a covering G\niterations and does not bound the inexactness of the proximal by trees, as done in [18]. Additionally, we seek a small numstep evaluation. Experimentally, we have found that it gives ber of trees, such that the number of Lagrangean variables\ngood results w.r.t. the objective of the overall problem (5). stays small and optimization becomes faster. Arboricity A tree covering of a graph is called a mini-\n2.3. Estimating duality gap\nmal tree cover, if there is no covering consisting of fewer\nTo get a good stopping criterion, it is desirable to have trees. The associated number of trees is called the graph's\na bound on the gap h(λ) −h(λ∗) between the current and arboricity. We compute the graph's arboricity together with\noptimal objectives. This could be easily done if we had a minimal tree cover efficiently with the method [5]\nfeasible primal and dual solutions. Unfortunately, in our Boolean encoding To phrase the problem as an instance\ncase vector y ∈Y is not a feasible solution of problem (3), of (1), we encode labelings x ∈X via indicator variables\nsince it does not satisfy equality constraints yt⋆= xAt. 
1 xi;a = [xi = a] ∈{0, 1} for i ∈V, a ∈Xi while adding\nTo get a handle on the duality gap, we propose to use the constraints Pa xi;a = 1 (i.e. assigning infinite cost to confollowing quantities: figurations that do not satisfy this constraint).\nd A tree cover and a Boolean encoding are also used for the\nAy,λ = X⟨yt, [λt 1]⟩−h(λ) , By = X max yti −min yti two problems below; we will not explicitly comment on this\nt∈Ti t∈Ti anymore. t∈T i=1\n(10)\n3.2.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 12,
"total_chunks": 23,
"char_count": 2360,
"word_count": 416,
"chunking_strategy": "semantic"
},
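The two gap quantities of Eq. (10) in the chunk above can be computed directly from the per-term vectors. The dict-based data layout (terms keyed by name, y split into its y⋆ and y◦ parts) is our own choice for illustration:

```python
# Sketch of Eq. (10): A_{y,lam} = sum_t <y_t, [lam_t 1]> - h(lam) and
# B_y = sum_i (max_{t in T_i} y_ti - min_{t in T_i} y_ti).

def gap_quantities(y, lam, h_lam):
    """y[t]: dict {"star": {i: y_ti}, "circ": y_t_circ}; lam[t]: dict {i: lam_ti};
    h_lam: the value h(lam). Returns (A, B)."""
    A = sum(sum(lam[t][i] * yt["star"][i] for i in yt["star"]) + yt["circ"]
            for t, yt in y.items()) - h_lam
    # group the copies y_ti of each variable i across the terms t in T_i
    copies = {}
    for yt in y.values():
        for i, v in yt["star"].items():
            copies.setdefault(i, []).append(v)
    B = sum(max(vs) - min(vs) for vs in copies.values())
    return A, B
```

B measures disagreement between the copies of each variable across subproblems; by Proposition 3 both quantities vanish exactly at a pair of optimal primal/dual solutions.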
{
"chunk_id": "629d932f-75fd-49d2-9f91-39295a724d87",
"text": "The discrete tomography problem is to reconstruct anProposition 3. Consider pair (y, λ) ∈Y×Λ.\nimage from a set of linear (tomographic) projections taken at(a) There holds Ay,λ ≥0, By ≥0 and\ndifferent angles, where image intensity values are restricted\nh(λ∗)−h(λ) ≤Ay,λ +By ·∥λ∗−λ∥1,∞ ∀λ∗∈Λ (11) to be from a discrete set of intensities. See Figure 1 for an\nillustration. The problem is ill-posed, since the number ofwhere we denoted ∥δ∥1,∞= maxi∈[d] Pt∈Ti |δti|.\nlinear projections is smaller than the number of pixels to(b) We have Ay,λ = By = 0 if and only if y and λ are\nreconstruct. Hence, we use a regularizer to penalize non-optimal solutions of problems (3) and (5), respectively.\nregular image structures. Formally, we have an MRF G =\nNote, if we knew that an optimal solution λ∗ ∈ (V, E), where the node set corresponds to the pixel grid and\nmaxλ∈Λ h(λ) belongs to some bounded region then we the edges connect nodes which correspond to neighboring\ncould use (11) to obtain a bound on the duality gap. The label space is Xv = {0, 1, . . . , k} for some\nregion can be obtained for some applications, but we did not k ∈N. Additionally, the labeling x ∈X must satisfy linear\npursue this direction. projections Ax = b with A ∈{0, 1}|V |×l. Usually, no local\ninformation is available, hence the unary potentials are zero:\n3.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 13,
"total_chunks": 23,
"char_count": 1328,
"word_count": 235,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9ab46702-e53d-4fb7-842e-aa9ea1c3b881",
"text": "Applications θv ≡0 for v ∈V . The problem reads\nIn this section we give a detailed description of the three\nmin f(x), f(x) := X θij(xi, xj) (13)applications used in the evaluation: Markov Random Fields x∈X\n(MRFs), discrete tomography and the graph matching prob- ij∈E\nlem. The latter two are both extensions of the MAP-inference where X = {x ∈{0, 1, . . . , k}V : Ax = b}.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 14,
"total_chunks": 23,
"char_count": 372,
"word_count": 71,
"chunking_strategy": "semantic"
},
{
"chunk_id": "21a37372-43f3-46c3-a544-c4db68dcd025",
"text": "A typical\nproblems for MRFs. Those three problems are reviewed be- choice for the pairwise potentials θij is the truncated L1-\nlow. norm θij(xi, xj) = min(|xi −xj|, c). Markov Random Fields projections Ax = b forms another subproblem. The i-th\nrow of Ax = b hence is of the form Pv∈V :Aiv=1 xv = bi. An MRF consists of a graph G = (V, E) and a dis- Efficient solvers for this problem were recently considered\ncrete set of labels Xv for each v ∈V . We follow their recursive decomposition approach\nMaximum-A-Posteriori (MAP) inference is to find labeling for the solution of the projection subproblems. Details are\n(xv)v∈V ∈N v∈V Xv =: X that is minimal with respect to given below.\nthe potentials: min f(x), f(x) := X θv(xv)+ X θuv(xu, xv) . (12) Discrete tomography subproblems We use a simplified\nx∈X\nv∈V uv∈E version of the recursive decomposition approach of [19] for\nefficiently solving the summation constraints Ax = b of 1We say that vector y is a feasible (optimal) solution of (3) if there\nexists a vector x ∈Rd so that (y, x) is a feasible (optimal) solution of (3). the discrete tomography problem. Below we give the correClearly, x can be easily computed from feasible y, so omit it for brevity. sponding details. Let the i-th row of Ax = b be of the form Constraints on summation variables Whenever we have\npartitions Πi:j = Πi:k ∪Πk+1:j, we add the constraint\nsi:j = sj:k + sk+1:j. Solving (14) We will propose a dynamic programming\napproach to solving (14). First, for each value l of summation variable si:j we store a value φi:j(l) ∈R. We compute φi:j recursively from leaves to the root. For partitions\nΠi:j = Πi:k ∪Πk+1:j we compute\nφi:j(l) = min φi:k(l′) + φk+1,j(l −l′) ∀l . (15)\nl′=0,...,l\nAfter having computed φ1:n, we set s∗1:n = b.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 15,
"total_chunks": 23,
"char_count": 1757,
"word_count": 315,
"chunking_strategy": "semantic"
},
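The leaves-to-root recursion of Eq. (15) in the chunk above is a min-plus convolution over the partition tree. A minimal sketch, assuming leaf costs φi:i(l) = λi(l) and a simple pairwise merge order standing in for the balanced partition:

```python
# Sketch of the DP of Eq. (15): phi_{i:j}(l) = min_{l'} phi_{i:k}(l') +
# phi_{k+1:j}(l - l'), computed leaves-to-root; leaf tables are the per-variable
# costs. This is the naive quadratic merge, not the fast heuristic of [3].

def dp_sum_constraint(leaf_costs):
    """leaf_costs[i][l]: cost of assigning value l to variable i.
    Returns the root table phi_{1:n}(s) over all reachable sums s."""
    def combine(left, right):            # min-plus convolution, Eq. (15)
        out = {}
        for a, ca in left.items():
            for b, cb in right.items():
                out[a + b] = min(out.get(a + b, float("inf")), ca + cb)
        return out

    tables = [dict(enumerate(c)) for c in leaf_costs]
    while len(tables) > 1:               # merge pairs until one root table remains
        tables = [combine(tables[i], tables[i + 1]) if i + 1 < len(tables)
                  else tables[i] for i in range(0, len(tables), 2)]
    return tables[0]

# Example: two variables with values {0, 1}; root[s] is the cheapest cost with
# x1 + x2 = s, so reading off root[b] solves the summation subproblem.
root = dp_sum_constraint([[0.0, 1.0], [0.0, 2.0]])
```

The root-to-leaves pass of Eq. (16) would then split each optimal sum back onto the two children, recovering the minimizing labeling.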
{
"chunk_id": "7964ad10-9dce-4f5f-b5e3-42af38ec89a2",
"text": "Subsequently,\nwe make a pass from root to leaves to compute the optimal\nlabel sum for each variable si:j as follows:\ns∗i:k, s∗k+1:j = min φi:k(si:k) + φk+1:j(sk+1:j) .Figure 1. Illustration of a discrete tomography problem. Im- si:k+sk+1:j=s∗i:j\nage intensity values are 0 (white), 1 (gray) and 2 (black).",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 16,
"total_chunks": 23,
"char_count": 305,
"word_count": 49,
"chunking_strategy": "semantic"
},
{
"chunk_id": "89ea5e96-66fb-4037-b539-4320b7905b9a",
"text": "Small (16)\narrows on the side denote the three tomographic projection directions (horizontal, vertical, and diagonal). Values at arrow heads\nFast dynamic programming Naively computing (15)denote the intensity value along tomographic projections.\nneeds O(((j −i) · k)2) steps. However, we use an efficient heuristic [3] that tends to have subquadratic complexity\nin practice. The graph matching problem consists of finding a MAPsolution in an MRF G = (V, E) where the labels for each\nnode come from a common universe Xv ⊂L ∀v ∈V . Illustration of a graph matching problem matching The additional matching constraint requires that two nodes\nnose and left/right feet of two penguins. The blue nodes on the cannot take the same label: xu ̸= xv ∀u, v ∈V, u ̸= v.\nleft penguin correspond to the underlying node set V , while the Hence, any feasible solution defines an injective mapping\nblue nodes on the right penguin correspond to the labels L. The into the set of labels. For an illustration see Figure 2. The\ngreen lines denotes the matching. Note that no two labels are problem is\nmatched twice. The red springs denote pairwise costs θij that\nencourage geometric rigidity of the matching. min f(x) s.t. xu ̸= xv ∀u ̸= v . (17) x∈X We use a minimum cost flow solver for handling the matchPv∈V :Aiv=1 xv = bi and recall that xv ∈{0, 1, . . . , k}. ing constraint, see e.g. [39, 44, 38] for an explanation of the\nEach such tomographic projection will correspond to a sin- minimum cost flow solver construction.\ngle subproblem. Taking care of Lagrangean multipliers λ,\n4. Experimentswe can rename variables and rewrite the problem as\nn k n We have chosen a selection of challenging MRF, discrete\nmin X X λi(l) · 1xi=l s.t. X xi = b tomography and graph matching problems where message\nx1,...,xn∈{0,...,k}n passing methods struggle or are not applicable.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 17,
"total_chunks": 23,
"char_count": 1848,
"word_count": 319,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c22d7dd0-69e7-4c3d-b929-912e437ab5be",
"text": "Detailed i=1 l=0 i=1\n(14) descriptions of these problems can be found in Appendix A. We will follow a recursive decomposition approach. Note that there are a large number of MRF\nthis end, we introduce helper summation variables si:j = and graph matching problems where message passing is\nPju=i xu. the method of choice due to its greater speed and sufficient\nsolution quality, see [11, 38]. In such cases there is no\nVariable partitions We partition the set [1, n] advantage in using subgradient based solvers, which tend to\ninto Π1:⌊n2 ⌋ = {x1, . . . , x⌊n2 ⌋} and Π⌊n2 ⌋+1:n = be slower. However, our chosen evaluation problems contain\n{x⌊n2 ⌋+1, . . . , xn}. We recursively partition Π1:⌊n2 ⌋ some of the most challenging MRF and graph matching\nand Π⌊n2 ⌋+1:n analoguously until reaching single variables. problems with corresponding Lagrangean decompositions\nThis results in a tree with Π1:n as root. not solved satisfactorily with message passing solvers. Averaged lower bound vs. runtime plots for the protein folding MRF dataset, discrete tomography (synthetic images with 2, 4\nand 6 projections, sheep logan image of sizes 64 × 64 and 256 × 256 with 2, 4 and 6 projections), and the 6d scene flow graph matching\ndataset. Values are averaged over all instances of the dataset.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 18,
"total_chunks": 23,
"char_count": 1283,
"word_count": 219,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0813f81b-8c8c-48f2-acb9-e824989f6055",
"text": "FWMAP CB SA MP\nMRF\nprotein folding 11 33-40 528-780 -12917.44 -12970.61 -12960.30 -13043.67\nDiscrete tomography\nsynthetic 2 proj. 9 1024 1984 266.12 265.89 239.39 †\nsynthetic 4 proj. 9 1024 1984 337.88 336.33 316.61 †\nsynthetic 6 proj. 9 1024 1984 424.36 417.76 391.09 †\nsheep logan 64 × 64 3 4096 8064 897.18 847.87 701.93 †\nsheep logan 256 × 256 3 65536 130560 4580.06 4359.24 370.63 †\nGraph matching\n6d scene flow 6 48-126 1148-5352 -2864.2 -2865.61 -2867.60 -2877.08\nTable 1. Dataset statistics and averaged maximum lower bound. # I denotes number of instances in dataset, |V | the number of nodes and |E|\nthe number of edges in the underlying graphical model. † means method is not applicable. Bold numbers indicate highest lower bound\namong competing algorithms. bounds, hence cannot be directly compared. Second, we see\nthem as complementary solvers, since, when they are applied\non the solved dual problem, their solution quality typically\nimproves. We have run every instance for 10 minutes on\na Intel i5-5200U CPU.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 19,
"total_chunks": 23,
"char_count": 1024,
"word_count": 172,
"chunking_strategy": "semantic"
},
{
"chunk_id": "eb9745b3-4590-4b03-b75d-c8ebae3f3789",
"text": "Per-dataset plots showing averaged\nlower bound over time can be seen in Figure 3. Dataset\nFigure 4. lower bound vs. run- statistics and final lower bounds averaged over datasets can\nFigure 5. Duality gap quantities\ntime on instance 1CKK of be seen in Table 1. Detailed results for each instance in each Ay,λ and By from (10) over\nthe protein folding dataset time for the synthetic 6 proj. dataset can be found in in Appendix A.\nsolved with FWMAP and differ- Solvers We focus our evaluation on methods that are able to dataset from discrete tomograent values of proximal weight c phy. handle general MAP inference problems in which the access\nfrom (6). to subproblems is given by min-oracles. FWMAP: Our solver as described in Section 2. We have excluded primal heuristics that do not solve a CB: The state-of-the-art bundle method ConicBunrelaxation corresponding to our Lagrangean decomposition dle [7], which does not treat individual subprobat all from comparison. First, they do not deliver any lower lems individually, but performs ascent on all Lagrangean multipliers λ simultaneously. 4.2. SA: Subgradient ascent with a Polyak step size rule. In [19] a dual decomposition based solver was proposed\nThis solver was also used in [12] for MRFs.\nfor the multi-label discrete tomography problem. The decomposition was optimized with ConicBundle [7]. For our\nAdditionally, we tested state-of-the-art versions of message\ndecomposition, message passing solvers are unfortunately\npassing (MP) solvers, when applicable.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 20,
"total_chunks": 23,
"char_count": 1516,
"word_count": 238,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4440171e-e659-4684-9b82-1e6df079a732",
"text": "MP is a popular\nnot applicable. The main problem seems that due to the\nmethod for MAP-MRF problems, and has recently been\nunary potentials being zero, min-marginals for all subprobapplied to the graph matching problem [38].\nlems are also zero. Hence any min-marginal based step used\nAll solvers optimize the same underlying linear program- in message passing will result in no progress. In other words,\nming relaxation. Additionally, all subgradient based solvers the initially zero Lagrangean multipliers are a local fix-point\nalso use the same decomposition. This ensures that we for message passing algorithms. Therefore, we only compare\ncompare the relevant solver methodology, not differences against SA and CB.\nbetween relaxations or decompositions. Datasets We compare on the synthetically generated text\nChoice of proximal weight c from (6) The performance of images from [19], denoted by synthetic. These are 32 ×\nour algorithm depends on the choice of the proximal weight 32 images of random objects with 2, 4 and 6 projections\nparameter c from (6).",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 21,
"total_chunks": 23,
"char_count": 1059,
"word_count": 166,
"chunking_strategy": "semantic"
},
{
"chunk_id": "bf787669-8821-48ca-b60a-eb6bf980d916",
"text": "A too large value will make each directions. We also compare on the classic sheep logan\nproximal step take long, while a too small value will mean image with resolution 64 × 64 and 256 × 256 and 2, 4 and\ntoo many proximal steps until convergence. This behaviour 6 projections.\ncan be seen in Figure 4, where we have plotted lower bound\n4.3. Graph matching.against time for an MRF problem and FWMAP with different\nchoices of proximal weight c. We see that there is an optimal As shown in [39, 44], Lagrangean decomposition based\nvalue of 100, with larger and smaller values having inferior solvers are superior to solvers based on primal heuristics also\nperformance. However, we can also observe that perfor- in terms of the quality of obtained primal solutions. In parmance of FWMAP is good for values an order of magnitude ticular, [38] has proposed a message passing algorithm that\nlarger or smaller, hence FWMAP is not too sensitive on c. It is typically the method of choice and is on par/outperforms\nis enough to choose roughly the right order of magnitude for other message passing and subgradient based techniques on\nthis parameter. most problems. We have observed that the more subproblems there are Datasets There are a few problems where message passing\nin a problem decomposition (1), the smaller the proximal based solvers proposed so far get stuck in suboptimal fixed\nweight c should be. Since more subproblems usually trans- points. This behaviour occurred e.g. on the dataset [2] in [38],\nlate to more complex dependencies in the decomposition, a which we denote by graph flow.\nsmaller value of c will be beneficial, as it makes the result- 4.4. Discussion\ning more complicated proximal steps better conditioned. A\ngood formula for c will hence be decreasing for increasing Our solver FWMAP achieved the highest lower bound on\nnumbers of subproblems |T|. We have taken three instances each instance. 
It was substantially better on the hardest\nout of the 50 we evaluated on and roughly fitted a curve and largest problems, e.g. on sheep logan.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 22,
"total_chunks": 23,
"char_count": 2057,
"word_count": 348,
"chunking_strategy": "semantic"
},
{
"chunk_id": "801e6278-8272-4f1e-97a2-36a39a576e76",
"text": "While message\nthat takes suitable values of proximal weight for these three passing solvers were faster (whenever applicable) in the\ninstances, resulting in beginning stages of the optimization, our solver FWMAP\nwas fastest among subgradient based ones and eventually\n1500000 achieved a higher lower bound than the message passing\nc = . (18)\n(|T| + 22)2 one. We also would like to mention that our solver had a\nmuch lower memory usage than the competing bundle solver\nDuality gap We have plotted the duality gap quantities CB. On the larger problem instances, CB would often use all\nAy,λ and By from (10) for the synthetic 6 proj. dataset available memory on our 8 GB machine.\nfrom discrete tomography in Figure 5.",
"paper_id": "1806.05049",
"title": "MAP inference via Block-Coordinate Frank-Wolfe Algorithm",
"authors": [
"Paul Swoboda",
"Vladimir Kolmogorov"
],
"published_date": "2018-06-13",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.05049v2",
"chunk_index": 23,
"total_chunks": 23,
"char_count": 714,
"word_count": 121,
"chunking_strategy": "semantic"
}
]