[
{
"chunk_id": "e3316a9d-daa9-468e-8ed6-ecccab26183f",
"text": "A Generalized Loop Correction Method\nfor Approximate Inference in Graphical Models Siamak Ravanbakhsh mravanba@ualberta.ca\nChun-Nam Yu chunnam@ualberta.ca\nRussell Greiner rgreiner@ualberta.ca\nDepartment of Computing Science, University of Alberta, Edmonton, AB T6G 2E8 CANADA Abstract NP-hard, typically involving a computation that is exponential in the number of variables. Belief Propagation (BP) is one of the most\npopular methods for inference in probabilis- When the conditional dependencies of the variables\ntic graphical models. BP is guaranteed to form a tree structure (i.e., no loops), this exact inferreturn the correct answer for tree structures, ence is tractable, and can be done by a message passbut can be incorrect or non-convergent for ing procedure, Belief Propagation (BP) (Pearl, 1988).\nloopy graphical models. Recently, several The Loopy Belief Propagation (LBP) system applies\nnew approximate inference algorithms based BP repeatedly to graph structures that are not trees\non cavity distribution have been proposed. (called \"loopy graphs\"); however, this provides only an\nThese methods can account for the effect of approximately correct solution (when it converges).\nloops by incorporating the dependency beLBP is related to the Bethe approximation to free tween BP messages. Alternatively, regionenergy (Heskes, 2003), which is the basis for min- based approximations (that lead to methods\nimization of more sophisticated energy approxima- such as Generalized Belief Propagation) imtions and provably convergent methods (Yedidia et al., prove upon BP by considering interactions\n2005; Heskes, 2006; Yuille, 2002). A representative within small clusters of variables, thus takclass of energy approximations is the region-graph ing small loops within these clusters into acmethods (Yedidia et al., 2005), which deal with a count. 
This paper introduces an approach,\nset of connected variables (called \"regions\"); these Generalized Loop Correction (GLC), that\nmethods subsume both the Cluster Variation Method benefits from both of these types of loop correction. We show how GLC relates to these (CVM) (Pelizzola, 2005; Kikuchi, 1951) and the Junction Graph Method (Aji & McEliece, 2001). Such two families of inference methods, then provide empirical evidence that GLC works ef- region-based methods deal with the short loops of the\ngraph by incorporating them into overlapping regions fectively in general, and can be significantly\n(see Figure 1(a)), and perform exact inference over more accurate than both correction schemes.\neach region. Note a valid region-based methods is exact if its region graph has no loops.\n1. Introduction\nA different class of algorithms, loop correction methMany real-world applications require probabilistic in- ods, tackles the problem of inference in loopy graphical\nference from some known probabilistic model (Koller models by considering the cavity distribution of vari-\n& Friedman, 2009). This paper will use probabilistic ables. A cavity distribution is defined as the marginal\ngraphical models, focusing on factor graphs (Kschis- distribution on Markov blanket of a single (or a cluster\nchang et al., 1998), that can represent both Markov of) variable(s), after removing all factors that depend\nNetworks and Bayesian Networks.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 0,
"total_chunks": 23,
"char_count": 3289,
"word_count": 478,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2a6779f8-6621-4edc-a16d-8674a2c12036",
"text": "The basic chal- on those initial variables. Figure 1(b) illustrates cavlenge of such inference is marginalization (or max- ity distribution, and also shows that the cavity varimarginalization) over a large number of variables. For ables can interact.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 1,
"total_chunks": 23,
"char_count": 250,
"word_count": 37,
"chunking_strategy": "semantic"
},
{
"chunk_id": "700eb10e-b605-4486-9cd3-43e778efcce8",
"text": "The key observation in these methdiscrete variables, computing the exact solutions is ods is that, by removing a variable xi in a graphical thAppearing in Proceedings of the 29 International Confer- model, we break all the loops that involve the varience on Machine Learning, Edinburgh, Scotland, UK, 2012.\nable xi, resulting in a simplified problem of findingCopyright 2012 by the author(s)/owner(s). A Generalized Loop Correction Method o J 6 o J o J\n14 S 2 i L 7 S 2 i L S 2 i L m j I 3 m j I 3 m j I 3 13 w T 1 k K 8 w T 1 k K w T 1 k K 5 s Y 4 t 5 s Y 4 t 5 s Y 4 t\n12 v W u Z v W u Z v W u Z",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 2,
"total_chunks": 23,
"char_count": 597,
"word_count": 144,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e8922840-e06a-486a-86e6-8128ddb13d29",
"text": "Part of a factor graph, where circles are variables (circle labeled \"i\" corresponding to variable \"xi\") and squares\n(with CAPITAL letters) represent factors. Note variables {xi, xk, xs} form a loop, as do {xk, xu, xt}, etc.\n(a) An example of absorbing short loops into overlapping regions. Here, a region includes factors around each hexagon\nand all its variables. Factor I and the variables xi, xj, xk appear in the three regions r1, r2, r3. (Figure just shows index\nα for region rα.) Region-based methods provide a way to perform inference on overlapping regions. (In general, regions\ndo not have to involve exactly 3 variables and 3 factors.)\n(b) Cavity variables for xs are {xw, xj, xk, xu, xv}, shown using dotted circles. We define the cavity distribution\nfor xs by removing all the factors around this variable, and marginalizing the remaining factor-graph on dotted circles. Even after removing factors {T, Y, W}, the variables xv, xw, and xj, xk, xu still have higher-order interactions caused by\nremaining factors, due to loops in the factor graph.\n(c) Cavity region r1 = {j, s, k} includes variables shown in pale circles. Variables in dotted circles are the perimeter ⊖r1. Removing the \"pale factors\" and marginalizing the rest of network on ⊖r1, gives the cavity distribution for r1. the cavity distribution. The marginals around xi can mations, for a limited setting.\nthen be recovered by considering the cavity distribuSection 2 explains the notation, factor graph represention and its interaction with xi.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 3,
"total_chunks": 23,
"char_count": 1521,
"word_count": 249,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1c816ca6-4cf5-4fe7-b7c6-85e5058f5773",
"text": "This is the basis for tation and preliminaries for GLC. Section 3 introduces\nthe loop correction schemes by Montanari & Rizzo's a simple version of GLC that works with regions that\n(2005) on pairwise dependencies over binary variables,\npartition the set of variables; followed by its extenand also Mooij & Kappen's (2007) extension to general\nsion to the more general algorithm. Section 4 presents\nfactor graphs – called Loop Corrected Belief Propaga- empirical results, comparing our GLC against other\ntion (LCBP).\napproaches. This paper defines a new algorithm for probabilistic\ninference, called Generalized Loop Correction (GLC), 2. Framework\nthat uses a more general form of cavity distribution,\ndefined over regions, and also a novel message passing 2.1. Notation\nscheme between these regions that uses cavity distriLet X = {X1, X2, . . . , XN} be a set of N discrete-butions to correct the types of loops that result from\nvalued random variables, where Xi ∈Xi. Supposeexact inference over each region. GLC's combinatheir joint probability distribution factorizes into a\ntion of loop corrections is well motivated, as regionproduct of non-negative functions:\nbased methods can deal effectively with short loops in 1\nthe graph, and the approximate cavity distribution is P(X = x) := Y ψI(xI)\nknown to produce superior results when dealing with I∈F\nwhere each I ⊆{1, 2, . . . , N} is a subset of the vari-long influencial loops (Mooij & Kappen, 2007).\nable indices, and xI = {xi | i ∈I} is the set of\nIn its simplest form, GLC produces update equations values in x indexed by the subset I. Each factor\nsimilar to LCBP's; indeed, under a mild assumption, ψI : Qi∈I Xi →[0, ∞) is a non-negative function, and\nGLC reduces to LCBP for pairwise factors. In its gen- F is the collection of indexing subsets I for all the\neral form, when not provided with information on cav- factors ψI. 
Below we will use the term \"factor\" interity variable interactions, GLC produces results similar changeably with the function ψI and subset I, and the\nto region-based methods. We theoretically establish term \"variable\" interchangeably for the value xi and\nthe relation between GLC and region-based approxi- index i. Here Z is the partition function. A Generalized Loop Correction Method",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 4,
"total_chunks": 23,
"char_count": 2271,
"word_count": 376,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0d2dfa28-348b-4636-a93a-4a50dc027866",
"text": "This model can be conveniently represented as a bipar- r, is defined over the variables indexed by ⊖r, as:\ntite graph, called the factor graph (Kschischang et al., P \\r(x⊖r) ∝ X ψF\\N(r)(x) = X Y ψI(xI)\n1998), which includes two sets of nodes: variable nodes x\\⊖r x\\⊖r I /∈N(r)\nxi, and factor nodes ψI. A variable node xi is con- Here the summation is over all variables but the ones\nnected to a factor node ψI if and only if i ∈I. We indexed by ⊖r.\nuse the notation N(i) to denote the neighbors of variable xi in the factor graph – i.e., the set of factors In Figure 1(c), this is the distribution obtained by redefined by N(i) := {I ∈F | i ∈I}. To illustrate, moving factors N(r1) = {I, T, Y, K, S, W} from the\nusing Figure 1(a): N(j) = {I, T, S} and T = {j, s, w}. factor gaph and marginalizing the rest over dotted circles, ⊖r1. We use the shorthand ψA(x) := QI∈A(xI) to denote the product of factors in a set of factors A. For The core idea to our approach is that each cavity remarginalizing all possible values of x except the ith gion r can produce reliable probability distribution\nvariable, we define the notation: over r, given an accurate cavity distribution estimate\nX f(x) := X f(x). over the surrounding variables ⊖r. Given the exact\nx\\i xj∈Xj,j̸=i cavity distribution P \\r over ⊖r, we can recover the\nSimilarly for a set of variables r, we use the notation exact joint distribution Pr over ⊕r by:\nPx\\r to denote marginalization of all variables apart\nfrom those in r. Pr(x⊕r) ∝P \\r(x⊖r)ψN(r)(x) = P \\r(x⊖r) Y ψI(xI) . Generalized Cavity Distribution\nIn practice, we can only obtain estimates ˆP \\r(x⊖r)\nThe notion of cavity distribution is borrowed from so- of the true cavity distribution P \\r(x⊖r). However,\ncalled cavity methods from statistical physics (M´ezard suppose we have multiple cavity regions r1, r2, . . . , rM\n& Montanari, 2009), and has been used in analysis that collectively cover all the variables {x1, . . . 
, xN}.\nand optimization of important combinatorial prob- If ⊖rp intersects with rq, we can improve the estilems (M´ezard et al., 2002; Braunstein et al., 2002). mate of ˆP \\rp(x⊖rp) by enforcing marginal consistency\nThe basic idea is to make a cavity by removing a vari- of ˆPrp(x⊕rp) with ˆPrq(x⊕rq) over the variables in\nable xi along with all the factors around it, from the their intersection. This suggests an iterative correcfactor graph (Figure 1(b)). We will use a more general tion scheme that is very similar to message passing.\nnotion of regional cavity, around a region. In Figure 1(a), let each hexagon (over variables\nDefinition A cavity region is a subset of variables r ⊆ and factors) define a cavity region, here r1, . . . , r5.\n{1, . . . , N} that are connected by a set of factors – i.e., Note r1 can provide good estimates over {j, s, k},\nthe set of variable nodes r and the associated factors given good approximation to cavity distribution over\nN(r) := {N(i) | i ∈r} forms a connected component {o, i, m, t, u, v, w}. This in turn can be improved by\non the factor graph. neighboring regions; e.g., r2 gives a good approximation over {i, o}, and r3 over {i, m}. Starting from an\nFor example in Figure 1(a), the variables indexed initial cavity distribution ˆP0\\rα , for each cavity region\nby r1 = {j, k, s} define a cavity region with factors α ∈{1, . . . , 14}, We perform this improvement for all\nN(r1) = {I, T, Y, S, W, K} cavity regions, in iterations until convergence.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 5,
"total_chunks": 23,
"char_count": 3438,
"word_count": 630,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cca3f00e-4e54-43dc-8a30-efd8674b2090",
"text": "Remark A \"cavity region\" is different from common When we start with a uniform cavity distribution ˆP0\\rp\nnotion of region in region-graph methods, in that a for all regions, the results are very similar to those of\ncavity region includes all factors in N(r) (and nothing CVM. The accuracy of this approximation depends on\nmore), while common regions allow a factor I to be a the accuracy of the initial ˆP0\\rp .\npart of a region only if I ⊆r. Following Mooij (2008), we use variable clamping to\nThe notation ⊕r := {i ∈I | I ∈N(r)} denotes estimate higher-order interactions in ⊖r: Here, we esthe cavity region r with its surrounding variables, and timate the partition function Zx⊖r after removing fac-\n⊖r := ⊕r \\ r denotes just the perimeter of the cavity tors in N(r) and fixing x⊖r to each possible assignregion r. In Figure 1(c), the dotted circles show the ment. Doing this calculation, we have ˆP \\r(x⊖r) ∝\nindices ⊖r1 = {o, i, m, t, u, v, w} and their union with Zx⊖r. In our experiments, we use the approximation\nthe pale circles defines ⊕r1. to the partition function provided using LBP. However\nthere are some alternatives to clamping: conditioning\nDefinition The Cavity Distribution, for cavity region scheme Rizzo et al. (2007) makes it possible to use",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 6,
"total_chunks": 23,
"char_count": 1265,
"word_count": 221,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ae59935c-51f7-4aff-8f5b-c5a92b6e1d7d",
"text": "A Generalized Loop Correction Method any method capable of marginalization for estimation sistency condition:\nof cavity distribution (clamping requires estimation of\nXˆPrp(x⊕rp)ψN(rp)∩N(rq)(x)−1=XˆPrq(x⊕rq)ψN(rp)∩N(rq)(x)−1,partition function). It is also possible to use techniques\nin answering joint queries for this purpose (Koller & x\\⊖rp,q x\\⊖rp,q\n(2)\nFriedman (2009)).\nwhich we can use to derive update equations for mq→p. Using clamping for this purpose also means that, if the\nStarting from the LHS of Eqn (2),\nresulting network, after clamping, has no loops, then\nX ˆPrp(x⊕rp)ψN(rp)∩N(rq)(x)−1ˆPr(x⊕r) is exact – hence GLC produces exact results\nx\\⊖rp,q\nif for every cluster r, removing ⊕r results in a tree.\n∝ X ˆP0\\rp (x⊖rp)ψN(rp)\\N(rq)(x) Y mq′→p(x⊖rp,q′ )\nx\\⊖rp,q q′∈Nb(p)\n3. Generalized Loop Correction ∝mq→p(x⊖rp,q) X ˆP0\\rp(x⊖rp)ψN(rp)\\N(rq)(x) Y mq′→p(x⊖rp,q′).\n3.1. Simple Case: Partitioning Cavity Regions x\\⊖rp,q q′∈Nb(p)\nq′̸=q\nTo introduce our approach, first consider a simpler case where the cavity regions r1, . . . , rM form Setting this proportional to the RHS of Eqn (2), we\nhave the update equationa (disjoint and exhaustive) partition of the variables\n{1, . . . , N}. mnewq→p(x⊖rp,q)\nLet ⊖rp,q := (⊖rp) ∩rq denote the intersection of the P ˆPrq(x⊕rq)ψN(rp)∩N(rq)(x)−1\nperimeter ⊖rp of rp with another cavity region rq. ∝ x\\⊖rp,q\n(Note ⊖rp,q ̸= ⊖rq,p). As r1, . . . , rM is a partition, P ˆP0\\rp (x⊖rp)ψN(rp)\\N(rq)(x) Q mq→p(x⊖rp,q′ )\nx\\⊖rp,q q′∈Nb(p)\neach perimeter ⊖rp is a disjoint union of ⊖rp,q for q′̸=q\nq = 1 . . . M (some of which might be empty if rp and rq P ˆPrq(x⊕rq)ψN(rp)∩N(rq)(x)−1\nare not neighbors). Let Nb(p) denote the set of regions x\\⊖rp,q\n∝ mq→p(x⊖rp,q) (3)\nq with ⊖rp,q ̸= ∅. We now consider how to improve the P ˆPrp(x⊕rp)ψN(rp)∩N(rq)(x)−1\ncavity distribution estimate over ⊖rp through update x\\⊖rp,q\nmessages sent to each of the ⊖rp,q.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 7,
"total_chunks": 23,
"char_count": 1888,
"word_count": 294,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3a2e27f4-8a60-4fae-8cba-a8e42511d98a",
"text": "The last line follows from multiplying the numerator\nIn Figure 1(a), the regions r2, r4, r5, r7, r11, r14 form and denominator by the current version of the message\na partitioning. Here, r2 with {m, k, s, w} ⊂⊖r2, re- mq→p. At convergence, when mq→p equals mnewq→p, the\nceives updates over ⊖r2,7 = {m} from r7 and up- consistency constraints are satisfied. By repeating this\ndates over ⊖r2,4 = {k} from r4. This last update update in any order, after convergence, the ˆPr(x⊕r)s\nensures Px\\{k} ˆPr2(x⊕r2) = Px\\{k} ˆPr4(x⊕r4). To- represent approximate marginals over each region.\nwards enforcing this equality, we introduce a message\nThe following theorem stablishes the relation between\nm4→2(x⊖r2,4) into distribution over ⊕r2. GLC and CVM in a limited setting. Here, the distribution over ⊕rp becomes: ˆPrp(x⊕rp) ∝\nTheorem 1 If the cavity regions partition the vari-\nˆP0\\rp (x⊖rp)ψN(rp)(x⊕rp) Y mq→p(x⊖rp,q), (1) ables and all the factors involve no more than 2 variq∈Nb(p) ables, then any GBP fixed point of a particular CVM\nwhere ˆPrp denotes our estimate of the true distribu- construction (details in Appendix A) is also a fixed\ntion Prp. point for GLC, starting from uniform cavity distributions ˆP0\\r = 1. (Proof in Appendix A.)The messages mq→p can be recovered by considering marginalization constraints. When rp and rq\nCorollary 1 If the factors have size two and there areare neighbors, their distributions ˆPrp(x⊕rp) and\nno loops of size 4 in the factor graph, for single variable\nˆPrq(x⊕rq) should satisfy partitioning with uniform cavity distribution, any fixed\nX ˆPrp(x⊕rp) = X ˆPrq(x⊕rq). points of LBP can be mapped to fixed points of GLC.\nx\\⊕rp∩⊕rq x\\⊕rp∩⊕rq\nWe can divide both sides by the factor product Proof If there are no loops of size 4 then no two fac-\nψN(rp)∩N(rq)(x), as the domain of the factors in tors have identical domain. 
Thus the factors are all\nN(rp) ∩N(rq) is completely contained in ⊕rp ∩⊕rq maximal and GBP applied to CVM with maximal facand independent of the summation. Hence we have\nˆPrp(x⊕rp) ˆPrq(x⊕rq) tor domains, is the same as LBP. On the other hand\nX = X (refering to CVM construction of Appendix A) under ψN(rp)∩N(rq)(x) ψN(rp)∩N(rq)(x)\nx\\⊕rp∩⊕rq x\\⊕rp∩⊕rq the given condition, GLC with single variable partiAs ⊖rp,q ⊂⊕rp ∩⊕rq , this implies the weaker con- tioning shares the fixed points of GBP applied to CVM A Generalized Loop Correction Method",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 8,
"total_chunks": 23,
"char_count": 2398,
"word_count": 393,
"chunking_strategy": "semantic"
},
{
"chunk_id": "84e2ad60-1bb7-458d-8584-fa576af1c817",
"text": "with maximal factors. Therefore GLC shares the fixed\npoints of LBP. { o, i} { i , m} { m, t} { t, u} { v, w} Theorem 2 If all factors have size two and no two\nfactors have the same domain, GLC is identical to {o} {m} {t}\nLCBP under single variable partitioning. Proof Follows from comparison of two update equa- Figure 2. The ⊖r1-region-graph consisting of all the mestions – i.e., Eqn (3) and Eqn (5) in (Mooij & Kappen, sages to r1. The variables in each region and its counting\n2007)– under the assumptions of the theorem. number are shown. The upward and downward messages\nare passed along the edges in this ⊖r1-region-graph.\n3.2. General Cavity Regions\nthe M¨obius formula:\nWhen cavity regions do not partition the set of vari- cn(ρ) := 1 − X cn(ρ′)\nρ′∈A(ρ)\nables, the updates are more involved. As the perimeter where A(ρ) is the set of ancestors of ρ.\n⊖rp is no longer partitioned, the ⊖rp,q's are no longer\nWe can now define the belief over cavity regions rp as:disjoint. For example in Figure 1, for r1 we have ⊖r1,2 ={o, i}, ˆPrp(x⊕rp) ∝ˆP0\\rp (x⊖rp)ψN(rp)(x⊕rp) Y brp(xρ)cn(ρ) (4)\n⊖r1,3 = {i, m}, ⊖r1,4 = {t, u}, ⊖r1,5 = {v, w} and also ρ∈⊖Rp\n⊖r1,6 = {i}, ⊖r1,7 = {m}, ⊖r1,8 = {m, t}, ⊖r1,9 = {t},\nThis avoids any double-counting of variables, and re-etc. This means xi appears in messages m2→1, m3→1\nduces to Eqn (1) in the case of partitioning cavity re-and m6→1.\ngions. Directly adopting the correction formula for ˆPr in\nTo apply Eqn (4) effectively, we need to enforceEqn (1) as a product of messsages over ⊖rp,q could\nmarginal consistency of the intersection regions withdouble-count variables. To avoid this problem, we\ntheir parents, which can be accomplished via messageadopt a strategy similar to CVM to discount extra\npassing in a downward pass, Each region ρ′ sendscontributions from overlapping variables in ⊖rp. 
For\nto each of its child ρ, its marginal over the child'seach cavity region rp, we form a ⊖rp-region graph\nvariables:(Figure 2) with the incoming messages forming the\ndistributions over top regions. For computational rea- µρ′→ρ(xρ) := Xx\\ρ brp(xρ′)\nsons, we only consider maximal ⊖rp,q domains.1 here, Then set the belief over each child region to be the\nthis means dropping m6→1 as ⊖r1,6 ⊂⊖r1,2 and so on. geometric average of the incoming messages:\n|pr(ρ)|Our region-graph construction is similar to brp(xρ) := Yρ′∈pr(ρ) µρ′→ρ(xρ)\nCVM (Pelizzola, 2005) – i.e., we construct new\nsub-regions as the intersection of ⊖rp,q's, and we The downward pass updates the child regions in ⊖Rp\\\nrepeat this recursively until no new region can be ⊖ROp .",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 9,
"total_chunks": 23,
"char_count": 2577,
"word_count": 444,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c9fac19a-32f6-4c7b-b944-6d9c7d4480d2",
"text": "We update the beliefs at the top regions using\nadded. We then connect each sub-region to its a modified version of Eqn (3): brp(x⊖rp,q) ∝\nimmediate parent. Figure 2 shows the ⊖r1-region\ngraph for the example of Figure 1(a). If the cavity P ˆPrq(x⊕rq)ψN(rq)∩N(rp)(x⊕rq)−1\nregions are a partition, the ⊖rp-region graph includes x\\⊖rp,q beffrp (x⊖rp,q)cn(ρ), (5)\nonly the top regions. Below we use ⊖Rp to denote P ˆPrp(x⊕rp)ψN(rp)∩N(rq)(x⊕rp)−1\nthe ⊖rp-region graph for rp; ⊖ROp to denote its top x\\⊖rp,q\n(outer) regions; and brp(xρ) to denote the belief over for all top regions ⊖rp,q ∈⊖ROp .\nregion ρ in ⊖rp-region graph. For top-regions, the\ninitial belief is equal to the basic messages obtained Here beffrp (x⊖rp,q) is the effective old message over\nusing Eqn (3). ⊖rp,q: Next we assign \"counting numbers\" to regions, in beffrp (x⊖rp,q) = X Y brp(xρ)\na way similar to CVM: top regions are assigned x\\⊖rp,q ρ∈⊖Rp\ncn(⊖rp,q) = 1, and each sub-region ρ is assigned using That is, in the update equation, we need the calculation of the new message to assume this value as the 1This does not noticably affect the accuracy in our experiments. When using uniform cavity distributions, the old message from q to p. This marginalization is imresults are identical. portant because it allows the belief at the top region A Generalized Loop Correction Method",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 10,
"total_chunks": 23,
"char_count": 1348,
"word_count": 223,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e663e84e-c681-4990-a0e4-be06a966f6d9",
"text": "brp(x⊖rp,q) to be influenced by the beliefs brp(xρ) of\nthe sub-regions after a downward pass. It enforces\nmarginal consistency between the top regions, and at\nconvergence we have beffrp (x⊖rp,q) = brp(x⊖rp,q). Notice also Eqn (5) is equivalent to the old update Eqn (3)\nin the partitioning case. To calculate this marginalization more efficiently,\nGLC uses an upward pass in the ⊖rp-region-graph.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 11,
"total_chunks": 23,
"char_count": 396,
"word_count": 62,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0ecc61eb-fa4d-4086-95dc-396c15fefd31",
"text": "Starting from the parents of the lowest regions, we define beffrp (xρ) as: beffr (xρ)\nbeffrp (xρ′) := brp(xρ′) Yρ∈ch(ρ′) µρ→ρ′(xρ) Figure 4. Time vs error for 3-regular Ising models with local field and interactions sampled from a standard normal. Each method in the graph has 10 points, each representingReturning to the example, the previous text provides\na method to update ˆPr1(x⊕r1). GLC performs this an Ising model of different size (10 to 100 variables).\nfor the remaining regions as well, and then iterates\n4.1.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 12,
"total_chunks": 23,
"char_count": 520,
"word_count": 85,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3a3321a7-7043-4b78-8ba1-1a7a0716c350",
"text": "Grids\nthe entire process until convergence – i.e., until the\nchange in all distributions is less than a threshold. We experimented with periodic Ising grids in which\nxi ∈{−1, +1} is a binary variable and the probability distribution of a setting when xi and xj4. Experiments\nare connected in the graph is given by P(x) ∝\nThis section compares different variations of our exp( Pi θixi + 12 Pi,j∈I Ji,jxixj ) where Ji,j controls\nmethod against LBP as well as CVM, LCBP and variable interactions and θi defines a single node poTreeEP (Minka & Qi, 2003) methods, each of which tential – a.k.a. a local field. In general, smaller local\nperforms some kind of loop correction. For CVM, we fields and larger variable interactions result in more\nuse the double-loop algorithm of (Heskes, 2006), which difficult problems. We sampled local fields indepenis slower than GBP but has better convergence prop- dently from N(0, 1) and interactions from N(0, β2).\nerties. All methods are applied without any damping.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 13,
"total_chunks": 23,
"char_count": 999,
"word_count": 167,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f7b02c9a-b9d1-44dd-9685-a088e1dd2e25",
"text": "Figure 3(left) summarize the results for 6x6 grids for\nWe stop each method after a maximum of 1E4 itera- different values of β.\ntions or if the change in the probability distribution\nWe also experimented with periodic grids of different\n(or messages) is less than 1E-9. We report the time\nsizes, generated by sampling all factor entries indepenin seconds and the error for each method as the avdently from N(0, 1). Figure 3(middle) compares the\nerage of absolute error in single variable marginals –\ncomputation time and error of different methods for\ni.e., Pxi,v |ˆP(xi =v)−P(xi =v)|. For each setting, we grids of sizes that range from 4x4 to 10x10.\nreport the average results over 10 random instances of\nthe problem. We experimented with grids, 3-regular\n4.2. Regular Graphsrandom graphs, and the ALARM network as typical\nbenchmark problems.2 We generated two sets of experiments with random\nBoth LCBP and GLC can be used with a uniform 3-regular graphs (all nodes have degree 3) over 40\nvariables. Here we used Ising model when both localinitial cavity or with an initial cavity distribution esfields and couplings are independently sampled fromtimated via clamping cavity variables. In the experN(0, β2). Figure 3(right) show the time and error foriments, full and uniform refer to the kind of cavity\ndistribution used. We use GLC to denote the par- different values of β. Figure 4 shows time versus error\nfor graph size between 10 to 100 nodes for β = 1. Fortitioning case, and GLC+ when overlapping clusters\nlarger βs, few instances did not converge within allo-of some form are used. For example, GLC+(Loop4,\ncated number of iterations. The results are for casesfull) refers to a setting with full cavity that contains\nin which all methods converged.all overlapping loop clusters of length up to 4. If a\nfactor does not appear in any loops, it forms its own\ncluster.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 14,
"total_chunks": 23,
"char_count": 1874,
"word_count": 312,
"chunking_strategy": "semantic"
},
{
"chunk_id": "89530bd3-987b-465c-a575-26c801760be3",
"text": "The same form of clusters are used for CVM. 2The evaluations are based on implementation in libdai\ninference toolbox (Mooij, 2010).",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 15,
"total_chunks": 23,
"char_count": 131,
"word_count": 21,
"chunking_strategy": "semantic"
},
{
"chunk_id": "bcaefb95-7154-43ee-ada8-8be3dd640bcd",
"text": "A Generalized Loop Correction Method Average Run-time and accuracy for: (Left) 6x6 spinglass grids for different values of β. Variable interactions\nare sampled from N(0, β2), local fields are sampled from N(0, 1). (Middle) various grid-sizes: [5x5, . . . , 10x10]; Factors\nare sampled from N(0, 1). (Right) 3-regular Ising models with local field and interactions sampled from N(0, β2). lacking in general single-loop GBP implementations. Performance of varoius methods on Alarm\nMethod Time(s) Avg. Error GLC's time complexity (when using full cavity, and\nLBP 3.00E-2 8.14E-3 using LBP to estimate the cavity distribution) is\nTreeEP 1.00E-2 2.02E-1 O(τMN|X|u + λM|X|v)), where λ is the number of\nCVM (Loop3) 5.80E-1 2.10E-3\nCVM (Loop4) 7.47E+1 6.35E-3 iterations of GLC, τ is the maximum number of itCVM (Loop5) 1.22E+3 1.21E-2 erations for LBP, M is the number of clusters, N\nCVM (Loop6) 5.30E+4 1.29E-2 is the number of variables, u = maxp | ⊖rp| and\nLCBP (Full) 3.87E+1 1.07E-6 v = maxp | ⊕rp|. Here the first term is the cost of\nGLC+ (Factor, Uniform) 6.69E 0 3.26E-4\nestimating the cavity distributions and the second is GLC+ (Loop3, Uniform) 6.71E 0 4.58E-4\nGLC+ (Loop4, Uniform) 4.65E+1 3.35E-4 the cost of exact inference on clusters. This makes\nGLC+ (Factor, Full) 1.23E+3 1.00E-9 GLC especially useful when regional Markov blankets\nGLC+ (Loop3, Full) 1.36E+3 1.00E-9 are not too large. GLC+ (Loop4, Full) 1.79E+3 1.00E-9\n5. Alarm Network We introduced GLC, an inference method that proAlarm is a Bayesian network with 37 variables and vide accurate inference by utilizing the loop correc-\n37 factors. Variables are discrete, but not all are bi- tion schemes of both region-based and recent cavitynary, and most factors have more than two variables. based methods. Experimental results on benchmarks\nTable(1) compares the accuracy versus run-time of dif- support the claim that, for difficult problems, these\nferent methods. 
GLC with factor domains as regions – schemes are complementary and our GLC can suci.e., rp = I for I ∈F – and all loopy clusters produces cessfully exploit both. We also believe that our scheme\nexact results up to the convergence threshold. motivates possible variations that can also deal with\ngraphical models with large Markov blankets.\n4.4. Acknowledgements\nThese results show that GLC consistently provides We thank the anonymous reviewers for their excellent demore accurate results than both CVM and LCBP, al- tailed comments.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 16,
"total_chunks": 23,
"char_count": 2467,
"word_count": 393,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d31035fb-8e99-4b53-b53b-c5e9a2277b28",
"text": "This research was partly funded by\nthough often at the cost of more computation time. NSERC, Alberta Innovates – Technology Futures (AICML)\nThey also suggest that one may not achieve this trade- and Alberta Advanced Education and Technology.\noffbetween time and accuracy simply by including References\nlarger loops in CVM regions. When used with uniform\ncavity, the performance of GLC (specifically GLC+) Aji, S and McEliece, R. The Generalized distributive law\nis similar to CVM, and GLC appears stable, which is and free energy minimization. In Allerton Conf, 2001.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 17,
"total_chunks": 23,
"char_count": 567,
"word_count": 89,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b55a8e4c-1aa5-4b60-89af-e8a9065fd303",
"text": "Braunstein, A., M´ezard, M., and Zecchina, R. Survey propA Generalized Loop Correction Method agation: an algorithm for satisfiability. TR, 2002. of 1, while each subregion has a counting number of −1. Stable fixed points of loopy belief propagation Since we assume the cavity regions rp form a partition and\nare local minima of the Bethe free energy. In NIPS, 2003. each factor contains no more than 2 variables, this region\nHeskes, T. Convexity arguments for efficient minimization graph construction counts each variable and each factor\nof the Bethe and Kikuchi free energies. JAIR, 26, 2006. exactly once.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 18,
"total_chunks": 23,
"char_count": 609,
"word_count": 97,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a6704e0f-bcfa-44e8-8d46-6cd0bfc7b81f",
"text": "A theory of cooperative phenomena. We focus on the parent-to-child algorithm for GBP. For the specific region graph construction outlined, weKoller, D. and Friedman, N. Probabilistic Graphical Modhave 2 types of messages: internal region to subregion els: Principles and Techniques. 2009.\nmessage (µisq→p sent from Rintq to Rsubq,p ), and bridge regionKschischang, F, Frey, B, and Loeliger, H. Factor graphs\nand the sum-product algorithm. IEEE Info Theory, 47, to subregion message (µbsq→p sent from Rbrq,p to Rsubq,p ).\n1998. Note that Rsubq,p and Rsubp,q are the intersection of Rbrp,q\nM´ezard, M. and Montanari, A. Information, physics, and with Rintq and Rintp respectively. We use the notation\ncomputation. Oxford, 2009. µ to differentiate from messages m used in GLC. Below\nM´ezard, M, Parisi, G, and Zecchina, R.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 19,
"total_chunks": 23,
"char_count": 819,
"word_count": 126,
"chunking_strategy": "semantic"
},
{
"chunk_id": "00476a45-ef90-44d9-9aea-a82b94315831",
"text": "Analytic and algo- we drop the arguments to make the equations more\nrithmic solution of random satisfiability problems. The parent-to-child algorithm uses the following\nence, 2002. fixed-point equations:\nMinka, T and Qi, Y. Tree-structured approximations by µisq→p ∝Px\\Rsubq,p ψRintq Qq′∈Nb(q),q′̸=p µbsq′→q expectation propagation. In NIPS, 2003.\nµbsq→p ∝Px\\Rsubq,p ψRbrq,pµisq→pMontanari, A and Rizzo, T. How to compute loop corrections to the Bethe approximation. J Statistical Mechanics, 2005. Suppose GBP converges to a fixed point with messages\nMooij, J. Understanding and Improving Belief Propaga- µisq→p and µbsq→p satisfying the fixed point conditions above;\ntion. PhD thesis, Radboud U, 2008. we show that messages defined by mq→p := µisq→p are fixed\nMooij, J. libDAI: A free and open source C++ library points of update Eqn (3) – i.e., satisfy the consistency confor discrete approximate inference in graphical models. dition of Eqn (2)\nJMLR, 2010. \\r\nMooij, J and Kappen, H. Loop corrections for approximate Assuming uniform initial cavity ˆP0 = 1, for LHS of\ninference on factor graphs. Eqn (2), we have\nPearl, J.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 20,
"total_chunks": 23,
"char_count": 1126,
"word_count": 171,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9d875be0-c81b-440a-a327-0739bd01e214",
"text": "Probabilistic reasoning in intelligent systems. P x\\⊖rp,q ˆPrp(x⊕rp)ψN(rp)∩N(rq)(x)−1 1988.\n∝ mq→p Px\\⊖rp,q ψN(rp)\\N(rq) Qq′∈Nb(p),q′̸=q mq′→pPelizzola, A. Cluster variation method in statistical physics\nand probabilistic graphical models. J Physics A, 2005. ∝ mq→p = µisq→p,\nRizzo, T, Wemmenhove, B, and Kappen, H. On cavity as the domain of the expression inside the summation sign\napproximations for graphical models. J Physical Review, is disjoint from ⊖rp,q.\n76(1), 2007. As for the RHS of Eqn (2) we have\nYedidia, J, Freeman, W, and Weiss, Y.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 21,
"total_chunks": 23,
"char_count": 548,
"word_count": 82,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8fc476c8-c130-4407-a7ee-60b18fa9115c",
"text": "Constructing free\nenergy approximations and generalized belief propaga- X ˆPrq(x⊕rq)ψN(rp)∩N(rq)(x)−1\ntion algorithms. IEEE Info Theory, 2005.\nx\\⊖rp,q\nYuille, A. CCCP algorithms to minimize the Bethe and\nKikuchi free energies. Neural Computation, 2002. ∝ X ψN(rq)ψN(rp)∩N(rq)(x)−1 Y mq′→q\nx\\⊖rp,q q′∈Nb(q)\nA. Appendix\nX ∝ Y µisq′→q (ψRintq Y ψRbrq′,q)ψRbrp,q(x)−1We prove the equality of GLC to CVM, in the setting q′∈Nb(q) q′∈Nb(q) x\\Rsub q,p\nwhere each factor involves no more than 2 variables and\nq′→q (6)the cavity distributions ˆP \\r(x⊖r) is uniform.3 ∝ X ψRintq Y ψRbrq′,qµis\nx\\Rsubq,p q′∈Nb(q),q′̸=pConsider the following CVM region-graph:\nq′→q (7) • internal region (Rintp ): it contains all the variables in ∝ X ψRintq Y X ψRbrq′,qµis\nrp, and factors that are internal to rp – i.e., {I ∈F | x\\Rsubq,p q′∈Nb(q),q′̸=p x\\Rsubq′,q\nI ⊆rp}.\n∝ X ψRintq Y µbsq′→q ∝µisq→p • bridge region (Rbrp,q): it contains all the variables and\nx\\Rsubq,p q′∈Nb(q),q′̸=p factors that connect rp and rq — i.e., variables ⊕rp ∩\n⊕rq and factors N(rp) ∩N(rq).\n• sub region (Rsubp,q ): the intersection of internal Rintp Removing µisp→q in line (6) is valid because, in the absence\nand bridge Rbrp,q. It contains only variables and no of ψRbrp,q, its domain is disjoint from the rest of the terms.\nfactors. (Note Rsubp,q = ⊖rq,p) Moving the summation inside the product in line (7) is\nvalid because partitioning guarantees that product terms'\ndomains have no overlap and they are also disjoint fromNote each internal and bridge region has a counting number\nψRintq .\n3 To differentiate from GLC's cavity regions r, we use\nThus the LHS and RHS of Eqn (2) agrees and mq→p :=the capital notation R to denote the corresponding region\nin the CVM region graph construction. µisq→p is a fixed point of GLC.",
"paper_id": "1206.4654",
"title": "A Generalized Loop Correction Method for Approximate Inference in Graphical Models",
"authors": [
"Siamak Ravanbakhsh",
"Chun-Nam Yu",
"Russell Greiner"
],
"published_date": "2012-06-18",
"primary_category": "cs.AI",
"arxiv_url": "http://arxiv.org/abs/1206.4654v1",
"chunk_index": 22,
"total_chunks": 23,
"char_count": 1780,
"word_count": 290,
"chunking_strategy": "semantic"
}
]