researchpilot-data / chunks /1203.3887_semantic.json
Subhadip007's picture
feat: Upload full 358k vector database
e8a2c2e
[
{
"chunk_id": "00a56f14-577d-468a-b6d5-eaa3b9ddc425",
"text": "The Annals of Statistics\n2013, Vol. 41, No. 2, 401–435\n⃝Institutec of Mathematical Statistics, 2013 LEARNING LOOPY GRAPHICAL MODELS WITH LATENT\nVARIABLES: EFFICIENT METHODS AND GUARANTEES By Animashree Anandkumar1 and Ragupathyraj Valluvan22013 University of California, Irvine The problem of structure estimation in graphical models with la-Apr tent variables is considered. We characterize conditions for tractable\ngraph estimation and develop efficient methods with provable guarantees. We consider models where the underlying Markov graph is22\nlocally tree-like, and the model is in the regime of correlation decay. For the special case of the Ising model, the number of samples n required for structural consistency of our method scales as\nn = Ω(θ−δη(η+1)−2min log p), where p is the number of variables, θmin\nis the minimum edge potential, δ is the depth (i.e., distance from\na hidden node to the nearest observed nodes), and η is a parameter\nwhich depends on the bounds on node and edge potentials in the Ising\nmodel. Necessary conditions for structural consistency under any al-[stat.ML]\ngorithm are derived and our method nearly matches the lower bound\non sample requirements. Further, the proposed method is practical\nto implement and provides flexibility to control the number of latent\nvariables and the cycle lengths in the output graph. Learning latent variable models from observed samples\ninvolves mainly two tasks: discovering relationships between the observed\nand hidden variables, and estimating the strength of such relationships. One\nof the simplest latent variable models is the so-called latent class model or\nn¨aive Bayes model, where the observed variables are conditionally independent given the state of the latent factor. An extension of these models are\nlatent tree models with many hidden variables forming a tree hierarchy.\nspecies in bio-informatics (popularly known as phylogenetics) [21, 43], for",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 0,
"total_chunks": 68,
"char_count": 1931,
"word_count": 291,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e0bcf889-06f0-4be2-b016-fa923960d66c",
"text": "Received March 2012; revised October 2012.\n1Supported in part by NSF Award CCF-1219234, AFOSR Award FA9550-10-1-0310\nand ARO Award W911NF-12-1-0404.\n2Supported by the ONR Award N00014-08-1-1015. AMS 2000 subject classifications. Primary 62H12; secondary 05C12.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 1,
"total_chunks": 68,
"char_count": 260,
"word_count": 34,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0747aa84-6011-4497-a789-e52bc0a1d93e",
"text": "Key words and phrases. Graphical model selection, latent variables, quartet methods. This is an electronic reprint of the original article published by the\nInstitute of Mathematical Statistics in The Annals of Statistics,\n2013, Vol. 41, No. 2, 401–435. This reprint differs from the original in pagination\nand typographic detail.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 2,
"total_chunks": 68,
"char_count": 329,
"word_count": 49,
"chunking_strategy": "semantic"
},
{
"chunk_id": "58439380-643d-41f0-be32-da4d6b9cb306",
"text": "financial and topic modeling [17] and for modeling contextual information\nfor object recognition in computer vision [16]. Prior works on learning latent\ntree models (e.g., [17, 23, 35]), demonstrate that latent tree models can be\nlearned efficiently in high dimensions. In other words, the number of samples required for consistent learning is much smaller than the number of\nvariables at hand. Moreover, inference in latent tree models is computationally tractable by means of simple algorithms such as belief propagation. Despite all the above advantages, the assumption of a tree structure may\nbe too restrictive.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 3,
"total_chunks": 68,
"char_count": 616,
"word_count": 94,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1f7d7aa0-2026-44a9-b6df-4b3a3959cab0",
"text": "For instance, in an analysis of the relationships between\ntopics (encoded as latent variables) and words (corresponding to observed\nvariables), a latent tree model posits that the words are generated from a\nsingle topic, while, in reality there are common words across topics. Loopy\ngraphical models are able to capture such relationships, while retaining many\nadvantages of the latent tree models. Relaxing the tree assumption leads to nontrivial challenges: in general,\nlearning these models is NP-hard [8, 28], even when there are no latent\nvariables, and developing methods for learning such fully observed models is\nitself an area of active research (e.g., [3, 27, 40]). In this paper, we consider\nstructure estimation in latent graphical models Markov on locally tree-like\ngraphs, meaning that local neighborhoods in the graph do not contain cycles. Learning such graphs has many nontrivial challenges: are there parameters\nregimes where these models can be learned consistently and efficiently? If so,\nare there practical learning algorithms? Are learning guarantees for loopy\nmodels comparable to those for latent trees? How does learning depend on\nvarious graph attributes such as node degrees, girth of the graph and so on? We provide answers to these questions in this paper. Our approach and contributions.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 4,
"total_chunks": 68,
"char_count": 1318,
"word_count": 203,
"chunking_strategy": "semantic"
},
{
"chunk_id": "661f143f-3013-47bf-a9f2-7af9a0f9f5c1",
"text": "We consider learning latent graphical Markov models on locally tree-like graphs in the regime of correlation\ndecay. In this regime, there are no long-range correlations, and the local\nstatistics converge to a tree limit. The implication of correlation decay is\nimmediately clear: we can employ the available latent tree methods to learn\n\"local\" subgraphs consistently, as long as they do not contain any cycles. However, a nontrivial challenge remains: how does one merge these estimated local subgraphs (i.e., latent trees) to obtain an overall graph estimate? Specifically, merging involves matching latent nodes across different latent\ntree estimates, and it is not clear if this can be performed in an efficient\nmanner. We employ a different philosophy for building locally tree-like graphs with\nlatent variables. We decouple the process of introducing cycles and latent\nvariables in the output model. We initialize a loopy graph consisting of only\nthe observed variables, and then iteratively add latent variables to local LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 3 neighborhoods of the graph.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 5,
"total_chunks": 68,
"char_count": 1111,
"word_count": 169,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7d257c3d-e22f-43ff-b764-2f4681317e8e",
"text": "We establish correctness of our method under\na set of natural conditions. We provide precise conditions for structural consistency of LocalCLGrouping\nunder the probably approximately correct (PAC) model of learning ([29],\npage 7), for general discrete models. We simplify these conditions for the\nIsing model, where each node is a binary random variable, to obtain better\nintuitions. We establish that for structural consistency, the number of samples is required to scale as n = Ω(θ−δη(η+1)−2min log p), where p is the number\nof observed variables, θmin is the minimum edge potential, δ is the depth\n(i.e., graph distance from a hidden node to the nearest observed nodes) and\nη is a parameter which depends on the minimum and maximum node and\nedge potentials of the Ising model (η = 1 for homogeneous models). When\nthere are no hidden variables (δ = 1), the sample complexity is strengthened\nto n = Ω(θ−2min log p), which matches with the best known sample complexity\nfor learning fully-observed Ising models [3, 27]. We also establish necessary conditions for any (deterministic) algorithm\nto recover the graph structure and establish that n = Ω(∆minρ−1 log p) samples are necessary for structural consistency, where ∆min is the minimum\ndegree and ρ is the fraction of observed nodes. This is comparable to the\nrequirement of the proposed method under uniform node sampling (i.e.,\nselecting the observed nodes uniformly), given by n = Ω(∆2maxρ−2(log p)3),\nwhere ∆max is the maximum degree in the graph. Thus, our method is competitive with respect to the lower bound on learning. Our proposed method has a number of attractive features for practical\nimplementation: the method is amenable to parallelization which makes it\nefficient on large datasets. The method provides flexibility to control the\nlength of cycles and the number of latent variables introduced in the output\nmodel. The method can incorporate penalty scores such as the Bayesian\ninformation criterion (BIC) [42] to trade-offmodel complexity and fidelity. Moreover, by controlling the cycle lengths in the output model, we can obtain models with good inference accuracy under simple algorithms such as\nloopy belief propagation (LBP). Preliminary experiments on the newsgroup\ndataset suggests that the method can discover intuitive relationships efficiently, and also compares well with the popular latent Dirichlet allocation\n(LDA) [7] in terms of topic coherence and perplexity. The classical latent class models (LCM) consists of\nmultivariate distributions with a single latent variable and the observed\nvariables are conditionally independent under each state of the latent variable [32]. Hierarchical latent class (HLC) models [15, 47, 48] generalize\nthese models by allowing multiple latent variables.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 6,
"total_chunks": 68,
"char_count": 2774,
"word_count": 430,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5843231a-c2af-4d87-a347-f02e4480ed45",
"text": "However, the proposed\nlearning algorithms are based on greedy local search in a high-dimensional space, which is computationally expensive. Moreover, the algorithms do not\nhave theoretical guarantees. Similar shortcomings also hold for expectationmaximization (EM) based approaches [22, 30].",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 7,
"total_chunks": 68,
"char_count": 291,
"word_count": 38,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3273e7d3-4626-4c3e-b4d9-3ba3e418354d",
"text": "Learning latent trees has\nbeen studied extensively before, mainly in the context of phylogenetics. See\n[21, 43] for a thorough overview. Efficient algorithms with provable performance guarantees are available (e.g., [1, 17, 19, 23]). Our proposed method\nin this paper is inspired by [17]. Works on high-dimensional graphical model selection are more recent. The approaches can be mainly classified into two groups: local approaches\n[3, 9, 27, 37] and those based on convex optimization [14, 33, 40, 41]. There is\na general agreement that the success of these methods is related to the presence of correlation decay in the model [3, 6]. This work makes the connection\nexplicit: it relates the extent of correlation decay (i.e., the convergence rate\nto the tree limit) with the learning efficiency for latent models on large girth\ngraphs. An analogous study of the effect of correlation decay for learning\nfully observed models is presented in [3]. This paper is the first work to provide provable guarantees for learning discrete latent models on loopy graphs in high dimensions (which can\nalso be easily be extended to Gaussian models; see remarks following Theorem 2). Chandrasekharan et al. [13] consider learning latent Gaussian graphical models using a convex relaxation method. However, the method cannot\nbe easily extended to discrete models. Moreover, the \"incoherence\" conditions required for the success of convex methods are hard to interpret\nand verify in general. In contrast, our conditions for success are transparent and based on the presence of correlation decay in the model. Bresler\net al. [9] considers graphical model selection with hidden variables, but proposes learning Markov graph of marginal distribution (upon marginalizing\nthe hidden variables) and then replacing the cliques in the estimated graphs\nwith hidden variables.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 8,
"total_chunks": 68,
"char_count": 1850,
"word_count": 287,
"chunking_strategy": "semantic"
},
{
"chunk_id": "534b35ff-5814-4220-be12-1069f34df68e",
"text": "Sample complexity results are not provided, and the\nmethod performs poorly in high dimensions, since it aims to estimate dense\ngraphs.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 9,
"total_chunks": 68,
"char_count": 134,
"word_count": 21,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7082033b-3d3c-4a77-bf8a-4385463df521",
"text": "A graphical model is a family of multivariate distributions which are Markov in accordance to a particular undirected graph\nG = (W,E) [31], page 32. For any distribution belonging to the model class,\na random variable Xi taking value in a set X is associated with each node\ni ∈W in the graph. We consider discrete graphical models where X is a\nfinite set. The set of edges E captures the set of conditional independence\nrelations among the random variables. We say that a set of random variables LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 5 XW := {Xi,i ∈W} with probability mass function (p.m.f.) P is Markov on\nthe graph G if it factorizes according to the cliques of G,\n(1) P(x) = exp θc(xc) −A(θ) ∀x ∈X m,\nc∈C where C is the set of cliques of G, m := |W| is the number of variables,\nand xc is the set of configurations corresponding to clique c.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 10,
"total_chunks": 68,
"char_count": 859,
"word_count": 159,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d592626d-faac-4eab-8ba9-5d9dd4b44d14",
"text": "The quantity\nA(θ) is known as the log-partition function and serves to normalize the\nprobability distribution. The functions θc are known as potential functions\nand correspond to the canonical parameters of the exponential family. A special case is the Ising model, which is the class of pairwise distributions over binary variables {−1,+1}m with probability mass function\n(p.m.f.) of the form\nX X\n(2) P(x) = exp θi,jxixj + φixi −A(θ) ∀x ∈{−1,1}m.\ne∈E i∈V We specialize some of our results to the class of Ising models. We consider a multivariate distribution belonging to the class of latent\ngraphical models in which a subset of nodes is latent or hidden. Let H ⊂W\ndenote the hidden nodes and V = W \\H denote the observed nodes. Our goal\nis to discover the presence of hidden variables XH and learn the unknown\ngraph structure G(W), given n i.i.d. samples from observed variables XV . Let p := |V | denote the number of observed nodes and m := |W| denote the\ntotal number of nodes. Tractable graph families: Girth-constrained graphs. In general, structure estimation of graphical models is NP-hard [8, 28]. We now characterize\na tractable class of models for which we can provide guarantees on graph\nestimation. We consider the family of graphs with a bound on the girth, which is the\nlength of the shortest cycle in the graph. There are many graph constructions\nwhich lead to a bound on girth. For example, the bipartite Ramanujan graph\n([18], page 107) and the random Cayley graphs [25] have bounds on the\ngirth. Recently, efficient algorithms have been proposed to generate large\ngirth graphs efficiently [5].",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 11,
"total_chunks": 68,
"char_count": 1614,
"word_count": 274,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c212cbea-c2d1-4d18-aef1-6c231d3b63ef",
"text": "Although girth-constrained graphs are locally tree-like, in general, their\nglobal structure makes them hard instances for learning. Specifically, girthconstrained graphs have a large tree-width: it is known that a graph with average degree at least ∆avg and girth at least g has a tree width as Ω( g+1(∆avg−1\n1)⌊(g−1)/2⌋) [12]. Thus learning is nontrivial for graphical Markov models on\ngirth-constrained graphs, even when there are no latent variables due to\ntheir large tree width [28]. Local convergence to a tree limit. This work establishes tractable\nlearning when the graphical model converges locally to a tree limit. A sufficient condition for the existence of such limits is the regime of correlation\ndecay,3 which refers to the property that there are no long-range correlations in the model [26, 34, 46]. This regime is also known as the uniqueness\nregime since under such an assumption, the marginal distribution at a node\nis asymptotically independent of the configuration of a growing boundary. We tailor the definition of correlation decay to node neighborhoods and\nprovide the definition below. Given a graph G = (W,E) and a distribution\nPXW |G Markov on it, and any subset A ⊂W, let PXA|G denote the marginal\ndistribution of variables in A. For some subgraph F ⊂G, let PXA|F denote\nthe marginal distribution on A obtained by setting the potentials of edges in\nG\\F to zero. Thus, PXA|F is Markov on graph F. Let N[i;G] := N(i;G)∪i\ndenote the closed neighborhood of node i in G. For any two sets A1,A2 ⊂W,\nlet dist(A1,A2) := mini∈A1,j∈A2 dist(i,j) denote the minimum graph distance.4 Let Bl(i) denote the set of nodes within graph distance l from node\ni and ∂Bl(i) denote the boundary nodes, that is, exactly at distance l from\nnode i. Let Fl(i;G) := G(Bl(i)) denote the induced subgraph on Bl(i). For\nany distributions P,Q, let ∥P −Q∥1 denote the ℓ1 norm.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 12,
"total_chunks": 68,
"char_count": 1871,
"word_count": 312,
"chunking_strategy": "semantic"
},
{
"chunk_id": "79171625-e834-4d42-9471-838d2e7e76ce",
"text": "Definition 1 (Correlation decay). A distribution PXW |G Markov on\ngraph G = (W,E) is said to exhibit correlation decay with a nonincreasing\nrate function ζ(·) > 0 if for all l ∈N, (3) ∥PXA|G −PXA|Fl(i;G)∥1 ≤ζ(dist(A,∂Bl(i))) ∀i ∈W,A ⊂Bl(i). In words, the total variation distance5 between the marginal distribution\nof a set A of a distribution Markov on G and the corresponding distribution\nMarkov on subgraph Fl(i;G) decays as a function of the graph distance to\nthe boundary. This implies that for a class of functions ζ(·), the effect of\ngraph configuration beyond l hops from any node i has a decaying effect on\nthe local marginal distributions. For the class of Ising models in (2), the regime of correlation decay can\nbe explicitly characterized, in terms of the maximum edge potential and the\nmaximum degree of the graph, and this is studied in Section 4.2. 3Technically, correlation decay can be defined in multiple ways ([34], page 520), and\nthe notion we use is the uniqueness or the weak spatial mixing condition.\n4We distinguish between the terms graph distance and information distances. The former refers to the number of edges on the shortest path connecting the two nodes on the\n(unweighted) graph, while the latter refers to the quantity in (8).\n5Recall that the total variation distance between two probability distributions P,Q on\nthe same alphabet is given by 12∥P −Q∥1.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 13,
"total_chunks": 68,
"char_count": 1390,
"word_count": 232,
"chunking_strategy": "semantic"
},
{
"chunk_id": "588832af-57e2-417b-ac7b-d4c09197d6cd",
"text": "LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 7 Background on latent tree models. We first recap the results for latent tree models which will subsequently extended to more general latent\ngraphical models. It is well known that tree-structured Markov distributions\non a tree T = (W,E) have a special form of factorization given by\nY Y PXi,j(xi,xj)\n(4) P(xW ) = PXi(xi) PXi(xi)PXj(xj). i∈W (i,j)∈T Comparing with general distributions, we note that tree distributions are\ndirectly parameterized in terms of pairwise marginal distributions on the\nedges. Similarly, a Markov distribution can be described on a rooted directed\n→ →\ntree T with root r ∈W, where the edges of T are directed away from the\nroot. Let Pa(i) denote the (unique) parent of node i ̸= r and PXi|XPa(i) denote\nthe corresponding conditional distribution. The Markov distribution is given\n(5) P(xW ) = PXr(xr) PXi|XPa(i)(xi|xPa(i)).\ni∈W,i̸=r A Markov model is said to be nonsingular [36, 45] if (a) for all e ∈ T , the\nconditional distributions satisfy 0 < |det(PXi|XPa(i))| < 1, and (b) for all i ∈\nV , PXi(x) > 0 for all x ∈X . A nonsingular Markov model on an undirected\ntree T and its directed counterpart T are equivalent [36, 45]. Note that\nnonsingularity is equivalent to positivity (i.e., bounded potential functions)\nfor Markov tree models. In particular, Ising models on trees with bounded\nnode and edge potentials are nonsingular. This is because under positivity,\nthere is positive probability for any global configuration of node states which\nimplies that the conditional probability at a node given any of its neighbors\ncannot be degenerate.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 14,
"total_chunks": 68,
"char_count": 1629,
"word_count": 269,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f4f2cb7b-b48c-4d05-ae24-65d49fa2ac3f",
"text": "Latent tree models or phylogenetic tree models are tree-structured graphical models in which a subset of nodes are hidden or latent. Our goal in this\npaper is to leverage on the techniques developed for learning latent tree\nmodels to analyze a more general class of latent graphical models. Learning latent tree models. Learning the structure of latent tree\nmodels is an extensively studied topic. A majority of structure learning\nmethods (known as distance based methods) rely on the presence of an\nadditive tree metric. The additive tree metric can be obtained by considering\nthe pairwise marginal distributions of a tree structured joint distribution. For\ninstance, Mossel [35] considers the following metric for discrete distributions\nsatisfying the nonsingular condition (6) d(i,j) := −log|det(PXi,j)| ∀i,j ∈W. By nonsingularity assumption, we have that |det(PXi,j)| > 0 for all i,j ∈W. The distance metric further simplifies for some special distributions, for\nexample, for symmetric Ising models, it is given by the negative logarithm\nof the correlation between the node pair under consideration [43]. Quartet-based methods. A popular class of learning methods are\nbased on the construction of quartets or splits (e.g., [10, 23, 35]), and various\nprocedures to merge the inferred quartets. A quartet is a structure over four\nobserved nodes, as shown in Figure 1. We now recap the classical quartet\ntest operating on any additive tree metric.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 15,
"total_chunks": 68,
"char_count": 1448,
"word_count": 225,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a17f7caa-a975-435b-ae99-c4db7d3e29eb",
"text": "The path structure refers to the\nconfiguration of paths between the given nodes. Definition 2 (Quartet or four-point condition on trees). Given an additive metric on a tree [d(i,j)]i,j∈V , the tuple of four nodes a,b,u,v ∈V has\nthe structure in Figure 1 if and only if (7) d(a,b) + d(u,v) < min(d(a,u) + d(b,v),d(b,u) + d(a,v)), and the structure in Figure 1 is denoted by Q(ab|uv).",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 16,
"total_chunks": 68,
"char_count": 382,
"word_count": 67,
"chunking_strategy": "semantic"
},
{
"chunk_id": "964174ed-099a-409a-bb78-bfca895a2125",
"text": "It is well known that the set of all quartets uniquely characterize a latent\ntree. In [23], it was shown that a subset of quartets, termed as representative\nquartets, suffices to uniquely characterize a latent tree. The set of representative quartets consists of one quartet for each edge in the latent tree with\nshortest (graph) distances between the observed nodes. We recap the recursive grouping RG(bdn(V ),\nΛ,τ) method proposed in [17] (and its refinement in [1]). The method is\nbased on a robust quartet test Quartet(bdn,Λ) given in Algorithm 1. If the\nconfidence bound is not met, a ⊥result is declared. In the first iteration of\nRG in Algorithm 2, the algorithm searches for node pairs which occur on the\nsame side of all the quartets, output by the quartet test Quartet(bdn,Λ) and\ndeclares them as siblings and introduces hidden variables. In later iterations\nof RG, sibling relationships between hidden variables are inferred through\nquartets involving their children. Finally, weak edges are merged and a tree",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 17,
"total_chunks": 68,
"char_count": 1020,
"word_count": 167,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8846aee8-c97c-4cfa-9946-23a5000a576e",
"text": "LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 9 Algorithm 1 Quartet(bdn(V ),Λ) test using distance estimates bdn(V ) :=\n{bd(i,j)}i,j∈V and confidence bound Λ. Input: Distance estimates between the observed nodes bdn(V ) :=\n{bd(i,j)}i,j∈V and confidence bound Λ. Denote (·)+ := max(·,0). Initialize set of quartets Q(V ) ←∅.\nfor {i,j,i′,j′} ∈V do\nif (e−bd(i,j) −Λ)+(e−bd(i′,j′) −Λ)+ > (e−bd(i,j′) + Λ)+(e−bd(i,j) + Λ)+ then\nDeclare Quartet: Q(V ) ←Q(ij|i′j′).\nend if\nif No quartet declared for {i,j,i′,j′} then\n⊥i,j,i′,j′ (Declare null).\nend if\nend for",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 18,
"total_chunks": 68,
"char_count": 559,
"word_count": 86,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c8c1c8e9-5cd0-407f-9837-d91df44cc803",
"text": "(and more generally a forest) is output. We later use a modified version\nof recursive grouping method as a routine in our algorithm for estimating\nlocally tree-like graphs. In the end, the neighboring nodes (at least one of\nwhich is hidden) are merged based on the threshold τ. See Section 4 for\ndetails.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 19,
"total_chunks": 68,
"char_count": 304,
"word_count": 53,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b9cdfaa7-3135-409e-bd64-d2708f4917af",
"text": "An alternative method, known as Chow–Liu\ngrouping (CLGrouping), was proposed in [17]. Although the theoretical results for CLGrouping are similar to earlier results (e.g., [23]), experiments\non both synthetic and real data sets revealed significant improvement over\nearlier methods in terms of likelihood scores and number of hidden variables\nadded. The CLGrouping method always maintains a candidate tree structure and\nprogressively adds more hidden nodes in local neighborhoods.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 20,
"total_chunks": 68,
"char_count": 480,
"word_count": 68,
"chunking_strategy": "semantic"
},
{
"chunk_id": "beb4901f-df21-4a11-bcdf-2002e7c0c7e2",
"text": "The initial\ntree structure is the minimum spanning tree (MST) over the observed nodes\nwith respect to the tree metric. The method then considers neighborhood\nsets on the MST and constructs local subtrees (using quartet based method\nor any other tree reconstruction algorithm). This local reconstruction property of CLGrouping makes it especially attractive for reconstructing girthconstrained graphs. Method and guarantees for structure estimation. Overview of algorithm. We now describe our algorithm, which we\nterm as LocalCLGrouping, for structure estimation of latent graphical Markov\nmodels on graphs with long cycles. The algorithm leverages on the Chow–\nLiu grouping algorithm developed for latent tree models [17], described in\nthe previous section. The main intuition for learning a girth-constrained Algorithm 2 RG(bdn(V ),Λ,τ) test using distance estimates bdn(V ) :=\n{bd(i,j)}i,j∈V , confidence bound Λ and threshold τ for merging nodes. Input: Distance estimates between the observed nodes bdn(V ) :=\n{bd(i,j)}i,j∈V , confidence bound Λ and threshold τ. Let C(a) denote the\nchildren of node a. Initialize A ←V , C(i) ←{i} for all i ∈V and Q(V ) ←Quartet(bdn(A),Λ). while A ̸= ∅do\nif ∃i,j ∈A s.t. for each a ∈C(i) and b ∈C(j), c,d /∈C(i) ∪C(j),\n{ac|bd,ad|bc} /∈Q(V ), that is, a,b are on same side of all such quartets in Q(V ). then\nDeclare i,j as siblings and introduce hidden node h as parent and\nC(h) ←C(i) ∪C(j). Remove i,j from A and add h to A.\nelse\nSibling relationships cannot be further inferred. Break.\nend if\nend while\nForm forest bT based on sibling and child/parent relationships.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 21,
"total_chunks": 68,
"char_count": 1606,
"word_count": 259,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4de52751-fd13-46ae-bbba-24ad88187da6",
"text": "Compute distances between any two hidden nodes as average distance\nbetween their observed children. Merge edges in bT of length less than τ and output bT . graph is based on reconstructing \"local\" parts of the graph which are acyclic\nand piecing them together. However, this approach has many challenges. First, it is not clear if the local acyclic pieces can be learned efficiently since\nit requires the presence of an additive tree metric.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 22,
"total_chunks": 68,
"char_count": 441,
"word_count": 74,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8bc4286c-bb7b-4dcf-ae59-ba6fbd351d25",
"text": "This is addressed by\nconsidering models satisfying correlation decay (see Section 2.3). A second\nand a more difficult challenge involves merging the reconstructed local latent\ntrees with provable guarantees due to the introduction of unlabeled latent\nnodes in different pieces. We circumvent this challenge by leveraging on the\nChow–Liu grouping algorithm [17] and merging the different pieces before\nintroducing the latent nodes. The algorithm is described in Algorithm 3.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 23,
"total_chunks": 68,
"char_count": 473,
"word_count": 69,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ebe0f1b2-7bb6-4960-8f49-d9855e8113cb",
"text": "Let bdn(i,j) denote the estimated distance between nodes i and j according to (6) using the empirical\ndistribution bP n computed using n samples, that is, Xi,j\n(8) bdn(i,j) := −log|det(bPXi,j)|n ∀i,j ∈V. The set of distance estimates bdn(V ) := {bdn(i,j):i,j ∈V } are input to the\nalgorithm along with a parameter r. Recall that Br(i; bdn(V )) := {j : bdn(i,j) ≤r}. LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 11 Algorithm 3 LocalCLGrouping(bdn(V ),Λ,τ,r) for graph estimation using distance estimates bdn(V ) := {bd(i,j)}i,j∈V , confidence bound Λ, threshold τ and\ndistance parameter r. Input: Distance estimates between the observed nodes bdn(V ) :=\n{bd(i,j)}i,j∈V , confidence bound Λ, threshold τ and bound r on distances\nused for local reconstruction. Let Br(v; bdn) := {u: bdn(u,v) ≤r} and let\nMST(A; bdn) denote the minimum spanning tree over A ⊂V based on\nedge weights bdn(A). Given a graph G, let Leaf(G) denote the set of nodes\nwith unit degree. Let N[i;G] denote the closed neighborhood of node i\nin graph G. RG(bdn(A),Λ,τ) represents the recursive grouping method\nfor building latent trees (see Section 3.1) over the set of nodes A using\ndistance estimates bdn(A) with confidence bound Λ and threshold τ for\nmerging nodes.\nfor v ∈V do\nTv ←MST(Br(v); bdn).\nend for\nInitialize bG, bG0 ←S v Tv.\nfor v ∈V \\ Leaf(bG0) do\nA ←N[v; bG]. S ←RG(bdn(A),Λ,τ).\nbG(A) ←S (Replace subgraph over A with S in bG)\nend for\nOutput bG. For each observed node i ∈V , the set of nodes Br(i; bdn(V )) is considered,\nand the minimum spanning tree is constructed. The graph estimate bG is\ninitialized by taking the union of all the local minimum spanning trees. The\nlatent nodes are now iteratively added by considering local neighborhoods\nof bG and using any latent tree algorithm for reconstruction (e.g., [17, 35]). Note that the running time is polynomial (in the number of nodes) as long as\npolynomial time algorithms are employed for local latent tree reconstruction.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 24,
"total_chunks": 68,
"char_count": 1969,
"word_count": 334,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9dc3eb3a-aba9-489f-b7d5-bdf1915631cc",
"text": "The proposed method is efficient for practical implementation due to the\n\"divide and conquer\" feature, that is, the local, latent tree-building operations can be parallelized to obtain speedups. For real datasets, a trade-off\nbetween model complexity and fidelity is typically enforced by optimizing\nscores such as the Bayesian information criterion (BIC) [42]. Such criteria\ncan be easily enforced through a greedy local search in each iteration of\nour method, and this limits the number of hidden variables added by our\nmethod.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 25,
"total_chunks": 68,
"char_count": 529,
"word_count": 81,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0deffced-3799-45a6-804f-2b10295b30c4",
"text": "In our experiments in Section 6, we found that this method is quick\nto implement on real and synthetic datasets. We subsequently establish the correctness of the proposed method under\na set of natural conditions. We require that the parameter r, which determines the set Br(i;d) for each node i, needs to be chosen as a function of\nthe depth δ (i.e., distance from a hidden node to its closest observed nodes)\nand girth g of the graph. In practice, the parameter r provides flexibility in\ntuning the length of cycles added to the graph estimate. When r is large\nenough, we obtain a latent tree, while for small r, the graph estimate can\ncontain many short cycles (and potentially many components). In experiments, we evaluate the performance of our method for different values of r. The tuning of parameters Λ and τ has been previously discussed in the context of learning latent trees (e.g., [17], page 1796), and we leverage on those\nresults here. For more details, see Section 6.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 26,
"total_chunks": 68,
"char_count": 982,
"word_count": 171,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f006fb2d-d42e-4b8c-91a5-79c42389f7d3",
"text": "Simple example with a single cycle. To demonstrate the steps of\nthe above proposed method, consider the simple case of a single cycle of\nlength g, where all the nodes on the cycle are hidden, and each hidden node\nhas two observed leaves, as shown in Figure 2(a). When the cycle length g is\nsufficiently large, information distances on local neighborhoods are approximately additive, as depicted in Figure 2(b). Moreover, in Figure 2(b), let\n\"*\" denote the observed node closest to each hidden node (termed as its surrogate), in terms of information distance. The minimum spanning tree over Various steps of LocalCLGrouping method on a simple cycle, where observed\nvariables are shaded. LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 13",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 27,
"total_chunks": 68,
"char_count": 742,
"word_count": 120,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8cdaf0b0-58f5-4609-baa4-628b9e2d61f7",
"text": "the set of four nodes, which are zoomed in, corresponds to a chain shown in\nFigure 2(c). Similarly, if in different local neighborhoods of observed nodes\n(based on a threshold on information distances), the surrogate relationships\nare similar (i.e., every hidden node has one of its children as its surrogate),\nthen the local MSTs are simple chains, and their merging gives rise to graph\nG in Figure 2(d). Now if a local neighborhood is selected on the merged\ngraph G, as shown in Figure 2(e), then we can discover the local latent tree\nstructure based on information distances as shown in Figure 2(f), since they\nare approximately additive. Similarly, when different neighborhoods on G\nare selected, local latent trees are discovered, and distances between nearby\nhidden nodes are computed. This way we recover the latent cycle graph in\nFigure 2(a) in the end.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 28,
"total_chunks": 68,
"char_count": 861,
"word_count": 143,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a67c282c-1bfd-47ca-97f0-aaadb9c67e96",
"text": "Results for Ising models. We first limit ourselves to providing asymptotic guarantees for the Ising model in (2), and then extend the results to\nnonasymptotic guarantees in general discrete distributions. Conditions for recovery in Ising models. We present a set of natural conditions on the graph structure and model parameters under which\nour proposed method succeeds in structure estimation. (A1) Minimum degree of latent nodes: We require that all latent nodes\nhave degree at least three.\n(A2) Distance bounds: Assume bounds on the edge potentials θ := {θi,j}\nof the Ising model (9) θmin ≤|θi,j| ≤θmax ∀(i,j) ∈G. Similarly assume bounded node potentials. We now define certain quantities which depend on the edge potential bounds. Given a distribution\nbelonging to the class of Ising models P with edge potentials θ = {θi,j}\nand node potentials φ = {φi}, consider its attractive counterpart ¯P with\nedge potentials ¯θ := {|θi,j|} and node potentials ¯φ := {|φi|}. Let φ′max :=\nmaxi∈V atanh(¯E(Xi)), where ¯E is the expectation with respect to the distribution ¯P. Let P(X1,2;{θ,φ1,φ2}) denote a distribution belonging to the\nclass of Ising models on two nodes {1,2} with edge potential θ and node\npotentials {φ1,φ2}. Our learning guarantees depend on dmin and dmax satisfying\n(10) dmin ≥−log|detP(X1,2;{θmax,φ′max,φ′max})|,\n(11) dmax ≤−log|detP(X1,2;{θmin,0,0})|,\ndmax\n(12) η := .\ndmin (A3) Correlation decay: We assume correlation decay in the Ising model\nand require that αg/2\n(13) α := ∆max tanhθmax < 1, = o(1),\nθη(η+1)+2min\nwhere ∆max is the maximum node degree, g is the girth, θmin,θmax are the\nminimum and maximum (absolute) edge potentials in the model and o(1) is\nwith respect to m, the number of nodes in the graph.6\n(A4) Girth vs. depth: The depth δ characterizes how close the latent\nnodes are to observed nodes on graph G: for each hidden node h ∈H, find a\nset of four observed nodes which form the shortest quartet with h as one of\nthe middle nodes, and consider the largest graph distance in that quartet. The depth δ is the worst-case distance over all hidden nodes.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 29,
"total_chunks": 68,
"char_count": 2087,
"word_count": 344,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0623ff1d-e7ca-4d76-836d-211a23a1762c",
"text": "We require the\nfollowing trade-offbetween the girth g and the depth δ,\n(14) −δη(η + 1) = ω(1). Further, the parameter r in our algorithm is chosen as\n(15) r > δ(η + 1)dmax + ε for some ε > 0, 4dmin −r = ω(1).\n(A1) is a natural assumption on the minimum degree of the hidden nodes\nfor identifiability and has been imposed before for latent tree models [17]. Note that the latent nodes of degree two or lower can be marginalized to\nobtain an equivalent representation of the observed statistics.\n(A2) relates certain distance bounds to bounds on edge potentials. Intuitively, dmin and dmax are bounds on information distances given by the local\ntree approximation of the loopy model, and its precise definition is given in\n(18). Note that e−dmax = Ω(θmin) and e−dmin = O(θmax).\n(A3) uses bounds on the edge potentials to impose correlation decay on\nthe model. It is natural that the sample requirement of any graph estimation\nalgorithm depends on the \"weakest\" edge characterized by the minimum\nedge potential θmin. Further, the maximum edge potential θmax characterizes\nthe presence/absence of long-range correlations in the model. Moreover, (A3)\nprescribes that the extent of correlation decay be strong enough (i.e., a small\nα and a large enough girth g) compared to the weakest edge in the model. Conditions similar to (A3) have been imposed before for graphical model\nselection in the regime of correlation decay when there are no hidden variables [3]. For instance, in [3], an upper bound is imposed on the edge potentials to limit the effect of long paths on local conditional independence tests. 6Unless otherwise noted, the notation O(·),o(·),Ω(·),ω(·) are with respect to m, the\nnumber of nodes in the graph.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 30,
"total_chunks": 68,
"char_count": 1716,
"word_count": 289,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1adfee4a-f240-4d40-9b83-aa81d319a2a5",
"text": "LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 15 A lower bound on edge potentials is needed for edges to pass the conditional\nindependence test.\n(A4) provides the trade-offbetween the girth g and the depth δ. Intuitively, the depth needs to be smaller than the girth to avoid encountering\ncycles during the process of graph reconstruction. Recall that the parameter\nr in our algorithm determines the neighborhood over which local MSTs are\nbuilt in the first step. It is chosen such that it is roughly larger than the\ndepth δ in order for all the hidden nodes to be discovered.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 31,
"total_chunks": 68,
"char_count": 584,
"word_count": 99,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4931f3ad-5138-43f6-90cc-f2c7562c2d6d",
"text": "The upper bound\non r ensures that the distortion from an additive metric is not too large. The parameters for latent tree learning routines (such as confidence bounds\nfor quartet tests) are chosen appropriately depending on dmin and dmax. Guarantees for Ising models. We now establish that the proposed\nmethod correctly estimates the graph structure of an Ising model in high\ndimensions. Recall that δ is the depth (distance from a hidden node to its\nclosest observed nodes), θmin is the minimum (absolute) edge potential and\nη = dmax is the ratio of distance bounds. dmin",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 32,
"total_chunks": 68,
"char_count": 572,
"word_count": 96,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2cad19ab-d2c7-4d9a-8fb8-882920d618a8",
"text": "Theorem 1 (Structural consistency for Ising models). Under (A1)–\n(A4), the probability that LocalCLGrouping is structurally consistent tends\nto one, when the number of samples scales as\n(16) n = Ω(θ−δη(η+1)−2min log p). See the supplementary material [4]. □ (1) For learning Ising models on locally tree-like graphs, the sample complexity is dependent both on the minimum edge potential θmin and on the\ndepth δ. Our method is efficient in high dimensions since the sample requirement is only logarithmic in the number of nodes p.\n(2) Dependence on maximum degree: For the correlation decay to hold\n(A3), we require θmin ≤θmax = Θ(1/∆max). This implies that the sample\ncomplexity is at least n = Ω(∆δη(η+1)+2max log p).\n(3) Comparison with fully observed models: In the special case when\nall the nodes are observed (δ = 1) and the graph is locally tree-like, we\nstrengthen the results for our method and establish that the sample complexity for graph estimation is n = Ω(θ−2min log p). This matches the best known\nsample complexity for learning fully observed Ising models [3, 27]. The\nsample complexity result holds for a modified version of LocalCLGrouping:\nthreshold r is applied to the information distances at each node and local MSTs are formed as before. The threshold r can be chosen as r = dmax + ε,\nfor some ε > 0.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 33,
"total_chunks": 68,
"char_count": 1323,
"word_count": 225,
"chunking_strategy": "semantic"
},
{
"chunk_id": "74e05580-33d6-46d2-9098-86a87acd40fb",
"text": "The graph estimate is obtained as the union of local MSTs\nand local latent tree routines are not implemented in this case. We prove\nan improved sample complexity in this special case which matches the best\nknown bounds.\n(4) Comparison with learning latent trees: Our method is an extension of\nlatent tree methods for learning locally tree-like graphs. The sample complexity of our method matches the sample requirements for learning general\nlatent tree models [17, 23, 35]. Thus, we establish that learning locally treelike graphs is akin to learning latent trees in the regime of correlation decay. Extension to general discrete models. We now extend the results\nto general discrete models and provide nonasymptotic sample requirement\nguarantees for success of our proposed method. Local tree approximation. We first define the notion of a local tree metric dtree(V ) computed by limiting the model to acyclic neighborhood subgraphs between the respective node pairs. Given a graph G = (W,E), let\ntree(i,j;G) := G(Bl(i) ∪Bl(j)), for l = ⌊g/2⌋−1, denote the induced subgraph on Bl(i)∪Bl(j), where g is the girth of the graph. Recall that Bl(i;G)\ndenotes the set of nodes within graph distance l from i in G. When l < g/2−1\nno cycles are encountered, and thus the induced subgraph tree(i,j;G) is\nacyclic. Recall that PXi,j|G denotes the pairwise marginal distribution between i and j induced by the distribution P(xW ) Markov on graph G. Let\nPXi,j| tree(i,j) denote the pairwise marginal distribution between i and j induced by considering only the subgraph tree(i,j;G) ⊂G. d(i,j;tree) := −log|detPXi,j| tree(i,j)|,\n(17)\nd(i,j;G) := −log|detPXi,j|G|. Denote dtree(V ) := {d(i,j;tree):i,j ∈V } and d(V ) := {d(i,j;G):i,j ∈V }.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 34,
"total_chunks": 68,
"char_count": 1724,
"word_count": 277,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b7150b66-92c6-4384-ae9e-946e284125d8",
"text": "The learner has access only to the empirical versions bd(V ) of the distances d(V ), and thus the learner cannot estimate dtree(V ). However, we\nuse dtree(V ) to characterize the performance of our algorithm, and we list\nthe relevant assumptions below. Conditions on the model parameters. (B1) Minimum degree: The minimum degree of any hidden node in the\ngraph is three.\n(B2) Bounds on local tree metric: Given a distribution PXW |G Markov on\ngraph G, the pairwise marginal distribution PXi,j| tree(i,j) between any two\nneighbors (i,j) ∈G are nonsingular and the distances d(i,j;tree) := −log|detPXi,j| tree(i,j)| LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 17 satisfy\ndmax\n(18) 0 < dmin ≤d(i,j;tree) ≤dmax < ∞ ∀(i,j) ∈G, η :=\ndmin\nfor suitable parameters dmin and dmax.\n(B3) Regime of correlation decay: The pairwise statistics of the distribution converge locally to a tree limit according to Definition 1 with function\nζ(·) in (3) satisfying\ng r υ\n(19) 0 ≤ζ − −1 < ,\n2 dmin |X|2\nwhere g is the girth, r is the distance bound parameter in LocalCLGrouping,\n|X| is the dimension of each variable, dmin,dmax are the distance bounds in\n(18) and υ := min dmin,0.5e−r(edmin −1),e−0.5dmax(r/dmin+2), (20)\n4dmin −r,r −dmaxδ(η + 1) .\n(B4) Confidence bound for quartet test: The confidence bound in\nQuartet(bd,Λ) routine in Algorithm 1 is chosen as\nr(21) Λ = exp −dmax + 2 .\n2 dmin\n(B5) Threshold for merging nodes: The threshold τ in RG(bd,Λ,τ) routine\nin Algorithm 2 is chosen as\ndmin g(22) τ = −|X|2ζ −1 > 0,\n2 2\nwhere |X| is the dimension of the variable at each node, and ζ(·) is the\ncorrelation decay function according to (3). (B1) is a natural assumption on the minimum degree of the hidden nodes\nfor identifiability, which is also needed for latent trees. Assumption (B2)\nstates that every edge has bounded distances under local tree approximations. Recall that in the special case of Ising models, this can be expressed via\nbounds on edge potentials. Assumption (B3) on correlation decay imposes\na constraint on the rate function ζ(·), in terms of the girth of the graph g,\nthe distance threshold r used by the proposed method, the distance bounds\ndmin and dmax and depth δ. (B3) implies that we require that the depth δ\nsatisfies\n(23) 4dmin > δ(η + 1)dmax. Similarly, (B3) imposes constraints on the parameter r used by the proposed\nalgorithm for building local minimum spanning trees in the first step. (B3) implies that r needs to be chosen as\n(24) δ(η + 1)dmax < r < 4dmin −r.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 36,
"total_chunks": 68,
"char_count": 2492,
"word_count": 434,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5d72f19a-f421-4477-95b0-7cb9169eec5c",
"text": "Intuitively, the above constraint implies that r is relatively small compared\nto the girth of the graph and large enough for every hidden node to be\ndiscovered. This enables the proposed algorithm to correct reconstruct latent\ntrees locally. The confidence bound constraint in (B4) is based on the concentration\nbounds for the empirical distances. The threshold for merging nodes in (B5)\nensures that spurious hidden nodes are not added. These conditions are\ninherited from latent tree algorithms. Guarantees for the proposed method. We now establish that the\nLocalCLGrouping algorithm is structurally consistent under the above conditions. Theorem 2 (Structural consistency of LocalCLGrouping). Under assumptions (B1)–(B5), the LocalCLGrouping algorithm is structurally consistent with probability at least 1 −κ, for any κ > 0, when the sample size n\nsatisfies\n2|X|2 κ\n(25) n > 4log p + |X|log 2 −log ,\n(υ −|X|2ζ(g/2 −r/dmin −1))2 7\nwhere υ is given by (20). (1) We provide PAC guarantees for reconstructing latent graphical models on girth-constrained graphs. The conditions for success imposed on the\ngirth of the graph are relatively mild. We require that the girth be roughly\nlarger than the depth and that the correlation decay function ζ(·) be sufficiently strong (B3). Thus, learning girth-constrained graphs is akin to learning latent tree models (in terms of sample and computational complexities)\nunder a wide range of conditions.\n(2) One notable additional condition required for learning girth-constrained graphs in contrast to latent trees is the requirement of correlation\ndecay (B3).",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 37,
"total_chunks": 68,
"char_count": 1599,
"word_count": 246,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e52be60e-e6ea-4092-a1b8-a21a08366c88",
"text": "However, we note that this is only a sufficient condition, and\nnot necessary for learnability. For instance, the result in [20] establishes that\nthe pairwise statistics converge locally to a tree limit for all attractive Ising\nmodels with strictly positive node potentials, but without any additional\nconstraints on the parameters. Our results and analysis hold in such scenarios since we only require local convergence to a tree metric.\n(3) The results above are applicable for discrete models but can be extended to Gaussian models using the notion of walk-summability in place of\ncorrelation decay according to (3) (see [2]) and the negative logarithm of the LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 19 correlation coefficient as the distance metric; see [17]. The results can also\nbe extended to more general linear models such as multivariate Gaussian\nmodels, Gaussian mixtures and so on, along the lines of [1]. The detailed proof is given in the supplementary material [4]. It consists of the following main steps:",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 38,
"total_chunks": 68,
"char_count": 1034,
"word_count": 164,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6e1a1258-2ff7-4606-b9ca-33e6f7649d86",
"text": "(1) We first prove correctness of LocalCLGrouping under the tree limit\n[i.e., distances dtree(V ) := {d(i,j;tree)}i,j∈V ] and then show sample-based\nconsistency. The latter is based on concentration bounds, along the lines\nof analysis for latent tree models [23, 35], with an additional distortion\nintroduced due to the presence of a loopy graph.\n(2) We now briefly describe the proof establishing the correctness of\nLocalCLGrouping algorithm under dtree in girth-constrained graphs. Intuitively, the distances d(i,j;tree) correspond to a tree metric when the graph\ndistance dist(i,j) < g/2 −1, where g is the girth. Since LocalCLGrouping infers latent trees only locally, it avoids running into cycles and thus correctly\nreconstructs the local latent trees. The initialization step in LocalCLGrouping\ncorresponds to the correct merge of this local latent trees under the assumptions on parameter r in (24), and the correctness of LocalCLGrouping\nis established. □ Guarantees under uniform sampling. We have so far given guarantees for graph reconstruction, given an arbitrary set of observed nodes in\nthe graph. We now specialize the results to the case when there is a uniform\nsampling of nodes and provide learning guarantees. This analysis provides\nintuitions on the relationship between the fraction of sampled nodes and the\nresulting learning performance. Consider an ensemble of graphs on m nodes with girth at least g and\nminimum degree ∆min ≥3 and maximum degree ∆max. Let ρ := mp denote\nthe uniform sampling probability for selecting observed nodes. We have the\nfollowing result on the depth δ. Define a constant ε0 > 0 as\n−ρ)(∆min−1)g/2)(26) ε0 = −log(4m∆max(1 .\nlog m",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 39,
"total_chunks": 68,
"char_count": 1679,
"word_count": 264,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6e9b6367-7e85-4b3a-956f-67215a1fe2af",
"text": "Lemma 1 (Depth under uniform sampling). Given uniform sampling\nprobability of ρ, for any ε ≤max(0,ε0),\n1 log(4m1+ε∆max)(27) δ < log w.p. ≥1 −m−ε.\nlog(∆min −1) |log(1 −ρ)| The proof is by straightforward arguments on binomial random\nvariables and the union bound. See the supplementary material [4]. □ (1) Assuming that the girth satisfies g > 2δ(1+ dmax/dmin) w.h.p., when\nthe sampling probability and the degrees are both constant, then ρ = Θ(1), ∆min,∆max = O(1) ⇒δ = O(log log m) ⇒n = Ω(poly(log m)), where poly(log m) refers to a polylogarithmic dependence in m. On the other\nhand, with vanishing sampling probability, for β ∈[0,1), we have\nρ = Θ(mβ−1), ∆min,∆max = O(1) ⇒δ = O(log m) ⇒n = Ω(poly(m)), (2) Recall that for Ising models, the best-case sample complexity of\nLocalCLGrouping for structural consistency [when η = 1 and θmin = θmax =\nΘ(1/∆max)] scales as\nn = Ω(∆2(δ+1)max log p). Thus, under uniform sampling, the sample complexity required for consistency scales as",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 40,
"total_chunks": 68,
"char_count": 980,
"word_count": 165,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7004948f-d972-456b-80cc-06d3f7f9ca02",
"text": "4log ∆max/log(∆min−1) log p\nn = Ω ∆2max log p . |log(1 −ρ)| For the special case when the graph is regular (∆min = ∆max), this reduces\n(28) n = Ω(∆2maxρ−2(log p)3). Necessary conditions for graph estimation. We have so far provided\nsufficient conditions for recovering latent graphical Markov models on girthconstrained graphs. We now provide necessary conditions on the number of\nsamples required by any algorithm to reconstruct the graph. Let bGn :(X |V |)n →\nGm denote any deterministic graph estimator using n i.i.d. samples from\nnode set V , and Gm is the set of all possible graphs on m nodes. We first define the notion of the graph edit distance based on inexact\ngraph matching [11]. Let G, bG be two graphs with common labeled node\nset V and unlabeled node sets U and bU. Without loss of generality, let\nb be|U| ≥|bU| and add |U| −|bU| number of isolated nodes to bG. Let AG,A G\nthe resulting adjacency matrices. Then the edit distance between G, bG is\ndefined as b dist(bG,G;V ) := min ∥A −π(AG)∥1, G π\nwhere π is any permutation on the unlabeled nodes while keeping the labeled\nnode set V fixed. LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 21",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 41,
"total_chunks": 68,
"char_count": 1163,
"word_count": 208,
"chunking_strategy": "semantic"
},
{
"chunk_id": "097ef933-b608-44f9-9423-b01b0fb4c9bc",
"text": "In other words, the edit distance is the minimum number of entries that\nare different in A b and in any permutation of AG over the unlabeled nodes. G\nIn our context, the labeled nodes correspond to the observed nodes V while\nthe unlabeled nodes correspond to latent nodes H. We now provide necessary\nconditions for graph reconstruction up to certain edit distance. Theorem 3 (Necessary condition). For any deterministic estimator bGm :\n(X mβ)n 7→Gm based on n i.i.d. samples from mβ observed nodes β ∈[0,1]\nof a latent graphical Markov model on graph Gm on m nodes with girth g,\nminimum degree ∆min and maximum degree ∆max, for all ε > 0, we have\n|X|nmβm(2ε+1)m3εm\n(29) P[dist(bGm,Gm;V ) > εm] ≥1 − , m0.5∆minm(m −g∆gmax)0.5∆minm\nunder any sampling process used to choose the observed nodes. The proof is based on counting arguments. See the supplementary material [4] for details. □ (1) The above result states that roughly\n∆min\n(30) n = Ω(∆minm1−β log m) = Ω log p\nsamples are required for structural consistency. Thus, when β = 1 (constant\nfraction of observed nodes), logarithmic number of samples are necessary\nwhile when β < 1 (vanishing fraction of observed nodes), polynomial number\nof samples are necessary for reconstruction. From (28), recall that for Ising\nmodels, under uniform sampling of observed nodes, the best-case sample\ncomplexity of LocalCLGrouping [for homogeneous models on regular graphs\nwith degree ∆and θmin = θmax = Θ(1/∆)] scales as\nn = Ω(∆2ρ−2(log p)3), and thus nearly matches the lower bound on sample complexity in (30).",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 42,
"total_chunks": 68,
"char_count": 1552,
"word_count": 261,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0dc90f61-77ab-44c2-b936-9d1a8eb1a728",
"text": "In this section we present experimental results on real\nand synthetic data. We evaluate performance in terms of perplexity, predictive perplexity and topic coherence, used frequently in topic modeling. In addition, we also study trade-offbetween model complexity and data fitting through the Bayesian information criterion (BIC) [42]. Experiments are\nconducted using the 20-newsgroup data set, monthly stock returns from the\nS&P 100 companies and synthetic data. The datasets, software code and\nresults are available at http://newport.eecs.uci.edu/anandkumar.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 43,
"total_chunks": 68,
"char_count": 559,
"word_count": 76,
"chunking_strategy": "semantic"
},
{
"chunk_id": "36b5eed1-0f55-4bf3-8f2d-3a41acb4959b",
"text": "We generate samples from an\nIsing model Markov on a cycle (see Figure 2) with a fixed depth δ = 1, a fixed latent node degree ∆= 4 and different girths g = 10,20,30,... ,100. The node potentials are kept at zero, while the edge potentials are chosen\nrandomly in the range [0.05,0.2]. This ensures that the model remains in the\nregime of correlation decay since the critical potential θ∗= atanh(∆−1) =\n0.2554 > 0.2.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 44,
"total_chunks": 68,
"char_count": 414,
"word_count": 75,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1e29eb29-0ffc-46a7-b6c7-f10147fb90c9",
"text": "We employ latent graphical models for topic modeling,\nthat is, modeling the relationships between various words co-occurring in\ndocuments. Each hidden variable in the model can be thought of as representing a topic, and topics and words in a document are drawn jointly from\nthe graphical model. For a latent tree graphical model, topics and words are\nconstrained to form a tree, while loopy models relax this assumption. We\nconsider n = 16,242 binary samples of p = 100 keywords selected from the 20\nnewsgroup data. Each binary sample indicates the appearance of the given\nwords in each posting. These samples are divided in to two equal groups,\ntraining and test sets for learning and testing purposes. We also employ latent graphical models for financial modeling\nand in particular, for estimating the dependencies between the stock trends\nof different companies. The data set consists of monthly stock returns of p =\n84 companies7 listed in S&P 100 index from 1990 to 2007.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 45,
"total_chunks": 68,
"char_count": 976,
"word_count": 162,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a561bdd1-a43c-4da5-9365-e7b3514fa369",
"text": "Experiments with\nthis dataset allows us to demonstrate the performance of our algorithm on\ndata using a Gaussian graphical model. The Gaussian model is a simplifying\nassumption but reveals interesting relationships between the companies. We\nnote that more sophisticated kernel models can indeed be used in place of\nthe Gaussian approximation, for example, [44]. This allows us to trade-offmodel complexity and data fitting. In addition,\nwe obtain better generalization by avoiding overfitting. Note that our proposed method only deals with structure estimation and we use expectation\nmaximization (EM) for parameter estimation. For the newsgroup data we\ncompare the proposed method with the LDA model.8\nImplementation. The above method is implemented in MATLAB. We used\nthe modules for LBP, made available with UGM9 package. The LDA models\nare learned using the lda package.10\nThreshold selection r for our method.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 46,
"total_chunks": 68,
"char_count": 914,
"word_count": 138,
"chunking_strategy": "semantic"
},
{
"chunk_id": "35c705b7-2986-4bf0-b827-b45509effcf4",
"text": "Recall that the parameter r in our\nmethod controls the size of neighborhoods over which the local MSTs are\nconstructed in the first step of our method. We earlier presented ranges of r,\nwhere recovery of the loopy structure is theoretically guaranteed (w.h.p.). However, in practice, this range is unknown, since the model parameters",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 47,
"total_chunks": 68,
"char_count": 333,
"word_count": 54,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2bc7879e-d95d-46a6-b2d9-97624861a1df",
"text": "7The 16 companies added after 1990 are dropped from the list of 100 companies listed\nin S&P 100 stock index for this analysis.\n8Typically, LDA models the counts of different words in documents. Here, since we have\nbinary data, we consider a binary LDA model where the observed variables are binary.\n9These codes can be downloaded from UGM.html UGM.html.\n10http://chasen.org/~daiti-m/dist/lda/. LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 23 are unknown to the learner, and also since there is no ground truth with\nrespect to real datasets. Here, we present intuitive criterion for selecting the\nthreshold based on the BIC score. We choose the range for threshold r as\n(31) rmax := max d(i,j), rmin := max min d(i,j),\n(i,j)∈V ×V j∈V i∈V",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 48,
"total_chunks": 68,
"char_count": 745,
"word_count": 122,
"chunking_strategy": "semantic"
},
{
"chunk_id": "71bb1293-0cd8-4c5a-83a2-9c19c0c51e9e",
"text": "thereby disallowing disconnected components in the output graph. Note that\nif we choose r ≥rmax, then the output is a latent tree. In our experiments,\nwe choose one value above rmax to find a reference tree model and compare\nit with other outcomes. For the 20 newsgroup dataset, we find that rmin =\n2.3678 and rmax = 12.2692. Therefore, we choose r ∈{3,5,7,9,11,13} for\nour experiments on newsgroup data. For the monthly stock returns data,\nrmin = 1.0337 and rmax = 8.1172, and we choose r from 1.1 to 8.2.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 49,
"total_chunks": 68,
"char_count": 506,
"word_count": 90,
"chunking_strategy": "semantic"
},
{
"chunk_id": "747f3d0a-d7c6-4ef3-905d-64109714cde2",
"text": "The\ntuning of parameters Λ and τ has been previously discussed in the context\nof learning latent trees (e.g., [17], page 1796), and we leverage on those\nresults here. Performance evaluation. We evaluate performance based on the test perplexity [38] given by\n\" #\n(32) Perp-LL := exp −1 log P(xtest(k)) ,\nk=1\nwhere n is the number of test samples and p is the number of observed\nvariables (i.e., words). Thus the perplexity is monotonically decreasing in\nthe test likelihood and a lower perplexity indicates a better generalization\nperformance. Along the lines of (32), we also evaluate the predictive perplexity [7]\n\" #\n(33) Pred-Perp-LL := exp −1 log P(xtestpred(k)|xtestobs (k)), np\nk=1\nwhere a subset of word occurrences xtestobs is observed in test data, and the\nperformance of predicting the rest of words is evaluated. In our experiments,\nwe randomly select half the words in test samples. We also consider regularized versions of perplexity that capture trade-off\nbetween model complexity and likelihood, given by\n(34) Perp-BIC := exp −1 BIC(xtest) ,\nwhere the BIC score [42] is defined as\n(35) BIC(xtest) := log P(xtest(k)) −0.5(df)log n, k=1\nwhere df is the degrees of freedom in the model. For a graphical model, we set\ndfGM := m + |E|, where m is the total number of variables (both observed\nand hidden), and |E| is the number of edges in the model. model, we set dfLDA := (p(m −p) −1), where p is the number of observed\nvariables (i.e., words) and m −p is the number of hidden variables (i.e.,\ntopics). This is because a LDA model is parameterized by a p × (m −p)\ntopic probability matrix and a (m −p)-length Dirichlet prior. Thus, the\nBIC perplexity in (34) is monotonically decreasing in the BIC score, and a\nlower BIC perplexity indicates better trade-offbetween model complexity\nand data fitting. However, the likelihood and BIC score in (32) and (34)\nare not tractable for exact evaluation in general graphical models since they\ninvolve the partition function. We employ loopy belief propagation (LBP)\nto evaluate them.11 Note that it is exact on a tree model and approximate\nfor loopy models. Along the lines of predictive perplexity in (33), we also\nconsider its regularized version\n(36) Pred-Perp-BIC := exp −1 BIC(xtestpred|xtestobs ) , np",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 50,
"total_chunks": 68,
"char_count": 2259,
"word_count": 382,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2a91f089-5686-434f-9ce0-f65236b58eb3",
"text": "where the conditional BIC score is given by (37) BIC(xtestpred|xtestobs ) := log P(xtestpred(k)|xtestobs (k)) −0.5(df)log n.\nk=1 In addition, we also evaluate topic coherence, frequently considered in\ntopic modeling. It is based on the average pointwise mutual information\n(PMI) score\nX X 1\nPMI := PMI(Xi;Xj),\n45|H|\nh∈H i,j∈A(h)\ni<j\n(38)\nP(Xi = 1,Xj = 1)\nPMI(Xi;Xj) := log P(Xi = 1)P(Xj = 1),",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 51,
"total_chunks": 68,
"char_count": 392,
"word_count": 65,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4d382226-3fc5-4178-bf1c-b03857031c7e",
"text": "where the set A(h) represents the \"top-10\" words associated with topic\nh ∈H. The number of such word pairs for each topic is ( 10 ) = 45, and is used 2\nfor normalization. In [39], it is found that the PMI scores are a good measure\nof human evaluated topic coherence when it is computed using an external\ncorpus. It is also observed that using a related external corpus gives a high\nPMI. Hence, in our experiments, we choose a corpus containing news articles\nfrom the NYT articles bag-of-words dataset. This dataset has a vocabulary\nof 102,660 words from 300,000 separate articles [24].",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 52,
"total_chunks": 68,
"char_count": 585,
"word_count": 104,
"chunking_strategy": "semantic"
},
{
"chunk_id": "67bc53b1-6fbc-4b9b-9336-ae30485d4ed4",
"text": "11The likelihood is evaluated using P(xV ) = PP(xH(xV |xV∪H)), where P(xH|xV ) and P(xV ∪H)\nare computed using LBP, which is exact for trees. The above expression holds for any\nconfiguration of hidden variables xH, however we use the most likely hidden state to\navoid numerical issues. LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 25 Results for synthetic data with girth g = 10 using the proposed method. top 10 words for each topic are selected based on the topic probability vector. For latent graphical models, we use the criterion of information distances on\nthe learned model to select the 10 nearest words for each topic.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 53,
"total_chunks": 68,
"char_count": 637,
"word_count": 107,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0c5f22c4-9d4d-4878-8216-cfea1d0fdeb8",
"text": "Experimental results. Results for synthetic data. We observe that our method outputs graphs\nwith a similar number of latent variables as the ground truth when r is\nchosen close to the bound rmax, defined in (31). On the other hand, lower\nvalues of r lead to more cycles and hidden variables in the output graph. The\nnormalized BIC scores (normalized with respect to n and p) of the loopy\ngraphs improve with the number of samples n, as shown in Figure 3(b). This is expected since the data becomes less noisy with more samples. Figure 3(b) shows an overall improvement in the normalized BIC score with\nincreasing number of samples n for different thresholds r. Figure 3(b) shows\nthe variation of normalized BIC scores for graphs learned using thresholds\nr = 4 to 9 with girth g = 10. We observe that the normalized BIC score\ndecreases for the lowest threshold (r = 4), where the output graph shows a\nsignificant increase in latent nodes and edges, resulting in overfitting, and\nhigher thresholds have better BIC. However, once the threshold results in a\ntree model, the BIC degrades since the cycles are no longer present.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 54,
"total_chunks": 68,
"char_count": 1122,
"word_count": 195,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5dfd62b5-be21-44dc-af31-37a561989281",
"text": "Graph structure for newsgroup data. We employ our method to learn the\ngraph structures under different thresholds r ∈{3,5,7,9,11,13} on newsgroup data, which controls the length of cycles. At r = 13 as shown in\nFigure 4, we obtain a latent tree, and for r ∈{3,5,7,9}, we obtain loopy\nmodels. The first long cycle appears at r = 9 shown in Figure 5. At r = 7,\nwe find a combination of short and long cycles. We find that models with\ncycles are more effective in discovering intuitive relationships.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 55,
"total_chunks": 68,
"char_count": 497,
"word_count": 89,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ba16b99d-1333-453d-96c1-74e03f345edd",
"text": "data.\nnewsgroup\nRegLocalCLGrouping\nwith using\nLearned\nGraph\nTree LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 27",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 56,
"total_chunks": 68,
"char_count": 121,
"word_count": 16,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e683a361-b404-48cb-a87e-d0a15b12c92e",
"text": "data.\nnewsgroup\nRegLocalCLGrouping\nwith using\nLearned\nGraph\nLoopy Table 1\nComparison of proposed method under different thresholds (r) with LDA under different\nnumber of topics (i.e., number of hidden variables) on 20 newsgroup data.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 57,
"total_chunks": 68,
"char_count": 233,
"word_count": 34,
"chunking_strategy": "semantic"
},
{
"chunk_id": "169d128c-f5bc-4346-821d-9ee12303a63e",
"text": "For definition\nof perplexity and predictive perplexity based on test likelihood and BIC scores, and PMI,\nsee (32), (33), (34), (36) and (38) Method r Hidden Edges PMI Perp-LL Perp-BIC Pred-Perp-LL Pred-Perp-BIC Proposed 3 55 265 0.2638 1.1533 1.1560 1.0695 1.0720\nProposed 5 39 293 0.4875 1.1567 1.1594 1.0424 1.0448\nProposed 7 32 183 0.4313 1.1498 1.1518 1.0664 1.0682\nProposed 9 24 129 0.6037 1.1543 1.1560 1.0780 1.0795\nProposed 11 26 125 0.4585 1.1555 1.1571 1.0787 1.0802\nProposed 13 24 123 0.4289 1.1560 1.1576 1.0788 1.0803\nLDA NA 10 NA 0.2921 1.1480 1.1544 1.1623 1.1656\nLDA NA 20 NA 0.1919 1.1348 1.1474 1.1572 1.1638\nLDA NA 30 NA 0.1653 1.1421 1.1612 1.1616 1.1715\nLDA NA 40 NA 0.1470 1.1494 1.1752 1.1634 1.1767 in the latent tree (r = 13), the link between \"computer\" and \"software\" is\nmissing due to the tree constraint, but is discovered when r ≤9. Moreover,\nwe see that common words across different topics tend to connect the local\nsubgraphs. For instance, the word \"program\" is used in the context of both\nspace program and computer programs. Similarly, the word \"earth\" is used\nboth in the context of religion and space exploration.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 58,
"total_chunks": 68,
"char_count": 1150,
"word_count": 195,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e8fbf578-9dec-4c1b-8cce-7f84ad2514d0",
"text": "Perplexity and topic coherence for newsgroup data. In Table 1, we present\nresults under our method and under LDA modeling on newsgroup data. For the LDA model, we vary the number of hidden variables (i.e., topics)\nas {10,20,30,40}. In contrast, our method is designed to optimize for the\nnumber of hidden variables, and does not need this input. We note that\nour method is competitive in terms of both predictive perplexity and topic\ncoherence. We find that the topic coherence (i.e., PMI) for our method\nis optimal at r = 9, where the graph has a single long cycle and a few\nshort cycles. Intuitively, this model is able to discover more relationships\nbetween words, which the latent tree (r = 13) is unable to do so. On the other\nhand, for r < 9, topic coherence is degraded which suggests that adding too\nmany cycles is counterproductive. However, the model at r = 5 performs\nbetter in terms of predictive perplexity indicating that it is able to use\nevidence from more observed words for prediction on test data. Moreover,\nall of our latent graphical models outperform the LDA models in terms of\npredictive perplexity. The top 10 topic words for selected topics are given\nfor our method at (r = 9) and for the LDA model (with 10 topics) are given\nin Tables 2 and 3. Graph structure for stock market data. The outcome of applying the proposed algorithm to stock market data is presented in Table 4. LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 29 Table 2\nTop 10 topic words from selected topics in loopy graphical model\nwith threshold r = 9, the topic number corresponds to the labels\nof hidden variables in the loopy graph shown in Figure 5",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 59,
"total_chunks": 68,
"char_count": 1653,
"word_count": 293,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2110dbf5-aacd-4e92-9a52-390c14c7b37c",
"text": "Topic 16 Topic 18 Topic 12 Topic 1 Topic 8 lunar disk card god software\nmoon drive video jesus pc\norbit dos windows bible computer\nsolar memory driver christian system\nmission windows graphics religion dos\nsatellite pc dos earth windows\nearth software version question disk\nshuttle scsi ftp fact science\nmars computer pc jews drive\nspace system disk evidence university that the number of edges and hidden variables remain fairly constant over a\nlarge range of thresholds. Specifically for r ∈[5.9,6.7] ∪[6.8,7.7], we obtain\nthe same graph structure (for r > rmax, we obtain a tree). Another general\ntrend observed is the improvement of the BIC score as the threshold decreases up till a certain point. The graphs learned using r = 5,7.7 and 8.2\nare shown in Figures 6, 7 and 8.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 60,
"total_chunks": 68,
"char_count": 778,
"word_count": 133,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6faa4fec-3a99-4e7a-8079-c648956f1314",
"text": "Interesting connections between companies\nemerge. The latent tree structure in Figure 8 captures many key relationships. In particular, the S&P index node has a high degree since it captures\nthe overall trend of various companies. Companies in similar sectors and\ndivisions are grouped together. For instance, retail stores such as \"Target,\"\n\"Walmart,\" \"CVS\" and \"Home Depot\" are grouped together. However, additional relationships emerge as the threshold is decreased and cycles are Table 3\nTop 10 topic words corresponding to selected topics from the LDA model with 10 topics Topic 4 Topic 8 Topic 7 Topic 6 Topic 5 Space windows card god drive\nnasa files graphics world states\ninsurance dos video fact research\nearth format driver christian disk\nmoon ftp windows jesus university\norbit program computer religion mac\nmission software pc bible scsi\nlaunch win version evidence computer\ngun version software human system\nshuttle pc system question power",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 61,
"total_chunks": 68,
"char_count": 953,
"word_count": 148,
"chunking_strategy": "semantic"
},
{
"chunk_id": "807d8539-f2c2-4d4e-b5cf-7d68bc4adfa4",
"text": "Table 4\nComparison of proposed method under different thresholds (r) on Stock data\nusing the proposed method. For definition of perplexity based on test likelihood\nand BIC scores; see (32) and (34) r Hidden Edges Perp-LL Perp-BIC 2.7 35 154 1.9498 2.0296\n3.9 39 139 2.0200 2.0993\n4.9 35 129 2.0210 2.0960\n5 36 131 2.0169 2.0927\n6.7 26 111 2.0344 2.1016\n7.7 26 111 2.0353 2.1025\n8.2 26 110 2.0405 2.1076",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 62,
"total_chunks": 68,
"char_count": 402,
"word_count": 72,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5f448480-8651-4a39-9d77-ae2dc9520506",
"text": "We observe that the first cycle that is added connects the various\noil companies which suggests strong interdependencies and influence on the\nS&P 100 index. In addition, more cycles emerge when the threshold is decreased further. For instance, in Figure 6, we find a cycle connecting aviation\ncompany \"Boeing\" with \"Honeywell\" which is in the aviation industry, but\nalso additionally is in the chemical industry and connects to companies such\nas \"Dow Chemicals.\" Thus as in newsgroup data, we find that companies in\nmultiple categories lead to cycles in the underlying graph.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 63,
"total_chunks": 68,
"char_count": 575,
"word_count": 92,
"chunking_strategy": "semantic"
},
{
"chunk_id": "824a9fb6-2970-499c-9b47-7e60b736edd1",
"text": "Edge density vs. threshold r. We now study the edge density (i.e., number of edges) in the initialization step of our method as a function of the\nthreshold r for both newsgroup and stock data. Recall that the initialization\nstep involves building a loopy graph on observed variables (and no hidden\nvariables). The edge density in this step is indicative of the number of cycles added to the ultimate latent model. We observe that the graphs become\ndenser as r is reduced from rmax. However, when r is very small, the number\nof edges decreases since the nodes have sparser neighborhoods.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 64,
"total_chunks": 68,
"char_count": 586,
"word_count": 101,
"chunking_strategy": "semantic"
},
{
"chunk_id": "017c418a-2a77-4e5f-923c-ec9716aa0536",
"text": "This trend is\nseen in both Figures 9(a) and 9(b) which show the variation for newsgroup\nand stock data. For the newsgroup data, the graph density peaks at r = 5,\nwhich also achieves the highest predictive perplexity; see Table 1. Thus, we\nsee a direct relationship between the edge density and the corresponding\npredictive perplexity in the learned model. Intuitively, this is because as the\nnumber of edges increases, prediction at any node involves more evidence. However, as the threshold r is reduced further, graphs become less denser,\nand there is also a corresponding degradation in the predictive perplexity.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 65,
"total_chunks": 68,
"char_count": 616,
"word_count": 100,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8646878d-653c-458c-b089-c9b7b1988e01",
"text": "The above experiments confirm the effectiveness of our approach for discovering hidden topics and are in line with the theoretical guarantees established earlier in the paper. Our analysis reveals that a large class of loopy\ngraphical models with latent variables can be learned efficiently in different\ndomains. LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 31 data.\nreturn\nstock\nmonthly\nS&P\nLocalCLGrouping\nwith using\nLearned\nGraph\nLoopy data.\nreturn\nstock\nmonthly\nS&P\nLocalCLGrouping\nwith\n7.7 using\nLearned\nGraph\nLoopy LEARNING LOOPY GRAPHICAL MODELS WITH LATENT VARIABLES 33 data.\nreturn\nstock\nmonthly\nS&P\nLocalCLGrouping\nwith\n8.2",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 66,
"total_chunks": 68,
"char_count": 642,
"word_count": 94,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e42c14ec-75ba-418a-b690-a75d75d9004e",
"text": "using\nLearned\nGraph\nTree Variation of edge density of graphs at the initialization stage of LocalCLGrouping\nvs. threshold r. In this paper, we considered latent graphical models Markov on girth-constrained graphs and proposed a novel approach for structure\nestimation. We established the correctness of the method when the model\nis in the regime of correlation decay and also derived PAC learning guarantees. We compared these guarantees with other methods for graphical model\nselection, where there are no latent variables. Our findings reveal that latent variables do not add much complexity to the learning process in certain\nmodels and regimes, even when the number of hidden variables is large. These findings push the realm of tractable latent models for learning. Mossel (Berkeley) for detailed\ndiscussions in the beginning regarding problem formulation, modeling and\nalgorithmic approaches and Padhraic Smyth (UCI) and David Newman\n(UCI) for evaluation measures for topic models.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 67,
"total_chunks": 68,
"char_count": 987,
"word_count": 148,
"chunking_strategy": "semantic"
},
{
"chunk_id": "38bbe848-6381-42d9-8b32-fdf292cb028b",
"text": "The authors also thank\nthe editor Tony Cai (Wharton) and anonymous reviewers whose comments\nsubstantially improved the paper. An abridged version of this work appears\nin the Proceedings of NIPS 2012. SUPPLEMENTARY MATERIAL Supplementary material to \"Learning loopy graphical models with latent\nvariables: Efficient methods and guarantees\" (DOI: 10.1214/12-AOS1070SUPP;\n.pdf). Proofs of various theorems.",
"paper_id": "1203.3887",
"title": "Learning loopy graphical models with latent variables: Efficient methods and guarantees",
"authors": [
"Animashree Anandkumar",
"Ragupathyraj Valluvan"
],
"published_date": "2012-03-17",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1203.3887v4",
"chunk_index": 68,
"total_chunks": 68,
"char_count": 403,
"word_count": 54,
"chunking_strategy": "semantic"
}
]