[ { "chunk_id": "402b6b4d-54b4-4d9e-8a66-adb30ef35935", "text": "Identifying the Relevant Nodes Without Learning the Model Pe˜na Roland Nilsson Johan Bj¨orkegren Jesper Tegn´er\nLink¨oping University Link¨oping University Karolinska Institutet Link¨oping University\n58183 Link¨oping, Sweden 58183 Link¨oping, Sweden 17177 Stockholm, Sweden 58183 Link¨oping, Sweden", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 0, "total_chunks": 29, "char_count": 298, "word_count": 35, "chunking_strategy": "semantic" }, { "chunk_id": "b904d35b-8c21-4ff5-a476-d1e4e21a7d83", "text": "Abstract efficient in terms of both runtime and data requirements and, thus, can be applied to high-dimensional\nWe propose a method to identify all the nodes domains. We believe that our method can be helpful\nthat are relevant to compute all the conditio- to those working with such domains, where identifying\nnal probability distributions for a given set the optimal domain before learning the BN can reduce\nof nodes. Our method is simple, efficient, costs drastically.\nconsistent, and does not require learning a Although, to our knowledge, the problem addressed in\nBayesian network first. Therefore, our me- this paper has not been studied before, there do exist\nthod can be applied to high-dimensional da- papers that address closely related problems. Geiger et\ntabases, e.g. gene expression databases. al. 
(1990) and Shachter (1988, 1990, 1998) show how\nto identify in a BN structure all the nodes whose BN\nparameters are needed to answer a particular query.\n1 INTRODUCTION Lin and Druzdzel (1997) show how to identify some\nnodes that can be removed from a BN without affecAs part of our project on atherosclerosis gene expres- ting the answer to a particular query. Mahoney and\nsion data analysis, we want to learn a Bayesian net- Laskey (1998) propose an algorithm based on the work\nwork (BN) to answer any query about the state of by Lin and Druzdzel (1997) to construct, from a set of\ncertain atherosclerosis genes given the state of any BN fragments, a minimal BN to answer a particular\nother set of genes. If U denotes all the nodes (genes) query. Madsen and Jensen (1998) show how to idenand T ⊆U denotes the target nodes (atherosclero- tify some operations in the junction tree of a BN that\nsis genes), then we want to learn a BN to answer any can be skipped without affecting the answer to a parquery of the form p(T|Z = z) with Z ⊆U \\ T. Unfor- ticular query. Two are the main differences between\ntunately, learning a BN for U is impossible with our these works and our contribution.", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 1, "total_chunks": 29, "char_count": 1998, "word_count": 345, "chunking_strategy": "semantic" }, { "chunk_id": "78168d83-b4f4-4e97-9a73-1f5316849e3f", "text": "First, they focus\nresources because U contains thousands of nodes. Ho- on a single query while we focus on all the queries\nwever, we do not really need to learn a BN for U to about the target nodes. Second, they require learning\nbe able to answer any query about T: It suffices to a BN structure first while we do not. 
Before going into the details of our contribution, we review some key concepts in the following section.\nresources because U contains thousands of nodes. However, we do not really need to learn a BN for U to be able to answer any query about T: It suffices to learn a BN for U \ I where I, the irrelevant nodes, is a maximal subset of U \ T such that I ⊥⊥ T|Z for all Z ⊆ U \ T \ I. We prove that I is unique and, thus, that U \ I is the optimal domain to learn the BN, because R = U \ T \ I is the (unique) minimal subset of U \ T that contains all the nodes that are relevant to answer all the queries about T.\nWe propose the following method to identify R: R ∈ R iff there exists a sequence of nodes starting with R and ending with some T ∈ T such that every two consecutive nodes in the sequence are marginally dependent. We prove that the method is consistent under the assumptions of strict positivity, composition and weak transitivity. We argue that these assumptions are not too restrictive. It is worth noting that our method is\n2 PRELIMINARIES\nThe following definitions and results can be found in most books on Bayesian networks, e.g. Pearl (1988) and Studený (2005). Let U denote a non-empty finite set of random variables. A Bayesian network (BN) for U is a pair (G, θ) where G, the structure, is a directed and acyclic graph (DAG) whose nodes correspond to the random variables in U and θ, the parameters, are parameters specifying a probability distribution for each X ∈ U given its parents in G, Pa(X). A BN (G, θ) represents a probability distribution for U, p(U), through the factorization p(U) = ∏_{X ∈ U} p(X|Pa(X)). Therefore, it is clear that a BN for U can answer any query p(T|Z = z) with T ⊆ U and Z ⊆ U \ T. Hereinafter, all the probability distributions and DAGs are defined over U, unless otherwise stated.\nX, because p(T|Z, Z′) = p(T|Z) for all Z ⊆ U \ T \ X and Z′ ⊆ X due to decomposition. Therefore, we do not really need to learn a BN for U to be able to answer any query about T: It suffices to learn a BN for U \ X.", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 2, "total_chunks": 29, "char_count": 2114, "word_count": 410, "chunking_strategy": "semantic" }, { "chunk_id": "a1414460-0013-4618-98d2-89ed7f99cda6", "text": "The following theorem characterizes all the sets of nodes that are irrelevant.\nTheorem 1 Let I = {X1, . . . , Xn} denote all the nodes in U \ T such that, for all i, Xi ⊥⊥ T|Z for all Z ⊆ U \ T \ {Xi}. Then, I is the (unique) maximal subset of U \ T such that I ⊥⊥ T|Z for all Z ⊆ U \ T \ I.\nProof: Let Z ⊆ U \ T \ I. Since X1 ⊥⊥ T|Z and X2 ⊥⊥ T|Z ∪ {X1}, then {X1, X2} ⊥⊥ T|Z due to contraction. This together with X3 ⊥⊥ T|Z ∪ {X1, X2} implies {X1, X2, X3} ⊥⊥ T|Z due to contraction again. Continuing this process for the rest of the nodes in I proves that I ⊥⊥ T|Z.\nLet us assume that there exists some I′ ⊆ U such that I′ \ I ≠ ∅ and I′ ⊥⊥ T|Z for all Z ⊆ U \ T \ I′. Let X ∈ I′ \ I. Then, X ⊥⊥ T|Z for all Z ⊆ U \ T \ {X} due to decomposition and weak union. This is a contradiction because X ∉ I.\nLet X, Y and Z denote three mutually disjoint subsets of U. Let X ⊥⊥ Y|Z denote that X is independent of Y given Z in a probability distribution p. Let sep(X, Y|Z) denote that X is separated from Y by Z in a graph G. If G is a DAG, then sep(X, Y|Z) is true when for every undirected path in G between a node in X and a node in Y there exists a node Z in the path such that either (i) Z does not have two parents in the path and Z ∈ Z, or (ii) Z has two parents in the path and neither Z nor any of its descendants in G is in Z. On the other hand, if G is an undirected graph (UG), then sep(X, Y|Z) is true when for every path in G between a node in X and a node in Y there exists some Z ∈ Z in the path. A probability distribution p is faithful to a DAG or UG G when X ⊥⊥ Y|Z iff sep(X, Y|Z). 
G is an independence map", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 3, "total_chunks": 29, "char_count": 1661, "word_count": 359, "chunking_strategy": "semantic" }, { "chunk_id": "49782e34-244f-4c0e-bae6-b0faff58acc9", "text": "of p when X ⊥⊥ Y|Z if sep(X, Y|Z). G is a minimal independence map of p when removing any edge from G makes it cease to be an independence map of p. If p is strictly positive, then it has a unique minimal undirected independence map G, and it can be built via the edge exclusion algorithm: Two nodes X and Y are adjacent in G iff X ̸⊥⊥ Y|U \ {X, Y}. Alternatively, G can be built via the Markov boundary algorithm: Two nodes X and Y are adjacent in G iff Y belongs to the Markov boundary of X. The Markov boundary of X is the set X ⊆ U \ {X} such that (i) X ⊥⊥ U \ X \ {X}|X, and (ii) no proper subset of X satisfies (i).\nConsequently, I is the (unique) maximal subset of U such that I ⊥⊥ T|Z for all Z ⊆ U \ T \ I.\nIt follows from the theorem above that a set of nodes is irrelevant iff it is a subset of I due to decomposition and weak union. Therefore, U \ I is the optimal domain to learn the BN, because R = U \ T \ I is the (unique) minimal subset of U \ T that contains all the nodes that are relevant to answer all the queries about T. The theorem above characterizes R as X ∈ R iff X ̸⊥⊥ T|Z for some Z ⊆ U \ T \ {X}. Unfortunately, this is not a practical characterization due to the potentially huge number of conditioning sets to consider. The following theorem gives a more practical characterization of R.\nLet X, Y, Z and W denote four mutually disjoint subsets of U. Any probability distribution p satisfies the following four properties: symmetry X ⊥⊥ Y|Z ⇒ Y ⊥⊥ X|Z, decomposition X ⊥⊥ Y ∪ W|Z ⇒ X ⊥⊥ Y|Z, weak union X ⊥⊥ Y ∪ W|Z ⇒ X ⊥⊥ Y|Z ∪ W, and contraction X ⊥⊥ Y|Z ∪ W ∧ X ⊥⊥ W|Z ⇒ X ⊥⊥ Y ∪ W|Z. If p is strictly positive, then it satisfies the intersection property X ⊥⊥ Y|Z ∪ W ∧ X ⊥⊥ W|Z ∪ Y ⇒ X ⊥⊥ Y ∪ W|Z. If p is DAG-faithful or UG-faithful, then it satisfies the following two properties: composition X ⊥⊥ Y|Z ∧ X ⊥⊥ W|Z ⇒ X ⊥⊥ Y ∪ W|Z, and weak transitivity X ⊥⊥ Y|Z ∧ X ⊥⊥ Y|Z ∪ {W} ⇒ X ⊥⊥ {W}|Z ∨ {W} ⊥⊥ Y|Z with W ∈ U \ X \ Y \ Z.\nTheorem 2 Let p be a strictly positive probability distribution satisfying weak transitivity. Then, X ∈ R iff there exists a path between X and some T ∈ T in the minimal undirected independence map G of p.\nProof: If there exists no path for X like the one described in the theorem, then X ⊥⊥ T|Z for all Z ⊆ U \ T \ {X} because G is an undirected independence map of p. Consequently, X ∉ R by Theorem 1.\nLet X1, . . . , Xn with Xi ∈ U \ T for all i < n and Xn ∈ T denote the sequence of nodes in the shortest path in G between X1 and a node in T.", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 4, "total_chunks": 29, "char_count": 2417, "word_count": 456, "chunking_strategy": "semantic" }, { "chunk_id": "67511011-fa59-4e8d-9d47-9d6ae350bed8", "text": "Since G is the minimal undirected independence map of p, then\nXi ̸⊥⊥ Xj|U \ {Xi, Xj} (1)\niff Xi and Xj are consecutive in the sequence. We prove that X1 ̸⊥⊥ Xn|U \ {X1, . . . , Xn}, which implies that X1 ̸⊥⊥ T|U \ T \ {X1, . . . , Xn−1} due to weak union and, thus, that X1 ∈ R by Theorem 1. If n = 2, then this is true by equation (1). We now prove it for n > 2. We start by proving that Xi ̸⊥⊥ Xj|U \ {Xi, Xj, Xk} for all Xi and Xj that are consecutive in the sequence.\n3 IDENTIFYING THE RELEVANT NODES\nWe say that X ⊆ U \ T is irrelevant to answer any query about T ⊆ U when X ⊥⊥ T|Z for all Z ⊆ U \ T \ X.\nThese nodes can be identified as follows. 
First, initialize R with T. Second, repeat the following step while possible: For each node in R that has not been considered before, find all the nodes in U \ R that are adjacent to it in G and add them to R. Finally, remove T from R.", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 5, "total_chunks": 29, "char_count": 854, "word_count": 177, "chunking_strategy": "semantic" }, { "chunk_id": "4ed8d7b4-df18-45b1-8e04-cc1014487b34", "text": "The second step can be solved with the help of the edge exclusion algorithm or the Markov boundary algorithm. The conditioning set in every independence test that the edge exclusion algorithm performs is of size |U| − 2. On the other hand, the largest conditioning set in the tests that the Markov boundary algorithm performs is at least of the size of the largest Markov boundary in the connected components.1 Therefore, both algorithms can require a large learning database to return the true adjacent nodes with high probability, because the conditioning sets in some of the tests can be quite large. This is a problem if only a small learning database is available as, for instance, in gene expression data analysis where collecting data is expensive.\nLet us assume that i < j < k. The proof is analogous for k < i < j. By equation (1),\nXi ̸⊥⊥ Xj|U \ {Xi, Xj} (2)\nand\nXi ⊥⊥ Xk|U \ {Xi, Xk}. (3)\nLet us assume that\nXi ⊥⊥ Xj|U \ {Xi, Xj, Xk}. (4)\nThen, Xi ⊥⊥ {Xj, Xk}|U \ {Xi, Xj, Xk} due to contraction on equations (3) and (4) and, thus, Xi ⊥⊥ Xj|U \ {Xi, Xj} due to weak union. This contradicts equa", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 6, "total_chunks": 29, "char_count": 1090, "word_count": 196, "chunking_strategy": "semantic" }, { "chunk_id": "1bfda19e-1392-402e-840e-5933f79803f2", "text": "tion (2) and, thus,\nXi ̸⊥⊥ Xj|U \ {Xi, Xj, Xk}. (5)\nThe following theorem gives a characterization of R that only tests for marginal independence and, thus, the method to identify R it gives rise to requires minimal learning data.\nTheorem 3 Let p be a strictly positive probability distribution satisfying composition and weak transitivity. Then, X1 ∈ R iff there exists a sequence X1, . . . , Xn with Xi ∈ U \ T for all i < n, Xn ∈ T, and Xi ̸⊥⊥ Xi+1|∅ for all i.\nProof: Let X denote all the nodes in U \ T for which there exists a sequence like the one described in the theorem. Let Y denote U \ T \ X. Let W denote all the nodes in U \ T from which there exists a path to some T ∈ T in the minimal undirected independence map G of p.\nWe now prove that Xi ̸⊥⊥ Xk|U \ {Xi, Xj, Xk} for all Xi, Xj and Xk that are consecutive in the sequence. By equation (5), Xi ̸⊥⊥ Xj|U \ {Xi, Xj, Xk} and Xj ̸⊥⊥ Xk|U \ {Xi, Xj, Xk} and, thus, Xi ̸⊥⊥ Xk|U \ {Xi, Xj, Xk} or Xi ̸⊥⊥ Xk|U \ {Xi, Xk} due to weak transitivity. Since the latter contradicts equation (1), we conclude that\nXi ̸⊥⊥ Xk|U \ {Xi, Xj, Xk}. (6)\nFinally, we prove that Xi ⊥⊥ Xj|U \ {Xi, Xj, Xk} for all Xi, Xj and Xk such that neither the first two nor the last two are consecutive in the sequence. 
By equation (1), Xi ⊥⊥ Xj|U \ {Xi, Xj} and Xj ⊥⊥ Xk|U \ {Xj, Xk}. Then,\nXi ⊥⊥ Xj|U \ {Xi, Xj, Xk} (7)\ndue to intersection and decomposition.\nIt can be seen from equations (5), (6) and (7) that the sequence X1, X3, . . . , Xn satisfies equation (1) replacing U by U \ {X2}: Equations (5) and (6) ensure that every two consecutive nodes are dependent, while equation (7) ensures that every two non-consecutive nodes are independent. Therefore, we can repeat the calculations above for the sequence X1, X3, . . . , Xn replacing U by U \ {X2}. This allows us to successively remove the nodes X2, . . . , Xn−1 from the sequence X1, . . . , Xn and conclude that the sequence X1, Xn satisfies equation (1) replacing U by U \ {X2, . . . , Xn−1}. Then, X1 ̸⊥⊥ Xn|U \ {X1, . . . , Xn}.\nIf X ∈ X then all the nodes in its sequence except the last one must be in W, otherwise there is a contradiction because two adjacent nodes in the sequence are marginally independent in p. Then, X ⊆ W and, thus, X ⊆ R because R = W by Theorem 2. Moreover, X ⊥⊥ Y|∅ and Y ⊥⊥ T|∅ for all X ∈ X, Y ∈ Y and T ∈ T, otherwise there is a contradiction because there exists a sequence for Y like the one described in the theorem. Then, Y ⊥⊥ X ∪ T|∅ due to composition and, thus, Y ⊥⊥ T|Z for all Z ⊆ U \ T \ Y due to decomposition and weak union. Consequently, Y ⊆ I by Theorem 1 and, thus, R ⊆ X.\nIn the theorem above, we do not really need to learn G: It suffices to identify the nodes in the connected components of G that include some T ∈ T.\n1 We assume that the Markov boundary of a node is obtained via the incremental association Markov boundary algorithm (Tsamardinos et al. 2003) which, to our knowledge, is the only existing algorithm that satisfies our requirements of scalability and consistency when assuming composition (Peña et al. 2006a).\nIn the method to identify R that follows from the theorem above, we do not really need to perform all", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 7, "total_chunks": 29, "char_count": 3107, "word_count": 601, "chunking_strategy": "semantic" }, { "chunk_id": "9bf87261-c2ea-4505-a1e9-e9763c9427b3", "text": "the |U|(|U| − 1)/2 marginal independence tests if we adopt the following implementation.\nC includes some nodes from I. To identify the optimal", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 8, "total_chunks": 29, "char_count": 141, "word_count": 22, "chunking_strategy": "semantic" }, { "chunk_id": "627ec7a5-7fc4-4bf1-9908-0f8318ed7f13", "text": "domain to learn the BN, C has to be purged as follows before running Theorem 4: Find some X ∈ C such that X ∉ R(C \ {X}), remove X from C, and continue purging the resulting set. We prove that purging a node never makes an irrelevant node (including those purged before) become relevant.\nFirst, initialize R with T. Second, repeat the following step while possible: For each node in R that has not been considered before, find all the nodes in U \ R that are marginally dependent on it and add them to R. Finally, remove T from R. Therefore, this implementation is", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. 
Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 9, "total_chunks": 29, "char_count": 565, "word_count": 104, "chunking_strategy": "semantic" }, { "chunk_id": "52561517-2c30-4a1c-9c1b-e61f540143d2", "text": "It suffices to prove\nefficient in terms of both runtime and data require- that if Y /∈R(C), X ∈C, and X /∈R(C \\ {X}), then\nments and, thus, can be applied to high-dimensional Y /∈R(C \\ {X}). Let Z ⊆U \\ T \\ (C \\ {X}) \\ {Y }. If\ndomains, where identifying the optimal domain before X ∈Z then Y ⊥⊥T|(C \\ {X}) ∪Z because Y /∈R(C).\nlearning the BN can reduce costs drastically. It is also On the other hand, if X /∈Z then Y ⊥⊥T|C ∪Z\nworth noting that the irrelevant nodes are not neces- because Y /∈R(C) and X ⊥⊥T|(C \\ {X}) ∪Z besarily mutually independent, i.e. they can have an cause X /∈R(C \\ {X}). Then, Y ⊥⊥T|(C \\ {X}) ∪Z\narbitrary dependence structure. due to contraction and decomposition. Consequently,\nY ⊥⊥T|(C\\{X})∪Z for all Z ⊆U\\T\\(C\\{X})\\{Y }\nIt goes without saying that there is no guarantee that\nand, thus, Y /∈R(C \\ {X}). When U \\ I(C) is the\nU \\ I will not still be too large to learn a BN. If this\noptimal domain, U \\ I(C) ⊆U \\ I which proves that\nis the case, then one may have to reduce T. Another\ncontext nodes can help to reduce U \\ I.\nsolution may be to consider only the queries about T\nwhere the conditioning set includes the context nodes Finally, it is worth mentioning that Theorems 1-4,\nC ⊆U \\ T. In other words, the BN to be learnt which prove that the corresponding methods to idenshould be able to answer any query p(T|C = c, Z = z) tify R are correct if the independence tests are corwith Z ⊆U \\ T \\ C but not the rest. 
Repeating our rect, also prove that the methods are consistent if the\nreasoning above, it suffices to learn a BN for U \\ I(C) tests are consistent, since the number of tests that the\nwhere I(C) is a maximal subset of U\\T\\C such that methods perform is finite. Kernel-based independence\nI(C) ⊥⊥T|C ∪Z for all Z ⊆U \\ T \\ C \\ I(C). The tests that are consistent for any probability distribufollowing theorem shows that I(C) can be obtained by tion exist (Gretton et al. 2005a, 2005b). For Gaussian\napplying Theorem 3 to p(U\\C|C = c) for any c, under distributions, the most commonly used independence\nthe assumption that p(U \\ C|C = c) has the same test is Fisher's z test which is consistent as well (Kaindependencies for all c.", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 10, "total_chunks": 29, "char_count": 2173, "word_count": 419, "chunking_strategy": "semantic" }, { "chunk_id": "09cb4cf4-9dc9-430c-8bee-912a2037e35a", "text": "We discuss this assumption lish and B¨uhlmann 2005). Specifically, these papers\nin the next section. show that the probability of error for these tests decays exponentially to zero when the sample size goes\nTheorem 4 Let p be a strictly positive probability dis- to infinity.\ntribution satisfying composition and weak transitivity,\nand such that p(U \\ C|C = c) has the same inde-\n4 DISCUSSION ON THEpendencies for all c. Then, the result of applying\nTheorem 3 to p(U \\ C|C = c) for any c is R(C) = ASSUMPTIONS\nU \\ T \\ C \\ I(C). Equivalently, X1 ∈R(C) iffthere\nexists a sequence X1, . . . , Xn with Xi ∈U \\ T \\ C for We now argue that the assumptions of strict positivity,\nall i < n, Xn ∈T, and Xi ̸⊥⊥Xi+1|C for all i. 
Proof: Let X, Y and Z denote three mutually disjoint subsets of U \ C. Since p(U \ C|C = c) has the same independencies for all c then, for any c, X ⊥⊥ Y|Z in p(U \ C|C = c) iff X ⊥⊥ Y|(Z ∪ C) in p.\n4 DISCUSSION ON THE ASSUMPTIONS\nWe now argue that the assumptions of strict positivity, composition and weak transitivity made in Theorems 2-4 are not too restrictive. The assumption of strict positivity is justified in most real-world applications because they typically involve measurement noise.2 We note that Gaussian distributions are strictly positive.", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 11, "total_chunks": 29, "char_count": 1176, "word_count": 220, "chunking_strategy": "semantic" }, { "chunk_id": "ec37c64e-36a1-4c71-952d-40a7054a0fee", "text": "We recall from section 2 that DAG-faithful and UG-faithful probability distributions satisfy composition and weak transitivity. Gaussian distributions satisfy composition and weak transitivity too (Studený 2005).\nThis has two implications. First, Theorem 3 can be applied to p(U \ C|C = c) for any c because it satisfies the strict positivity, composition and weak transitivity properties since p satisfies them. Second, the result of ap", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 12, "total_chunks": 29, "char_count": 432, "word_count": 64, "chunking_strategy": "semantic" }, { "chunk_id": "9e8ae944-c428-45b6-bb85-1c5b121d9178", "text": "plying Theorem 3 to p(U \ C|C = c) for any c is R(C). However, Xi ̸⊥⊥ Xi+1|∅ in p(U \ C|C = c) iff Xi ̸⊥⊥ Xi+1|C in p for all i.\nWe note that the theorem above implies that I(C) is unique. It is also worth noting that we have not claimed that U \ I(C) is the optimal domain to learn the BN. The reason is that it may not be minimal, e.g. if\n2 Note that the fact that the learning data are sparse does not imply that the assumption of strict positivity does not hold. We cannot conclude from a finite sample that some combinations of states are impossible.\nThese are important families of probability distributions. The following theorem, which extends Proposition 1 in Chickering and Meek (2002), shows that composition and weak transitivity are conserved when hidden nodes and selection bias exist. For instance, the", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 13, "total_chunks": 29, "char_count": 851, "word_count": 156, "chunking_strategy": "semantic" }, { "chunk_id": "01389460-f7a8-4873-9ceb-814da0628925", "text": "probability distribution that results from hiding some nodes and instantiating some others in a DAG-faithful probability distribution may not be DAG-faithful but satisfies composition and weak transitivity. This is an important result for gene expression data analysis (see section 6).\nIf p is a Gaussian distribution, then p(U \ W|W = w) has the same independencies for all w, because the independencies in p(U \ W|W = w) only depend on the variance-covariance matrix of p (Anderson 1984). Let us now focus on all the multinomial distributions p for which a", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. 
Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 14, "total_chunks": 29, "char_count": 557, "word_count": 91, "chunking_strategy": "semantic" }, { "chunk_id": "6538a49d-97db-43d1-97b0-373c29f66150", "text": "DAG G is an independence map and denote them by\nM(G). The following theorem, which is inspired by\nTheorem 5 Let p be a probability distribution satis- Theorem 7 in Meek (1995), shows that the probability\nfying composition and weak transitivity. Then, p(U \\ of randomly drawing from M(G) a probability distriW) satisfies composition and weak transitivity. Mo- bution with context-specific independencies is zero.\nreover, if p(U \\ W|W = w) has the same indepenTheorem 6 The probability distributions p in M(G)dencies for all w, then p(U \\ W|W = w) for any w\nfor which there exists some W ⊆U such that p(U \\satisfies composition and weak transitivity. W|W = w) does not have the same independencies\nfor all w have Lebesgue measure zero wrt M(G). Proof: Let X, Y and Z denote three mutually disjoint subsets of U \\ W. Then, X⊥⊥Y|Z in p(U \\ W) Proof: The proof basically proceeds in the same way\niffX ⊥⊥Y|Z in p and, thus, p(U \\ W) satisfies the as that of Theorem 7 in Meek (1995), so we refer the\ncomposition and weak transitivity properties because reader to that paper for more details. Let\np satisfies them. Moreover, if p(U \\ W|W = w) has X, Y and Z denote three disjoint subsets of U \\ W.\nthe same independencies for all w then, for any w, For a constraint such as X ⊥⊥Y|Z to be true in\nX ⊥⊥Y|Z in p(U \\ W|W = w) iffX ⊥⊥Y|(Z ∪W) p(U \\ W|W = w) but false in p(U \\ W|W = w′), the\nin p. Then, p(U \\ W|W = w) for any w satisfies the following equations must be satisfied: p(X = x, Y =\ncomposition and weak transitivity properties because y, Z = z, W = w)p(Z = z, W = w) −p(X = x, Z =\np satisfies them. 
z, W = w)p(Y = y, Z = z, W = w) = 0 for all\nx, y and z. Each equation is a polynomial in theIf we are not interested in all the queries about T but\nBN parameters corresponding to G, because each termonly in a subset of them, then it seems reasonable to\np(V = v) in the equations is the summation of pro-remove from U all the nodes that do not take part in\nducts of BN parameters (Meek 1995). Furthermore,any of the queries of interest before starting the anaeach polynomial is non-trivial, i.e. not all the valueslysis of section 3. Let W denote the nodes removed.\nof the BN parameters corresponding to G are solutionsThe theorem above guarantees that p(U \\ W) satisto the polynomial. To see it, it suffices to rename wfies the assumptions for the analysis of section 3 if p\nto w′ and w′ to w because, originally, X ̸⊥⊥Y|Z insatisfies them.\np(U \\ W|W = w′). Let sol(x, y, z, w) denote the set\nIn the theorem above, we assume that p(U \\ W|W = of solutions to the polynomial for x, y and z.", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 15, "total_chunks": 29, "char_count": 2588, "word_count": 506, "chunking_strategy": "semantic" }, { "chunk_id": "c622e4f3-2594-4a38-b58a-8d4440a253e2", "text": "Then,\nw) has the same independencies for all w. Although sol(x, y, z, w) has Lebesgue measure zero wrt Rn,\nsuch an assumption is not made in Proposition 1 in where n is the number of linearly independent BN paChickering and Meek (2002), the authors agree that it rameters corresponding to G, because it consists of the\nis necessary for the proposition to be correct (perso- solutions to a non-trivial polynomial (Okamoto 1973). 
S S T\nnal communication).", "paper_id": "1206.6847", "title": "Identifying the Relevant Nodes Without Learning the Model", "authors": [ "Jose M. Pena", "Roland Nilsson", "Johan Björkegren", "Jesper Tegnér" ], "published_date": "2012-06-27", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1206.6847v1", "chunk_index": 16, "total_chunks": 29, "char_count": 453, "word_count": 78, "chunking_strategy": "semantic" }, { "chunk_id": "b0f70fba-045d-4f72-9cfe-9f72bb97b621", "text": "Without the assumption, there Then, sol = X,Y,Z,W w x,y,z sol(x, y, z, w) has\ncan exist context-specific independencies that violate Lebesgue measure zero wrt Rn because the finite union\ncomposition or weak transitivity. An example fol- and intersection of sets of Lebesgue measure zero has\nlows. Let p(X, Y, Z, W) be the probability distribu- Lebesgue measure zero too. Consequently, the probation represented by a BN with four binary nodes, bility distributions in M(G) with context-specific instructure {Pa(X) = Pa(Y ) = Pa(W) = ∅, Pa(Z) = dependencies have Lebesgue measure zero wrt Rn be-\n{X, Y, W}}, and parameters p(X) = p(Y ) = p(W) = cause they are contained in sol. Finally, since M(G)\n(0.5, 0.5), p(Z|X, Y, W = 0) = XOR(X, Y ) and has positive Lebesgue measure wrt Rn (Meek 1995),\np(Z|X, Y, W = 1) = OR(X, Y ). Then, p(X, Y, Z, W) the probability distributions in M(G) with contextis faithful to the BN structure and, thus, satisfies com- specific independencies have Lebesgue measure zero\nposition while p(X, Y, Z|W = 0) does not. wrt M(G). We now argue that it is not too restrictive to assume,\nlike in Theorems 4 and 5, that p(U \\ W|W = w) has 5 AN EXAMPLE\nthe same independencies for all w. 
5 AN EXAMPLE

In this section, we illustrate the method to identify R that follows from Theorems 3 and 4 with an example that includes selection bias and context nodes. Let p(C1, C2, I, S, T) be any Gaussian distribution that is faithful to the DAG (a) in figure 1. Such probability distributions exist (Meek 1995). Let us consider the selection bias S = s. Then, p(C1, C2, I, T|S = s) is faithful to the UG (b) in figure 1 and, thus, is not DAG-faithful (Chickering and Meek 2002). Let us now assume that we want to learn a BN from p(C1, C2, I, T|S = s) to answer any query about T with context nodes {C1, C2}. Since p(C1, C2, I, T|S = s) is a Gaussian distribution (Anderson 1984), it satisfies the assumptions in Theorem 4. Therefore, we can apply the method that follows from this theorem to identify R({C1, C2}). The result is R({C1, C2}) = ∅ and, thus, I({C1, C2}) = {I}. Moreover, {C1, C2, T} is the optimal domain to learn the BN because C1 ∈ R({C2}) = {C1, I} and C2 ∈ R({C1}) = {C2, I}. So, we have solved the problem correctly without learning a BN first.

Figure 1: (a) DAG faithful before selection bias, (b) UG faithful after selection bias, (c, d) the only minimal directed independence maps after selection bias.

In the cases where a BN can be learnt first, it is tempting to try to identify R by just studying the BN structure learnt. This can, however, lead to erroneous conclusions. For instance, the best BN structure that can be learnt from p(C1, C2, I, T|S = s) is a minimal directed independence map of it because, as discussed above, this probability distribution is not DAG-faithful. The DAGs (c) and (d) in figure 1 are the only minimal directed independence maps of p(C1, C2, I, T|S = s) (Chickering and Meek 2002). Let us assume that the BN structure learnt is the DAG (c). Then, it seems reasonable to conclude that R({C1, C2}) = {I}, because sep(I, T|{C1, C2}) is false in the BN structure learnt and there exist probability distributions that are faithful to it (Meek 1995). In other words, by just studying the BN structure learnt, it is not possible to detect whether we are dealing with a probability distribution that is faithful to it or not and, thus, it is safer to declare I relevant. We know that this is not the correct solution. The same problem occurs with C1 in the DAG (d) in figure 1 if we want to answer any query about C2 with context nodes {I, T}. As the authors note, the algorithms in Geiger et al. (1990) and Shachter (1988, 1990, 1998) also suffer from this drawback. On the other hand, our method avoids it by studying the probability distribution instead of a possibly inaccurate BN structure learnt from it.

6 CONCLUSIONS

We have reported a method to identify all the nodes that are relevant to compute all the conditional probability distributions for a given set of nodes without having to learn a BN first. We have shown that the method is efficient and consistent under some assumptions, and we have argued that these assumptions are not too restrictive. For instance, composition and weak transitivity, which are the two main assumptions, are weaker than faithfulness. We believe that our work can be helpful to those dealing with high-dimensional domains. This paper builds on the fact that the minimal undirected independence map of a strictly positive probability distribution that satisfies weak transitivity can be used to read certain dependencies (Theorem 2). In (Peña et al. 2006b), we introduce a sound and complete graphical criterion for this purpose.

We are currently studying even less restrictive assumptions than those in this paper. The objective is to develop a new method whose assumptions are satisfied by such an important family of probability distributions as the family of conditional Gaussian distributions because, in general, this family does not satisfy composition. An example follows. Let {X, Y, Z} be a random variable such that X and Y are continuous and Z is binary. Let p(X, Y|Z = z) be a Gaussian distribution for all z. Let these two Gaussian distributions have the same mean vector and the same diagonal of the variance-covariance matrix but different off-diagonal entries. Then, {X, Y} ̸⊥⊥Z|∅ but X ⊥⊥Z|∅ and Y ⊥⊥Z|∅ (Anderson 1984). We note that there do exist conditional Gaussian distributions that satisfy composition and weak transitivity, e.g. those that are DAG-faithful.
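The conditional Gaussian counterexample above can be illustrated by simulation. In the sketch below (our own code; the correlation magnitude 0.8 and the sample size are arbitrary choices, not from the paper), X|Z = z and Y|Z = z are both N(0, 1), so each variable on its own is exactly independent of Z, yet the sign of E[XY|Z = z] identifies z, so the pair {X, Y} is dependent on Z.

```python
import numpy as np

# Two bivariate normals with identical means (0, 0) and unit variances, but
# correlation +0.8 when Z = 0 and -0.8 when Z = 1 (assumed values).
rng = np.random.default_rng(0)
n = 200_000
z = rng.integers(0, 2, size=n)
cov = {0: [[1.0, 0.8], [0.8, 1.0]],
       1: [[1.0, -0.8], [-0.8, 1.0]]}
xy = np.empty((n, 2))
for v in (0, 1):
    m = z == v
    xy[m] = rng.multivariate_normal([0.0, 0.0], cov[v], size=m.sum())
x, y = xy[:, 0], xy[:, 1]

# Marginally, X|Z=z ~ N(0, 1) for both z (likewise Y), so X ind. Z and
# Y ind. Z. Jointly, E[XY|Z=z] = +/-0.8 reveals Z: composition fails.
print(round(x[z == 0].mean(), 2), round(x[z == 1].mean(), 2))  # both near 0
print(round((x[z == 0] * y[z == 0]).mean(), 1))                # near  0.8
print(round((x[z == 1] * y[z == 1]).mean(), 1))                # near -0.8
```

The design choice here mirrors the text: only the off-diagonal of the variance-covariance matrix varies with z, which is exactly what makes the dependence on Z invisible to each coordinate alone.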
We are also currently applying the method proposed in this paper to our atherosclerosis gene expression database. We believe that it is not unrealistic to assume that the probability distribution underlying our data satisfies strict positivity, composition and weak transitivity. The cell is the functional unit of all organisms and includes all the information necessary to regulate its function. This information is encoded in the DNA of the cell, which is divided into a set of genes, each coding for one or more proteins. Proteins are required for practically all the functions in the cell, and the amount of protein produced depends on the expression level of the corresponding gene. A dynamic BN is an accurate model of the cell (Murphy and Mian 1999): the nodes represent the genes and proteins, and the edges and parameters represent the causal relations between the gene expression levels and the protein amounts. It is important that the BN is dynamic because a gene can regulate some of its regulators and even itself with some time delay. Since the technology for measuring the state of the protein nodes is not widely available yet, the data in most projects on gene expression data analysis are a sample of the probability distribution represented by the dynamic Bayesian network after hiding the protein nodes. The probability distribution with no node hidden is almost surely faithful to the dynamic Bayesian network (Meek 1995) and, thus, satisfies composition and weak transitivity (see section 2) and, thus, so does the probability distribution after hiding the protein nodes (see Theorem 5). The assumption that the probability distribution sampled is strictly positive is justified because measuring the state of the gene nodes involves a series of complex wet-lab and computer-assisted steps that introduce noise in the measurements (Sebastiani et al. 2003).

Acknowledgements

We thank the anonymous referees for their comments and pointers to relevant literature. We thank C. Meek for kindly answering our questions about Proposition 1 in Chickering and Meek (2002). This work is funded by the Swedish Research Council (VR-621-2005-4202), the Swedish Foundation for Strategic Research, and Linköping University.

References

Anderson, T. W.: An Introduction to Multivariate Statistical Analysis. John Wiley & Sons (1984).

Chickering, D. M., Meek, C.: Finding Optimal Bayesian Networks. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (2002) 94-102.

Geiger, D., Verma, T., Pearl, J.: Identifying Independence in Bayesian Networks. Networks 20 (1990) 507-534.

Gretton, A., Herbrich, R., Smola, A., Bousquet, O., Schölkopf, B.: Kernel Methods for Measuring Independence. Journal of Machine Learning Research 6 (2005a) 2075-2129.

Gretton, A., Smola, A., Bousquet, O., Herbrich, R., Belitski, A., Augath, M., Murayama, Y., Pauls, J., Schölkopf, B., Logothetis, N.: Kernel Constrained Covariance for Dependence Measurement. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (2005b).

Kalisch, M., Bühlmann, P.: Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm. Technical report (2005). Available at http://stat.ethz.ch/~buhlmann/bibliog.html.

Lin, Y., Druzdzel, M. J.: Computational Advantages of Relevance Reasoning in Bayesian Belief Networks. In Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (1997) 342-350.

Madsen, A. L., Jensen, F. V.: Lazy Propagation in Junction Trees. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (1998) 362-369.

Mahoney, S. M., Laskey, K. B.: Constructing Situation Specific Belief Networks. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (1998) 370-378.

Meek, C.: Strong Completeness and Faithfulness in Bayesian Networks. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (1995) 411-418.

Murphy, K., Mian, S.: Modelling Gene Expression Data Using Dynamic Bayesian Networks. Technical report (1999). Available at http://www.cs.ubc.ca/~murphyk/papers.html.

Okamoto, M.: Distinctness of the Eigenvalues of a Quadratic Form in a Multivariate Sample. The Annals of Statistics 1 (1973) 763-765.

Pearl, J.: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann (1988).

Peña, J. M., Nilsson, R., Björkegren, J., Tegnér, J.: Towards Scalable and Data Efficient Learning of Markov Boundaries. International Journal of Approximate Reasoning (2006a) submitted. Available at http://www.ifm.liu.se/~jmp/ijarecsqarujmp.pdf.

Peña, J. M., Nilsson, R., Björkegren, J., Tegnér, J.: Reading Dependencies from the Minimal Undirected Independence Map of a Graphoid that Satisfies Weak Transitivity. European Workshop on Probabilistic Graphical Models (2006b) submitted. Available at http://www.ifm.liu.se/~jmp/jmppgm2006.pdf.

Sebastiani, P., Gussoni, E., Kohane, I. S., Ramoni, M.: Statistical Challenges in Functional Genomics (with Discussion). Statistical Science 18 (2003) 33-60.

Shachter, R. D.: Probabilistic Inference and Influence Diagrams. Operations Research 36 (1988) 589-604.

Shachter, R. D.: An Ordered Examination of Influence Diagrams. Networks 20 (1990) 535-563.

Shachter, R. D.: Bayes-Ball: The Rational Pastime (for Determining Irrelevance and Requisite Information in Belief Networks and Influence Diagrams). In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (1998) 480-487.

Studený, M.: Probabilistic Conditional Independence Structures. Springer (2005).

Tsamardinos, I., Aliferis, C. F., Statnikov, A.: Algorithms for Large Scale Markov Blanket Discovery. In Proceedings of the Sixteenth International Florida Artificial Intelligence Research Society Conference (2003) 376-380.