researchpilot-data / chunks /1005.0530_semantic.json
[
{
"chunk_id": "58a29689-9609-4173-87ef-622ef2e96a46",
"text": "Feature Selection with Conjunctions of Decision\nStumps and Learning from Microarray Data\nMohak Shah, Mario Marchand, and Jacques Corbeil Abstract\nOne of the objectives of designing feature selection learning algorithms is to obtain classifiers that depend on a small number\nof attributes and have verifiable future performance guarantees.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 0,
"total_chunks": 49,
"char_count": 338,
"word_count": 48,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6746404f-c331-4469-8e94-3537272c9ec0",
"text": "There are few, if any, approaches that successfully address the two\ngoals simultaneously. Performance guarantees become crucial for tasks such as microarray data analysis due to very small sample\nsizes resulting in limited empirical evaluation. To the best of our knowledge, such algorithms that give theoretical bounds on the\nfuture performance have not been proposed so far in the context of the classification of gene expression data. In this work, we\ninvestigate the premise of learning a conjunction (or disjunction) of decision stumps in Occam's Razor, Sample Compression, and2010\nPAC-Bayes learning settings for identifying a small subset of attributes that can be used to perform reliable classification tasks. We apply the proposed approaches for gene identification from DNA microarray data and compare our results to those of well\nknown successful approaches proposed for the task. We show that our algorithm not only finds hypotheses with much smaller\nnumber of genes while giving competitive classification accuracy but also have tight risk guarantees on future performance unlikeMay other approaches. The proposed approaches are general and extensible in terms of both designing novel algorithms and application\n4 to other domains.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 1,
"total_chunks": 49,
"char_count": 1245,
"word_count": 187,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f1567eab-80c5-4357-8dfe-5d33dffdf1a8",
"text": "Index Terms\nMicroarray data classification, Risk bounds, Feature selection, Gene identification. INTRODUCTION[cs.LG]\nAn important challenge in the problem of classification of high-dimensional data is to design a learning algorithm that can\nconstruct an accurate classifier that depends on the smallest possible number of attributes. Further, it is often desired that\nthere be realizable guarantees associated with the future performance of such feature selection approaches. With the recent\nexplosion in various technologies generating huge amounts of measurements, the problem of obtaining learning algorithms\nwith performance guarantees has acquired a renewed interest. Consider the case of biological domain where the advent of microarray technologies [Eisen and Brown, 1999, Lipshutz et al.,\n1999] have revolutionized the outlook on the investigation and analysis of genetic diseases. In parallel, on the classification\nfront, many interesting results have appeared aiming to distinguish between two or more types of cells, (e.g. diseased vs.\nnormal, or cells with different types of cancers) based on gene expression data in the case of DNA microarrays (see, for\ninstance, [Alon et al., 1999] for results on Colon Cancer, [Golub et al., 1999] for Leukaemia). Focusing on very few genes to\ngive insight into the class association for a microarray sample is quite important owing to a variety of reasons. For instance, a\nsmall subset of genes is easier to analyze as opposed to the set of genes output by the DNA microarray chips. It also makes\nof microarray technology cheaper, affordable, and effective. In the view of a diseased versus a normal sample, these genes can be considered as indicators of the disease's cause. Subsequent validation study focused on these genes, their behavior, and their interactions, can lead to better understanding of\nthe disease. Some attempts in this direction have yielded interesting results. 
See, for instance, a recent algorithm proposed\nby Wang et al. [2007] involving the identification of a gene subset based on importance ranking and subsequently combinations\nof genes for classification. Another example is the approach of Tibshirani et al. [2003] based on nearest shrunken centroids. Some kernel based approaches such as the BAHSIC algorithm [Song et al., 2007] and their extensions (e.g., [Shah and Corbeil,\n2010] for short time-series domains) have also appeared.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 2,
"total_chunks": 49,
"char_count": 2415,
"word_count": 365,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ae0b3cf8-40b0-462c-84a2-d11930fd02b3",
"text": "The traditional methods used for classifying high-dimensional data are often characterized as either \"filters\" (e.g. [Furey et al.,\n2000, Wang et al., 2007] or \"wrappers\" (e.g. [Guyon et al., 2002]) depending on whether the attribute selection is performed\nindependent of, or in conjunction with, the base learning algorithm. Shah is with the Centre for Intelligent Machines, McGill University, Montreal, Canada, H3A 2A7. E-mail: mohak@cim.mcgill.ca\nM.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 3,
"total_chunks": 49,
"char_count": 452,
"word_count": 65,
"chunking_strategy": "semantic"
},
{
"chunk_id": "590c3a6c-2f27-49ea-826f-baa9415dd9ad",
"text": "Marchand is with the Department of Computer Science and Software Engineering, Pav. Adrien Pouliot, Laval University, Quebec, Canada, G1V-0A6. Email: Mario.Marchand@ift.ulaval.ca\nJ. Corbeil is with CHUL Research Center, Laval University, Quebec (QC) Canada, G1V-4G2. Email: Jacques.Corbeil@crchul.ulaval.ca Despite the acceptable empirical results achieved by such approaches, there is no theoretical justification of their performance\nnor do they come with a guarantee on how well will they perform in the future. What is really needed is a learning algorithm\nthat has provably good performance guarantees in the presence of many irrelevant attributes.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 4,
"total_chunks": 49,
"char_count": 652,
"word_count": 90,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4662ba98-53cc-445d-a9c2-53c2b7de4dce",
"text": "This is the focus of the work\npresented here. Contributions\nThe main contributions of this work come in the form of formulation of feature selection strategies within well established\nlearning settings resulting in learning algorithms that combine the tasks of feature selection and discriminative learning. Consequently, we obtain feature selection algorithms for classification with tight realizable guarantees on their generalization\nerror. The proposed approaches are a step towards more general learning strategies that combine feature selection with the\nclassification algorithm and have tight realizable guarantees. We apply the approaches to the task of classifying microarray\ndata where the attributes of the data sample correspond to the expression level measurements of various genes. In fact the\nchoice of decision stumps as learning bias has in part motivated by this application.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 5,
"total_chunks": 49,
"char_count": 893,
"word_count": 129,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3460d48f-b4f2-4619-9631-70e842c1e236",
"text": "The framework is general and extensible\nin a variety of ways. For instance, the learning strategies proposed in this work can readily be extended to other similar tasks\nthat can benefit from this learning bias. An immediate example would be classifying data from other microarray technologies\nsuch as in the case of Chromatin Immunoprecipitation experiments. Similarly, learning biases other than the conjunctions of\ndecision stumps, can also be explored in the same frameworks leading to novel learning algorithms. Motivation\nFor learning the class of conjunctions of features, we draw motivation from the guarantee that exists for this class in the\nfollowing form: if there exists a conjunction, that depends on r out of the n input attributes and that correctly classifies a\ntraining set of m examples, then the greedy covering algorithm of Haussler [1988] will find a conjunction of at most r ln m\nattributes that makes no training errors. Note the absence of dependence on the number n of input attributes. The method is\nguaranteed to find at most r ln m attributes and, hence, depends on the number of available samples m but not on the number\nof attributes n to be analyzed. We propose learning algorithms for building small conjunctions of decision stumps. We examine three approaches to obtain\nan optimal classifier based on this premise that mainly vary in the coding strategies for the threshold of each decision stump. The first two approaches attempt to do this by encoding the threshold either with message strings (Occam's Razor) or by\nusing training examples (Sample Compression). The third strategy (PAC-Bayes) attempts to examine if an optimal classifier\ncan be obtained by trading off the sparsity1 of the classifier with the magnitude of the separating margin of each decision\nstump. In each case, we derive an upper bound on the generalization error of the classifier and subsequently use it to guide the\nrespective algorithm. 
Finally, we present empirical results on the microarray data classification tasks and compare our results\nto the state-of-the-art approaches proposed for the task including the Support Vector Machine (SVM) coupled with feature\nselectors, and Adaboost. The preliminary results of this work appeared in [Marchand and Shah, 2005].",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 6,
"total_chunks": 49,
"char_count": 2275,
"word_count": 363,
"chunking_strategy": "semantic"
},
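The Haussler [1988] guarantee cited in the chunk above rests on greedy set covering: treat each candidate literal as "covering" the negative examples it rules out, and repeatedly add the literal that covers the most still-uncovered negatives. A minimal Python sketch for the boolean monotone-conjunction case; the function name and data layout are illustrative, not from the paper, and a consistent conjunction is assumed to exist:

```python
def greedy_conjunction(X, y):
    """Greedy covering (Haussler-style) for a monotone conjunction.

    X: list of boolean attribute vectors, y: list of 0/1 labels.
    Returns indices of attributes whose conjunction is consistent with
    the training set (assumes such a conjunction exists).
    """
    n = len(X[0])
    pos = [x for x, lab in zip(X, y) if lab == 1]
    neg = [x for x, lab in zip(X, y) if lab == 0]
    # Literals usable without misclassifying any positive example.
    candidates = {i for i in range(n) if all(x[i] == 1 for x in pos)}
    chosen, uncovered = [], list(range(len(neg)))
    while uncovered:
        # Pick the attribute ruling out the most uncovered negatives.
        best = max(candidates,
                   key=lambda i: sum(neg[j][i] == 0 for j in uncovered))
        chosen.append(best)
        candidates.discard(best)
        uncovered = [j for j in uncovered if neg[j][best] == 1]
    return chosen
```

The guarantee quoted above is that this loop terminates after at most r ln m picks when some consistent conjunction over r attributes exists, independently of n.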
{
"chunk_id": "4396b75e-a15e-4d3d-bb4f-18ee7f23a7da",
"text": "Organization\nSection II gives the basic definitions and notions of the learning setting that we utilize and also characterizes the hypothesis\nclass of conjunctions of decision stumps. All subsequent learning algorithms are proposed to learn this hypothesis class. Section III proposes an Occam's Razor approach to learn conjunctions of decision stumps leading to an upper bound on the\ngeneralization error in this framework. Section IV then proposes an alternate encoding strategy for the message strings using\nthe Sample Compression framework and gives a corresponding risk bound. In Section V, we propose a PAC-Bayes approach\nto learn conjunction of decision stumps that enables the learning algorithm to perform an explicit non-trivial margin-sparsity\ntrade-off to obtain more general classifiers. Section VI then proposes algorithms to learn in the three learning settings proposed\nin Sections III, IV and V along with a time complexity analysis. Note that the learning (optimization) strategies proposed in\nSection VI do not affect the respective theoretical guarantees of the learning settings.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 7,
"total_chunks": 49,
"char_count": 1100,
"word_count": 163,
"chunking_strategy": "semantic"
},
{
"chunk_id": "19f2bdba-f8e2-4d6c-9b54-0498666b5837",
"text": "The algorithms are evaluated empirically\non real world microarray datasets in Section VII. Section VIII presents a discussion on the results and also provides an analysis\nof the biological relevance of the selected genes in the case of each dataset, and their agreement with published findings. Finally,\nwe conclude in Section IX. DEFINITIONS\nThe input space X consists of all n-dimensional vectors x = (x1, . . . , xn) where each real-valued component xi ∈[Ai, Bi]\nfor i = 1, . . . n.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 8,
"total_chunks": 49,
"char_count": 485,
"word_count": 85,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a3f54943-c0e4-4f77-a686-bf09ce00c966",
"text": "Each attribute xi for instance can refer to the expression level of gene i. Hence, Ai and Bi are, respectively,\nthe a priori lower and upper bounds on values for xi. The output space Y is the set of classification labels that can be assigned\nto any input vector x ∈X. We focus here on binary classification problems. Each example z = (x, y) is an 1This refers to the number of decision stumps used.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 9,
"total_chunks": 49,
"char_count": 398,
"word_count": 75,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e9843d4d-e727-461d-840c-b25cd1072416",
"text": "input vector x ∈X with its classification label y ∈Y chosen i.i.d. from an unknown distribution D on X × Y. The true risk\nR(f) of any classifier f is defined as the probability that it misclassifies an example drawn according to D: def\nR(f) = Pr(x,y)∼D (f(x) ̸= y) = E(x,y)∼DI(f(x) ̸= y)\nwhere I(a) = 1 if predicate a is true and 0 otherwise. Given a training set S = {z1, . . . , zm} of m examples, the empirical\nrisk RS(f) on S, of any classifier f, is defined according to:\nXm def 1 def\nRS(f) = I(f(xi) ̸= yi) = E(x,y)∼SI(f(x) ̸= y)\ni=1\nThe goal of any learning algorithm is to find the classifier with minimal true risk based on measuring empirical risk (and other\nproperties) on the training sample S. We focus on learning algorithms that construct a conjunction of decision stumps from a training set. Each decision stump\nis just a threshold classifier defined on a single attribute (component) xk. More formally, a decision stump is identified by an\nattribute index k ∈{1, . . . , n}, a threshold value t ∈R, and a direction d ∈{−1, +1} (that specifies whether class 1 is on\nthe largest or smallest values of xk). Given any input example x, the output rktd(x) of a decision stump is defined as:\ndef 1 if (xk −t)d > 0\nrktd(x) = 0 if (xk −t)d ≤0 We use a vector k def= (k1, . . . , k|k|) of attribute indices kj ∈{1, . . . , n} such that k1 < k2 < . . . < k|k| where |k| is the\nnumber of indices present in k (and thus the number of decision stumps in the conjunction) 2. Furthermore, We use a vector\nt = (tk1, tk2, . . . , tk|k|) of threshold values and a vector d = (dk1, dk2, . . . , dk|k|) of directions where kj ∈{1, . . ., n} for\nj ∈{1, . . . , |k|}. 
On any input example x, the output Cktd(x) of a conjunction of decision stumps is given by:\ndef 1 if rjtjdj(x) = 1 ∀j ∈k Cktd(x) =\n0 if ∃j ∈k : rjtjdj(x) = 0\nFinally, any algorithm that builds a conjunction can be used to build a disjunction just by exchanging the role of the positive\nand negative labeled examples. In order to keep our description simple, we describe here only the case of a conjunction. However, the case of disjunction follows symmetrically. AN OCCAM'S RAZOR APPROACH\nOur first approach towards learning the conjunction (or disjunction) of decision stumps is the Occam's Razor approach. Basically, we wish to obtain a hypothesis that can be coded using the least number of bits.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 10,
"total_chunks": 49,
"char_count": 2361,
"word_count": 462,
"chunking_strategy": "semantic"
},
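The stump output r_{ktd} and the conjunction output C_{ktd} defined in the chunk above translate directly into code. A small Python sketch (helper names are mine, not the paper's):

```python
def stump(x, k, t, d):
    """Decision stump r_ktd: 1 if (x[k] - t) * d > 0, else 0."""
    return 1 if (x[k] - t) * d > 0 else 0

def conjunction(x, ks, ts, ds):
    """Conjunction C_ktd: 1 iff every selected stump outputs 1."""
    return 1 if all(stump(x, k, t, d) == 1
                    for k, t, d in zip(ks, ts, ds)) else 0
```

For instance, with stumps (k=0, t=0.1, d=+1) and (k=1, t=5.0, d=−1), the conjunction labels x = (0.2, 3.0) as class 1 because 0.2 > 0.1 and 3.0 < 5.0.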
{
"chunk_id": "ce1fb2b3-b155-4e30-9d07-dbf583344225",
"text": "We first propose an Occam's Razor\nrisk bound which will ultimately guide the learning algorithm. In the case of zero-one loss, we can model the risk of the classifier as a binomial. Let Bin(κ, m, r) be the the binomial tail\nassociated with a classifier of (true) risk r. Then Bin(κ, m, r) is the probability that this classifier makes at most κ errors on\na set of m examples:\nXκ def m\nBin (κ, m, r) = ri(1 −r)m−i\ni=0\nThe binomial tail inversion Bin (κ, m, δ) then gives the largest risk value that a classifier can have while still having a probability\nof at least δ of observing at most κ errors out of m examples [Langford, 2005, Blum and Langford, 2003]: def\nBin (κ, m, δ) = sup {r : Bin (κ, m, r) ≥δ}\nFrom this definition, it follows that Bin (mRS(f), m, δ) is the smallest upper bound, which holds with probability at least\n1 −δ, on the true risk of any classifier f with an observed empirical risk RS(f) on a test set of m examples: ∀f : PrS∼Dm R(f) ≤Bin mRS(f), m, δ ≥1 −δ\nOur starting point is the Occam's razor bound of Langford [2005] which is a tighter version of the bound proposed\nby Blumer et al. [1987]. It is also more general in the sense that it applies to any prior distribution P over any countable class\nof classifiers. 2Although it is possible to use up to two decision stumps on any attribute, we limit ourselves here to the case where each attribute can be used for only\none decision stump. Theorem 1 (Langford [2005]). For any prior distribution P over any countable class F of classifiers, for any data-generating\ndistribution D, and for any δ ∈(0, 1], we have: PrS∼Dm ∀f ∈F : R(f) ≤Bin mRS(f), m, P(f)δ ≥1 −δ The proof (available in [Langford, 2005]) directly follows from a straightforward union bound argument and from the fact P\nP forthat f∈F P(f) = 1. To apply this bound for conjunctionsP of decision stumps we thus need to choose a suitable prior\nthis class. Moreover, Theorem 1 is valid when f∈F P(f) ≤1. 
Consequently, we will use a subprior P whose sum is ≤1.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 11,
"total_chunks": 49,
"char_count": 1994,
"word_count": 376,
"chunking_strategy": "semantic"
},
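The binomial tail Bin(κ, m, r) and its inversion sup{r : Bin(κ, m, r) ≥ δ} described in the chunk above can be computed numerically: the tail is decreasing in r, so bisection finds the supremum. A sketch (function names are mine):

```python
from math import comb

def bin_tail(kappa, m, r):
    """Bin(kappa, m, r): probability of at most kappa errors in m trials
    when each trial errs independently with probability r."""
    return sum(comb(m, i) * r**i * (1 - r)**(m - i)
               for i in range(kappa + 1))

def bin_tail_inversion(kappa, m, delta, tol=1e-10):
    """sup{r : Bin(kappa, m, r) >= delta}, by bisection
    (the tail is decreasing in r)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bin_tail(kappa, m, mid) >= delta:
            lo = mid
        else:
            hi = mid
    return lo
```

For κ = 0 the inversion has the closed form 1 − δ^{1/m}, which is a convenient sanity check.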
{
"chunk_id": "dd50cb09-8e10-49e1-afa1-ac37a07e3aee",
"text": "In our case, decision-stumps' conjunctions are specified in terms of the discrete-valued vectors k and d and the continuousvalued vector t. We will see below that we will use a finite-precision bit string σ to specify the set of threshold values t. Let\nus denote by P(k, d, σ) the prior probability assigned to the conjunction Ckσd described by (k, d, σ). We choose a prior of\nthe following form:\n1 1\nP(k, d, σ) = n p(|k|) gk,d(σ)\n|k| 2|k|\nwhere gk,d(σ) is the prior probability assigned to string σ given that we have chosen k and d. Let M(k, d) be the\nset of all message strings that we can use given that we have chosen k and d. If I denotes the set of all 2n possible attribute index vectors and Dk denotes the set of all 2|k| binary direction vectors d of dimension |k|, we have thatP P P Pn P\nk∈I d∈Dk σ∈M(k,d) P(k, d, σ) ≤1 whenever d=0 p(d) ≤1 and σ∈M(k,d) gk,d(σ) ≤1 ∀k, d.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 12,
"total_chunks": 49,
"char_count": 882,
"word_count": 172,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9c2bed72-a69d-49be-9dfc-85d23e10fc09",
"text": "The reasons motivating this choice for the prior are the following. The first two factors come from the belief that the final\nclassifier, constructed from the group of attributes specified by k, should depend only on the number |k| of attributes in this\ngroup. If we have complete ignorance about the number of decision stumps the final classifier is likely to have, we should\nchoose p(d) = 1/(n + 1) for d ∈{0, 1, . . ., n}. However, we should choose a p that decreases as we increase d if we have\nreasons to believe that the number of decision stumps of the final classifier will be much smaller than n. Since this is usually\nour case, we propose to use:\np(|k|) = (|k| + 1)−2\nThe third factor of P(k, d, σ) gives equal prior probabilities for each of the two possible values of direction dj. To specify the distribution of strings gk,d(σ), consider the problem of coding a threshold value t ∈[a, b] ⊂[A, B] where\n[A, B] is some predefined interval in which we are permitted to choose t and where [a, b] is an interval of \"equally good\"\nthreshold values.3 We propose the following diadic coding scheme for the identification of a threshold value that belongs to\nthat interval. Let l be the number of bits that we use for the code. Then, a code of l bits specifies one value among the set Λl\nof threshold values:\ndef −1 2j −1 Λl = 1 −2j A + B\n2l+1 2l+1 j=1\nWe denote by Ai and Bi, the respective a priori minimum and maximum values that the attribute i can take. These values are\nobtained from the definition of data. Hence, for an attribute i ∈k, given an interval [ai, bi] ⊂[Ai, Bi] of threshold values, we\ntake the smallest number li of bits such that there exists a threshold value in Λli that falls in the interval [ai, bi]. In that way,\nwe will need at most ⌊log2((Bi −Ai)/(bi −ai))⌋bits to obtain a threshold value that falls in [ai, bi]. 
Hence, to specify the threshold for each decision stump i ∈k, we need to specify the number li of bits and a li-bit string\nsi that identifies one of the threshold values in Λli. The risk bound does not depend on how we actually code σ (for some\nreceiver). It only depends on the a priori probabilities we assign to each possible realization of σ. We choose the following\ndistribution:",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 13,
"total_chunks": 49,
"char_count": 2230,
"word_count": 417,
"chunking_strategy": "semantic"
},
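The dyadic threshold set Λ_l and the "smallest l_i such that Λ_{l_i} meets [a_i, b_i]" rule in the chunk above can be checked numerically. A sketch assuming the reconstruction Λ_l = {(1 − (2j−1)/2^{l+1})A + ((2j−1)/2^{l+1})B : j = 1, …, 2^l} (function names are mine):

```python
def dyadic_thresholds(l, A, B):
    """Lambda_l: the 2**l dyadic threshold values in [A, B]
    addressable with an l-bit code."""
    return [(1 - (2*j - 1) / 2**(l + 1)) * A + ((2*j - 1) / 2**(l + 1)) * B
            for j in range(1, 2**l + 1)]

def bits_needed(a, b, A, B, l_max=32):
    """Smallest l such that some value in Lambda_l falls inside [a, b]."""
    for l in range(l_max + 1):
        if any(a <= t <= b for t in dyadic_thresholds(l, A, B)):
            return l
    return None
```

With [A, B] = [0, 1], zero bits address only the midpoint 0.5, one bit addresses {0.25, 0.75}, and so on; wider "good" intervals [a, b] therefore cost fewer bits, matching the ⌊log_2((B − A)/(b − a))⌋ estimate above.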
{
"chunk_id": "717b1180-cb99-4614-80c1-a81ebb578980",
"text": "def\ngk,d(σ) = gk,d(l1, s1, . . . , l|k|, s|k|) (1)\n= ζ(li) · 2−li (2)\ni∈k\nwhere:\ndef 6\nζ(a) = (a + 1)−2 ∀a ∈N (3)\nThe sum over all the possible realizations of σ gives 1 since i=1 i−2 = π2/6. Note that by giving equal a priori probability\nto each of the 2li strings si of length li, we give no preference to any threshold value in Λli. The distribution ζ that we have chosen for each string length li has the advantage of decreasing slowly so that the risk\nbound does not deteriorate too rapidly as li increases.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 14,
"total_chunks": 49,
"char_count": 512,
"word_count": 104,
"chunking_strategy": "semantic"
},
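The normalization claim above can be verified numerically, assuming the reading ζ(a) = (6/π²)(a + 1)^{−2}: each length l contributes 2^l strings of probability ζ(l)·2^{−l}, so the total mass is Σ_l ζ(l), which equals 1 because Σ_{k≥1} k^{−2} = π²/6. A quick check in Python (a finite partial sum, so it approaches 1 from below):

```python
from math import pi

def zeta(a):
    """zeta(a) = (6/pi^2)(a+1)^-2, a prior over code lengths a = 0, 1, 2, ..."""
    return (6 / pi**2) * (a + 1) ** -2

# Mass over all (length, string) pairs: sum_l zeta(l) * 2**l * 2**-l
# = sum_l zeta(l); the partial sum below is within ~1/N of 1.
total = sum(zeta(l) for l in range(200000))
```

The slow quadratic decay of ζ is exactly what keeps the penalty for long codes mild, as the surrounding text notes.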
{
"chunk_id": "507c1d82-2d96-4882-9749-5b2aafe84104",
"text": "Other choices are clearly possible. However, note that the dominant\ncontribution comes from the 2−li term yielding a risk bound that depends linearly in li. 3By a \"good\" threshold value, we mean a threshold value for a decision stump that would cover many negative examples and very few positive examples\n(see the learning algorithm).",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 15,
"total_chunks": 49,
"char_count": 334,
"word_count": 54,
"chunking_strategy": "semantic"
},
{
"chunk_id": "86e74284-c22b-44f6-a662-2054fcf93cf8",
"text": "With this choice of prior, we have the following theorem: Given all our previous definitions and for any δ ∈(0, 1], we have: p(|k|)gk,d(σ)δ\nPr ∀k, d, σ: R(Ckσd) ≤Bin mRS(Ckσd), m, n ≥1 −δ S∼Dm |k| 2|k| Finally, we emphasize that the risk bound of Theorem 2, used in conjunction with the distribution of messages given by\ngk,d(σ), provides a guide for choosing the optimal classifier. Note that the above risk bound suggests a non-trivial trade-off\nbetween the number of attributes and the length of the message string used to encode the classifier. Indeed the risk bound\nmay be smaller for a conjunction having a large number of attributes with small message strings (i.e., small lis) than for a\nconjunction having a small number of attributes but with large message strings.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 16,
"total_chunks": 49,
"char_count": 775,
"word_count": 132,
"chunking_strategy": "semantic"
},
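The Theorem 2 bound in the chunk above can be evaluated end to end by inverting the binomial tail at confidence P(k, d, σ)·δ. A self-contained Python sketch, assuming the prior factors p(|k|) = (|k| + 1)^{−2}, 1/C(n, |k|), 2^{−|k|}, and ζ(l)·2^{−l} per threshold code; all function names and the sample inputs are mine:

```python
from math import comb, pi

def zeta(a):
    # (6/pi^2)(a+1)^-2: prior over code lengths, sums to 1 over a >= 0.
    return (6 / pi**2) * (a + 1) ** -2

def prior(n, k_size, lengths):
    # P(k, d, sigma) = p(|k|) / (C(n,|k|) * 2^{|k|}) * prod_i zeta(l_i) 2^{-l_i}
    p_k = (k_size + 1) ** -2
    g = 1.0
    for l in lengths:
        g *= zeta(l) * 2 ** -l
    return p_k / (comb(n, k_size) * 2 ** k_size) * g

def bin_tail(kappa, m, r):
    return sum(comb(m, i) * r**i * (1 - r)**(m - i) for i in range(kappa + 1))

def occam_bound(errors, m, n, k_size, lengths, delta=0.05, tol=1e-9):
    # Theorem 2: sup{r : Bin(errors, m, r) >= P(k,d,sigma) * delta}, by bisection.
    conf = prior(n, k_size, lengths) * delta
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if bin_tail(errors, m, mid) >= conf else (lo, mid)
    return lo
```

Adding stumps shrinks the prior (larger C(n, |k|), extra ζ(l)·2^{−l} factors) and thus loosens the bound, which is the attribute-count versus code-length trade-off the text describes.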
{
"chunk_id": "8d5a708c-45c6-4901-b660-bb4680c54abe",
"text": "A SAMPLE COMPRESSION APPROACH\nThe basic idea of the Sample compression framework [Kuzmin and Warmuth, 2007] is to obtain learning algorithms with\nthe property that the generated classifier (with respect to some training data) can often be reconstructed with a very small\nsubset of training examples. More formally, a learning algorithm A is said to be a sample-compression algorithm iff there\nexists a compression function C and a reconstruction function R such that for any training sample S = {z1, . . . , zm} (where\ndef\nzi = (xi, yi)), the classifier A(S) returned by A is given by: A(S) = R(C(S)) ∀S ∈(X × Y)m\nFor a training set S, the compression function C of learning algorithm A outputs a subset zi of S, called the compression set,\nand an information message σ, i.e., (zi, σ) = C(z1, . . . , zm). The information message σ contains the additional information\nneeded to reconstruct the classifier from the compression set zi. Given a training sample S, we define the compression set zi\nby a vector of indices i such that i def= (i1, i2, . . . , i|i|), with ij ∈{1, . . ., m}∀j and i1 < i2 < . . . < i|i| and where |i| denotes\nthe number of indices present in i. When given an arbitrary compression set zi and an arbitrary information message σ, the reconstruction function R of a\nlearning algorithm A must output a classifier. The information message σ is chosen from a set M(zi) that consists of all the\ndistinct messages that can be attached to the compression set zi. The existence of this reconstruction function R assures that\nthe classifier returned by A(S) is always identified by a compression set zi and an information message σ. In sample compression settings for learning decision stumps' conjunctions, the message string consists of the attributes and\ndirections defined above. However, the thresholds are now specified by training examples. 
Hence, if we have |k| attributes\nwhere k is the set of thresholds, the compression set consists of |k| training examples (one per threshold). Our starting point is the following generic Sample Compression bound [Marchand and Sokolova, 2005]: For any sample compression learning algorithm with a reconstruction function R that maps arbitrary subsets of\na training set and information messages to classifiers: PS∼Dm {∀i ∈I, σ ∈M(Zi): R(R(σ, Zi)) ≤ǫ(σ, Zi, |j|)} ≥1 −δ where\n−1 m m −|i|\nǫ(σ, zi, |j|) = 1 −exp ln + ln\nm −|i| −|j| |i| |j|\n1 1\n+ ln + ln\nPM(Zi)(σ) ζ(|i|)ζ(|j|)δ\n(4)",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 17,
"total_chunks": 49,
"char_count": 2438,
"word_count": 429,
"chunking_strategy": "semantic"
},
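The risk bound ε of equation (4) above is a closed-form expression and can be evaluated directly. The sketch below assumes the reading ε = 1 − exp(−[ln C(m,|i|) + ln C(m−|i|,|j|) + ln(1/P_{M(z_i)}(σ)) + ln(1/(ζ(|i|)ζ(|j|)δ))]/(m − |i| − |j|)), with |j| taken to be the number of training errors outside the compression set; function names and the sample inputs are mine:

```python
from math import comb, log, exp, pi

def zeta(a):
    # (6/pi^2)(a+1)^-2: prior over non-negative integers, sums to 1.
    return (6 / pi**2) * (a + 1) ** -2

def risk_bound(m, i_size, j_size, p_msg, delta):
    """Sample-compression risk bound in the style of eq. (4):
    m examples, compression set of size i_size, j_size errors outside
    the compression set, message probability p_msg, confidence delta."""
    inner = (log(comb(m, i_size)) + log(comb(m - i_size, j_size))
             + log(1 / p_msg)
             + log(1 / (zeta(i_size) * zeta(j_size) * delta)))
    return 1 - exp(-inner / (m - i_size - j_size))
```

As expected from the formula, the bound tightens with more examples m and loosens as the compression set, the error count, or the message description cost grows.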
{
"chunk_id": "dc7079d9-f150-4c60-875b-000883835920",
"text": "Now, we need to specify the distribution of messages (PM(Zi)(σ)) for the conjunction of decision stumps. Note that in\norder to specify a conjunction of decision stumps, the compression set consists of one example per decision stump. For each\ndecision stump we have one attribute and a corresponding threshold value determined by the numerical value that this attribute\ntakes on the training example. The learner chooses an attribute whose threshold is identified by the associated training example. The set of these training\nexamples form the compression set. Finally, the learner chooses a direction for each attribute. The subset of attributes that specifies the decision stumps in our compression set zi is given by the vector k defined in the\nprevious section. Moreover, since there is one decision stump corresponding to each example in the compression set, we have\n|i| = |k|. Now, we assign equal probability to each possible set |k| of attributes (and hence thresholds) that can be selected\nfrom n attributes. Moreover, we assign equal probability over the direction that each decision stump can have (+1, −1). Hence,\nwe get the following distribution of messages:",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 19,
"total_chunks": 49,
"char_count": 1171,
"word_count": 187,
"chunking_strategy": "semantic"
},
{
"chunk_id": "10c5efc7-d1b9-4849-96e8-8fa40440c608",
"text": "PM(zi)(σ) = · 2−|k| ∀σ (5)\n|k| Equation 5 along with the Sample Compression Theorem completes the bound for the conjunction of decision stumps. A PAC-BAYES APPROACH\nThe Occam's Razor and Sample Compression, in a sense, aim at obtaining sparse classifiers with minimum number of stumps. This sparsity is enforced by selecting the classifiers with minimal encoding of the message strings and the compression set in\nrespective cases. We now examine if by sacrificing this sparsity in terms of a larger separating margin around the decision boundary (yielding\nmore confidence) can lead us to classifiers with smaller generalization error. The learning algorithm is based on the PAC-Bayes\napproach [McAllester, 1999] that aims at providing Probably Approximately Correct (PAC) guarantees to \"Bayesian\" learning\nalgorithms specified in terms of a prior distribution P (before the observation of the data) and a data-dependent, posterior\ndistribution Q over a space of classifiers. We formulate a learning algorithm that outputs a stochastic classifier, called the Gibbs Classifier GQ defined by a datadependent posterior Q. Our classifier will be partly stochastic in the sense that we will formulate a posterior over the threshold\nvalues utilized by the decision stumps while still retaining the deterministic nature for the selected attributes and directions for\nthe decision stumps. Given an input example x, the Gibbs classifier first selects a classifier h according to the posterior distribution Q and then\nuse h to assign the label h(x) to x. The risk of GQ is defined as the expected risk of classifiers drawn according to Q: def\nR(GQ) = Eh∼QR(h) = Eh∼QE(x,y)∼DI(h(x) ̸= y)\nOur starting point is the PAC-Bayes theorem [McAllester, 2003, Langford, 2005, Seeger, 2002] that provides a bound on the\nrisk of the Gibbs classifier: Given any space H of classifiers. 
For any data-independent prior distribution P over H, we have:\n+ ln m+1δ Pr ∀Q : kl(RS(GQ)∥R(GQ)) ≤KL(Q∥P) ≥1 −δ\nS∼Dm m\nwhere KL(Q∥P) is the Kullback-Leibler divergence between distributions4 Q and P: def Q(h)\nKL(Q∥P) = Eh∼Q ln\nP(h) and where kl(q∥p) is the Kullback-Leibler divergence between the Bernoulli distributions with probabilities of success q and\ndef q 1 −q\nkl(q∥p) = q ln + (1 −q) ln\np 1 −p\nThis bound for the risk of Gibbs classifiers can easily be turned into a bound for the risk of Bayes classifiers BQ over the\nposterior Q.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 20,
"total_chunks": 49,
"char_count": 2402,
"word_count": 391,
"chunking_strategy": "semantic"
},
{
"chunk_id": "be236c3f-0d36-4ce7-b877-7b8926a7ad79",
"text": "BQ basically performs a majority vote (under measure Q) of binary classifiers in H. When BQ misclassifies an\nexample x, at least half of the binary classifiers (under measure Q) misclassifies x. It follows that the error rate of GQ is at\nleast half of the error rate of BQ.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 21,
"total_chunks": 49,
"char_count": 273,
"word_count": 50,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a2be6c77-0acf-4fb5-9012-75cc79741517",
"text": "In our case, we have seen that decision stump conjunctions are specified in terms of a mixture of discrete parameters k and\nd and continuous parameters t. If we denote by Pk,d(t) the probability density function associated with a prior P over the\nclass of decision stump conjunctions, we consider here priors of the form:\nY 1 1 I(tj ∈[Aj, Bj])\nPk,d(t) = n p(|k|)\n|k| 2|k| j∈k Bj −Aj\nAs before, we have that: Z Bj X X Y\ndtjPk,d(t) = 1\nk∈I d∈Dk j∈k Aj\nwhenever e=0 p(e) = 1. The factors relating to the discrete components k and d have the same rationale as in the case of the Occam's Razor\napproach. However, in the case of the threshold for each decision stumps, we now consider an explicitly continuous uniform 4Here Q(h) denotes the probability density function associated with Q, evaluated at h.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 22,
"total_chunks": 49,
"char_count": 798,
"word_count": 146,
"chunking_strategy": "semantic"
},
{
"chunk_id": "06dd141a-be54-47f9-8772-d8a61875414e",
"text": "As in the Occam's Razor case, we assume each attribute value xk to be constrained, a priori, in [Ak, Bk] such that Ak\nand Bk are obtained from the definition of the data. Hence, we have chosen a uniform prior probability density on [Ak, Bk]\nfor each tk such that k ∈k. This explains the last factors of Pk,d(t). Given a training set S, the learner will choose an attribute group k and a direction vector d deterministically. We pose the\nproblem of choosing the threshold in a similar manner as in the case of Occam's Razor approach of Section III with the only\ndifference that the learner identifies the interval and selects a threshold stochastically. For each attribute xk ∈[Ak, Bk] : k ∈k,\na margin interval [ak, bk] ⊆[Ak, Bk] is chosen by the learner. A deterministic decision stump conjunction classifier is then\nspecified by choosing the thresholds values tk ∈[ak, bk] uniformly. It is tempting at this point to choose tk = (ak+bk)/2 ∀k ∈k\n(i.e., in the middle of each interval). However, the PAC-Bayes theorem offers a better guarantee for another type of deterministic\nclassifier as we see below. Hence, the Gibbs classifier is defined with a posterior distribution Q having all its weight on the same k and d as chosen\nby the learner but where each tk is uniformly chosen in [ak, bk]. The KL divergence between this posterior Q and the prior\nP is then given by:\nX n 2|k| Bk −Ak\nKL(Q∥P) = ln · + ln\n|k| p(|k|) bk −ak k∈k\nIn this limit when [ak, bk] = [Ak, Bk] ∀k ∈k, it can be seen that the KL divergence between the \"continuous components\"\nof Q and P vanishes. Furthermore, the KL divergence between the \"discrete components\" of Q and P is small for small values\nof |k| (whenever p(|k|) is not too small). Hence, this KL divergence between our choices for Q and P exhibits a tradeoff\nbetween margins (bk −ak) and sparsity (small value of |k|) for Gibbs classifiers. 
Theorem 4 suggests that the GQ with the\nsmallest guarantee of risk R(GQ) should minimize a non trivial combination of KL(Q∥P) and RS(GQ).",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 23,
"total_chunks": 49,
"char_count": 2012,
"word_count": 363,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6984dedd-5247-4b2c-a0ae-817d9e2e9f14",
"text": "The posterior Q is identified by an attribute group vector k, a direction vector d, and intervals [ak, bk] ∀k ∈k. We refine the\nnotation for our Gibbs classifier GQ to reflect this. Hence, we use Gkdab where a and b are the vectors formed by the unions\nof aks and bks respectively. We can obtain a closed-form expression for RS(Gkdab) by first considering the risk R(x,y)(Gkdab)\non a single example (x, y) since RS(Gkdab) = E(x,y)∼SR(x,y)(Gkdab). From our definition for Q, we find that:\n\"Y #\nR(x,y)(Gkdab) = (1 −2y) σdkak,bk(xk) −y (6)\nk∈k\nwhere:\nif (x < a and d = +1) or (b < x and d = −1)  0x−a def b−a if a ≤x ≤b and d = +1 σda,b(x) = b−x b−a if a ≤x ≤b and d = −1 \n1 if (b < x and d = +1) or (x < a and d = −1)\nNote that the expression for R(x,y)(Cktd) is identical to the expression for R(x,y)(Gkdab) except that the piece-wise linear\nfunctions σdkak,bk(xk) are replaced by the indicator functions I((xk −tk)dk > 0). The PAC-Bayes theorem provides a risk bound for the Gibbs classifier Gkdab. Since the Bayes classifier Bkdab just performs\na majority vote under the same posterior distribution as the one used by Gkdab, it follows that:\n( Q\n1 if σdkak,bk(xk) > 1/2 Q k∈k Bkdab(x) = (7)\n0 if k∈k σdkak,bk(xk) ≤1/2 Note that Bkdab has an hyperbolic decision surface. Consequently, Bkdab is not representable as a conjunction of decision\nstumps. There is, however, no computational difficulty at obtaining the output of Bkdab(x) for any x ∈X. We now state our\nmain theorem:\nTheorem 5. Given all our previous definitions, for any δ ∈(0, 1], and for any p satisfying e=0 p(e) = 1, we have, with\nprobability atleast 1 −δ over random draws of S ∼Dm:\n∀k, d, a, b: R(Gkdab) ≤sup ǫ: kl(RS(Gkdab)∥ǫ) ≤ψ where \" #\nX 1 n 2|k| m + 1 Bk −Ak\nψ = ln · · + ln\nm |k| p(|k|) δ bk −ak k∈k\nFurthermore: R(Bkdab) ≤2R(Gkdab) ∀k, d, a, b.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 24,
"total_chunks": 49,
"char_count": 1825,
"word_count": 354,
"chunking_strategy": "semantic"
},
{
"chunk_id": "855bc2b0-10ea-4d76-96d4-0846ef470a03",
"text": "THE LEARNING ALGORITHMS\nHaving proposed the theoretical frameworks attempting to obtain the optimal classifiers based on various optimization criteria,\nwe now detail the learning algorithms for these approaches. Ideally, we would like to find a conjunction of decision stumps\nthat minimizes the respective risk bounds for each approach. Unfortunately, this cannot be done efficiently in all cases since\nthis problem is at least as hard as the (NP-hard) minimum set cover problem as mentioned by Marchand and Shawe-Taylor\n[2002]. Hence, we use a set covering greedy heuristic. It consists of choosing the decision stump i with the largest utility U iSC\nwhere:\nU iSC = |Qi| −p|Ri| (8)\nwhere Qi is the set of negative examples covered (classified as 0) by feature i, Ri is the set of positive examples misclassified\nby this feature, and p is a learning parameter that gives a penalty p for each misclassified positive example. Once the feature\nwith the largest Ui is found, we remove Qi and Ri from the training set S and then repeat (on the remaining examples) until\neither no more negative examples are present or that a maximum number s of features has been reached. This heuristic was\nalso used by Marchand and Shawe-Taylor [2002] in the context of a sample compression classifier called the set covering\nmachine. For our sample compression approach (SC), we use the above utility function U iSC . However, for the Occam's Razor and the PAC-Bayes approaches, we need utility functions that can incorporate the optimization aspects suggested by these approaches. The Occam's Razor learning algorithm\nWe propose the following learning strategy for Occam's Razor learning of conjunctions of decision stumps.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 25,
"total_chunks": 49,
"char_count": 1705,
"word_count": 276,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fefd92b1-e0ed-4530-8fb5-d1244e390992",
"text": "For a fixed li\nand η, let N be the set of negative examples and P be the set of positive examples. We start with N ′ = N and P ′ = P. Let\nQi be the subset of N ′ covered by decision stump i, let Ri be the subset of P ′ covered by decision stump i, and let li be\nthe number of bits used to code the threshold of decision stump i. We choose the decision stump i that maximizes the utility\nU iOccam defined as:\nOccam def |Qi| −p|Ri| −η · li U i = ′ N P\nwhere p is the penalty suffered by covering (and hence, misclassifying) a positive example and η is the cost of using li bits\nfor decision stump i. Once we have found a decision stump maximizing Ui, we update N ′ = N ′ −Qi and P ′ = P ′ −Ri and\nrepeat to find the next decision stump until either N ′ = ∅or the maximum number v of decision stumps has been reached\n(early stopping the greedy). The best values for the learning parameters p, η, and v are determined by cross-validation. The PAC-Bayes Learning Algorithm\nTheorem 5 suggests that the learner should try to find the Bayes classifier Bkdab that uses a small number of attributes (i.e.,\na small |k|), each with a large separating margin (bk −ak), while keeping the empirical Gibbs risk RS(Gkdab) at a low value. As discussed earlier, we utilize the greedy set covering heuristic for learning. In our case, however, we need to keep the Gibbs risk on S low instead of the risk of a deterministic classifier. Since the\nGibbs risk is a \"soft measure\" that uses the piece-wise linear functions σda,b instead of the \"hard\" indicator functions, we\ncannot make use of the hard utility function of Equation 8. Instead, we need a \"softer\" version of this utility function to take\ninto account covering (and erring on) an example partly. That is, a negative example that falls in the linear region of a σda,b is\nin fact partly covered and vice versa for the positive example. 
Following this observation, let k′ be the vector of indices of the attributes that we have used so far for the construction of\nthe classifier. Let us first define the covering value C(Gk′dab ) of Gk′dab by the \"amount\" of negative examples assigned to class\n0 by Gk′dab :\n \nX Y\nC(Gk′dab ) def= (1 −y) 1 − σdjaj,bj(xj)\n(x,y)∈S j∈k′ We also define the positive-side error E(Gk′dab ) of Gk′dab as the \"amount\" of positive examples assigned to class 0 :\n \nX Y\nE(Gk′dab ) def= y 1 − σdjaj,bj(xj)\n(x,y)∈S j∈k′",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 26,
"total_chunks": 49,
"char_count": 2385,
"word_count": 457,
"chunking_strategy": "semantic"
},
{
"chunk_id": "03a534aa-293d-4f0e-a715-9aad614d9b93",
"text": "We now want to add another decision stump on another attribute, call it i, to obtain a new vector k′′ containing this new\nattribute in addition to those present in k′. Hence, we now introduce the covering contribution of decision stump i as:\nCk′dab (i) def= C(Gk′′d′a′b′ ) −C(Gk′dab )\nX h i Y\n= (1 −y) 1 −σdiai,bi(xi) σdjaj,bj(xj)\n(x,y)∈S j∈k′ and the positive-side error contribution of decision stump i as:\nEk′dab (i) def= E(Gk′′d′a′b′ ) −E(Gk′dab )\nX h i Y\n= y 1 −σdiai,bi(xi) σdjaj,bj(xj)\n(x,y)∈S j∈k′\nTypically, the covering contribution of decision stump i should increase its \"utility\" and its positive-side error should\ndecrease it. Moreover, we want to decrease the \"utility\" of decision stump i by an amount which would become large\nwhenever it has a small separating margin.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 27,
"total_chunks": 49,
"char_count": 785,
"word_count": 134,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a91e476d-fd30-4e6e-83cd-04f6c7d36801",
"text": "Our expression for KL(Q∥P) suggests that this amount should be proportional\nto ln((Bi −Ai)/(bi −ai)). Furthermore we should compare this margin term with the fraction of the remaining negative\nexamples that decision stump i has covered (instead of the absolute amount of negative examples covered). Hence the covering\ncontribution Ck′dab (i) of decision stump i should be divided by the amount Nabk′d of negative examples that remains to be\ncovered before considering decision stump i:\nX Y k′d def\nNab = (1 −y) σdjaj,bj(xj)\n(x,y)∈S j∈k′\nwhich is simply the amount of negative examples that have been assigned to class 1 by Gk′dab . If P denotes the set of positive\nexamples, we define the utility U abk′d (i) of adding decision stump i to Gk′dab as:\nk′d def Ck′dab (i) ab (i) Bi −Ai U ab (i) = k′d −pEk′d −η ln\nNab |P| bi −ai\nwhere parameter p represents the penalty of misclassifying a positive example and η is another parameter that controls the\nimportance of having a large margin. These learning parameters can be chosen by cross-validation. For fixed values of these\nparameters, the \"soft greedy\" algorithm simply consists of adding, to the current Gibbs classifier, a decision stump with\nmaximum added utility until either the maximum number v of decision stumps has been reached or all the negative examples\nhave been (totally) covered. It is understood that, during this soft greedy algorithm, we can remove an example (x, y) from S Q\nwhenever it is totally covered. This occurs whenever j∈k′ σdjaj,bj(xj) = 0.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 28,
"total_chunks": 49,
"char_count": 1519,
"word_count": 258,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8fc81825-9666-49fa-88d1-fbac96a2113f",
"text": "Hence, we use the above utility function for the PAC-Bayes learning strategy. Note that, in the case of U iP B and U iOccam ,\nwe normalize the number of covered and erred examples so as to increase their sensitivity to the respective η terms.\n1) Time Complexity Analysis: Let us analyze the time complexity of this algorithm for fixed p and η. For each attribute,\nwe first sort the m examples with respect to their values for the attribute under consideration. This takes O(m log m) time. Then, we examine each potential ai value (defined by the values of that attribute on the examples). Corresponding to each ai,\nwe examine all the potential bi values (all the values greater than ai).",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 29,
"total_chunks": 49,
"char_count": 687,
"word_count": 122,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a11ad6ee-4ada-4918-9f62-b4c76cf33bbf",
"text": "This gives us a time complexity of O(m2). Now if k is\nthe largest number of examples falling into [ai, bi], calculating the covering and error contributions and then finding the best\ninterval [ai, bi] takes O(km2) time. Moreover, we allow k ∈O(m) giving us a time complexity of O(m3) for each attribute. Finally, we do this over all the attributes. Hence, the overall time complexity of the algorithm is O(nm3). Note, however,\nthat for microarray data, we have n >> m (hence, we can consider m3 to be a constant).",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 30,
"total_chunks": 49,
"char_count": 513,
"word_count": 91,
"chunking_strategy": "semantic"
},
{
"chunk_id": "227d61f0-bf15-4496-b3a8-ab2d93eed858",
"text": "Moreover once the best stump is\nfound, we remove the examples covered by this stump from the training set and repeat the algorithm. Now, we know that\ngreedy algorithms of this kind have the following guarantee: if there exist r decision stumps that covers all the m examples,\nthe greedy algorithm will find at most r ln(m) decision stumps. Since we almost always have r ∈O(1), the running time of\nthe whole algorithm will almost always be ∈O(nm3 log(m)). The good news is, since n >> m, the time complexity of our\nalgorithm is roughly linear in n.\n2) Fixed-Margin Heuristic: In order to show why we prefer a uniformly distributed threshold as opposed to the one fixed\nat the middle of the interval [ai, bi] for each stump i, we use an alternate algorithm that we call the fixed margin heuristic. The algorithm is similar to the one described above but with an additional parameter γ. This parameter decides a fixed\nmargin boundary around the threshold, i.e. γ decides the length of the interval [ai, bi].",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 31,
"total_chunks": 49,
"char_count": 1004,
"word_count": 177,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ad7a9758-07c3-43d2-b72a-aeac7ccb77bb",
"text": "The algorithm still chooses the attribute\nvector k, the direction vector d and the vectors a and b. However, the ai's and bi's for each stump i are chosen such that,\n|bi −ai| = 2γ. The threshold ti is then fixed in the middle of this interval, that is ti = (ai+bi)2 . Hence, for each stump i, the\ninterval [ai, bi] = [ti −γ, ti + γ]. For fixed p and γ, a similar analysis as in the previous subsection yields a time complexity\nof O(nm2 log(m)) for this algorithm.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 32,
"total_chunks": 49,
"char_count": 463,
"word_count": 91,
"chunking_strategy": "semantic"
},
{
"chunk_id": "36c455d6-0007-44c8-8d5e-e64a18359942",
"text": "Data Set SVM SVM+gs SVM+rfe Adaboost\nName ex Genes Errs Errs S Errs S Itrs Errs\nColon 62 2000 12.8±1.4 14.4±3.5 256 15.4±4.8 128 20 15.2±2.1\nB MD 34 7129 13.2±1 7.2±2.6 32 10.4±2.4 64 20 9.8±1.1\nC MD 60 7129 28.2±2.2 23.1±2.8 1024 28.2±2.2 7129 50 21.2±2.4\nLeuk 72 7129 21.3±1.4 14±2.8 64 21±3.2 256 20 17.8±1.8\nLung 52 918 8.8±1.3 6.8±1.9 64 7.2±1.8 32 1 2.4±1.4\nBreastER 49 7129 15.3±2.4 10.3±2.7 256 11.2±2.8 256 50 9.8±1.7\nTABLE I\nRESULTS OF SVM, SVM COUPLED WITH GOLUB'S FEATURE SELECTION ALGORITHM (FILTER), SVM WITH RECURSIVE FEATURE ELIMINATION\n(WRAPPER) AND ADABOOST ALGORITHMS ON GENE EXPRESSION DATASETS. Data Set Occam SC\nName ex Genes Errs S bits Errs S\nColon 62 2000 23.6±1.2 1.8±.6 6 18.2±1.8 1.2±.6\nB MD 34 7129 17.2±1.8 1.2±.8 3 17.2±1.3 1.4±.8\nC MD 60 7129 28.6±1.8 2.6±1.1 4 29.2±1.1 1.2±.6\nLeuk 72 7129 27.8±1.7 2.2±.8 6 27.3±1.7 1.4±.7\nLung 52 918 21.7±1.1 1.8±1.2 5 18±1.3 1.2±.5\nBreastER 49 7129 25.4±1.2 3.2±.6 2 21.2±1.5 1.4±.5\nTABLE II\nRESULTS OF THE PROPOSED OCCAM'S RAZOR AND SAMPLE COMPRESSION LEARNING ALGORITHMS ON GENE EXPRESSION DATASETS. EMPIRICAL RESULTS\nThe proposed approaches for learning conjunctions of decision stumps were tested on the six real-world binary microarray\ndatasets viz. the colon tumor [Alon et al., 1999], the Leukaemia [Golub et al., 1999], the B MD and C MD Medulloblastomas\ndata [Pomeroy et al., 2002], the Lung [Garber et al., 2001], and the BreastER data [West et al., 2001]. The colon tumor data set [Alon et al., 1999] provides the expression levels of 40 tumor and 22 normal colon tissues measured\nfor 6500 human genes. We use the set of 2000 genes identified to have the highest minimal intensity across the 62 tissues. The\nLeuk data set [Golub et al., 1999] provides the expression levels of 7129 human genes for 47 samples of patients with Acute\nLymphoblastic Leukemia (ALL) and 25 samples of patients with Acute Myeloid Leukemia (AML). 
The B MD and C MD\ndata sets [Pomeroy et al., 2002] are microarray samples containing the expression levels of 7129 human genes. Data set B MD\ncontains 25 classic and 9 desmoplastic medulloblastomas whereas data set C MD contains 39 medulloblastomas survivors and\n21 treatment failures (non-survivors). The Lung dataset consists of gene expression levels of 918 genes of 52 patients with 39\nAdenocarcinoma and 13 Squamous Cell Cancer [Garber et al., 2001].",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 33,
"total_chunks": 49,
"char_count": 2359,
"word_count": 397,
"chunking_strategy": "semantic"
},
{
"chunk_id": "dc287095-34dc-4849-925f-ba7d86ad807a",
"text": "This data has some missing values which were replaced\nby zeros. Finally, the BreastER dataset is the Breast Tumor data of West et al. [2001] used with Estrogen Receptor status\nto label the various samples. The data consists of expression levels of 7129 genes of 49 patients with 25 positive Estrogen\nReceptor samples and 24 negative Estrogen Receptor samples. The number of examples and the number of genes in each data are given in the \"ex\" and \"Genes\" columns respectively under\nthe \"Data Set\" tab in each table. The algorithms are referred to as \"Occam\" (Occam's Razor), \"SC\" (Sample Compression) and\n\"PAC-Bayes\" (PAC-Bayes) in Tables II to V. They utilize the respective theoretical frameworks discussed in Sections III, IV\nand V along with the respective learning strategies of Section VI.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 34,
"total_chunks": 49,
"char_count": 794,
"word_count": 130,
"chunking_strategy": "semantic"
},
{
"chunk_id": "bc73178f-37ab-4826-a3e2-eba9d59ec86e",
"text": "We have compared our learning algorithm with a linear-kernel soft-margin SVM trained both on all the attributes (gene\nexpressions) and on a subset of attributes chosen by the filter method of Golub et al. [1999]. The filter method consists of\nranking the attributes as function of the difference between the positive-example mean and the negative-example mean and then\nuse only the first ℓattributes. The resulting learning algorithm, named SVM+gs is the one used by Furey et al. [2000] for the\nsame task. Guyon et al. [2002] claimed obtaining better results with the recursive feature elimination method but, as pointed\nout by Ambroise and McLachlan [2002], their work contained a methodological flaw. We use the SVM recursive feature\nelimination algorithm with this bias removed and present these results as well for comparison (referred to as \"SVM+rfe\" in\nTable I). Finally, we also compare our results with the state-of-the-art Adaboost algorithm. For this, we use the implementation\nin the Weka data mining software [Witten and Frank, 2005]. Each algorithm was tested over 20 random permutations of the datasets, with the 5-fold cross validation (CV) method. Each\nof the five training sets and testing sets was the same for all algorithms. The learning parameters of all algorithms and the\ngene subsets (for \"SVM+gs\" and \"SVM+rfe\") were chosen from the training sets only.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 35,
"total_chunks": 49,
"char_count": 1377,
"word_count": 219,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0f6c43ba-4803-4eec-8924-cbb53c67185f",
"text": "This was done by performing a second\n(nested) 5-fold CV on each training set. For the gene subset selection procedure of SVM+gs, we have considered the first ℓ= 2i genes (for i = 0, 1, . . . , 12) ranked\naccording to the criterion of Golub et al. [1999] and have chosen the i value that gave the smallest 5-fold CV error on the\ntraining set. The \"Errs\" column under each algorithm in Tables I to III refer to the average (nested) 5-fold cross-validation\nerror of the respective algorithm with one standard deviation two-sided confidence interval. The \"bits\" column in Table II Data Set PAC-Bayes\nName ex Genes S G-errs B-errs\nColon 62 2000 1.53±.28 14.68±1.8 14.65±1.8\nB MD 34 7129 1.2±.25 8.89±1.65 8.6±1.4\nC MD 60 7129 3.4±1.8 23.8±1.7 22.9±1.65\nLeuk 72 7129 3.2±1.4 24.4±1.5 23.6±1.6\nLung 52 918 1.2±.3 4.4±.6 4.2±.8\nBreastER 49 7129 2.6±1.1 12.8±.8 12.4±.78\nTABLE III\nRESULTS OF THE PAC-BAYES LEARNING ALGORITHM ON GENE EXPRESSION DATASETS. refer to the number of bits used for the Occam' Razor approach.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 36,
"total_chunks": 49,
"char_count": 1008,
"word_count": 174,
"chunking_strategy": "semantic"
},
{
"chunk_id": "785e07db-2992-4b22-9de8-38228b921982",
"text": "The \"G-errs\" and the \"B-errs\" columns in Table III refer to\nthe average nested 5-fold CV error of the optimal Gibbs classifier and the corresponding Bayes classifier with one standard\ndeviation two-sided interval respectively. For Adaboost, 10, 20, 50, 100, 200, 500, 1000 and 2000 iterations for each datasets were tried and the reported results\ncorrespond to the best obtained 5-fold CV error. The size values reported here (the \"S\" columns for \"SVM+gs\"and \"SVM+rfe\",\nand \"Itr\" column for \"AdaBoost\" in Table I) correspond to the number of attributes (genes) selected most frequently by the\nrespective algorithms over all the permutation runs.5 Choosing, by cross-validation, the number of boosting iteration is somewhat\ninconsistent with Adaboost's goal of minimizing the empirical exponential risk. Indeed, to comply with Adaboost's goal, we\nshould choose a large-enough number of boosting rounds that assures the convergence of the empirical exponential risk to its\nminimum value. However, as shown by Zhang and Yu [2005], Boosting is known to overfit when the number of attributes\nexceeds the number of examples. This happens in the case of microarray experiments frequently where the number of genes\nfar exceeds the number of samples, and is also the case in the datasets mentioned above. Early stopping is the recommended\napproach in such cases and hence we have followed the method described above to obtained the best number of boosting\niterations.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 37,
"total_chunks": 49,
"char_count": 1458,
"word_count": 228,
"chunking_strategy": "semantic"
},
{
"chunk_id": "225f8870-5ea1-4d2e-820e-78f9de961879",
"text": "Further, Table IV gives the result for a single run of the deterministic algorithm using the fixed-margin heuristic described\nabove. Table V gives the results for the PAC-Bayes bound values for the results obtained for a single run of the PAC-Bayes\nalgorithm on the respective microarray data sets. Recall that the PAC-Bayes bound provides a uniform upper bound on the\nrisk of the Gibbs classifier.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 38,
"total_chunks": 49,
"char_count": 398,
"word_count": 65,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f69d8be0-7191-4f7e-87b4-ab15941b01c5",
"text": "The column labels refer to the same quantities as above although the errors reported are over a\nsingle nested 5-fold CV run. The \"Ratio\" column of Table V refers to the average value of (bk −ak)/(Bk −Ak) obtained\nover the decision stumps used by the classifiers over 5 testing folds and the \"Bound\" columns of Tables IV and V refer to the\naverage risk bound of Theorem 5 multiplied by the total number of examples in respective data sets. Note, again, that these\nresults are on a single permutation of the datasets and are presented just to illustrate the practicality of the risk bound and the\nrationale of not choosing the fixed-margin heuristic over the current learning strategy. A Note on the Risk Bound\nNote that the risk bounds are quite effective and their relevance should not be misconstrued by observing the results in just\nthe current scenario. One of the most limiting factor in the current analysis is the unavailability of microarray data with larger\nnumber of examples. As the number of examples increase, the risk bound of Theorem 5 gives tighter guarantees. Consider,\nfor instance, if the datasets for the Lung and Colon Cancer had 500 examples. A classifier with the same performance over\n500 examples (i.e. with the same classification accuracy and number of features as currently) would have a bound of about 12\nand 30 percent error instead of current 34.6 and 54.6 percent respectively. This only illustrates how the bound can be more\neffective as a guarantee when used on datasets with more examples. Similarly, a dataset of 1000 examples for Breast Cancer",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 39,
"total_chunks": 49,
"char_count": 1579,
"word_count": 268,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e68de1b9-4997-4d89-b9b0-d051c4b7eaf7",
"text": "5There were no close ties with classifiers with fewer genes. Data Set Stumps:PAC-Bayes(fixed margin)\nName ex Genes Size Errors Bound\nColon 62 2000 1 14 34\nB MD 34 7129 1 7 20\nC MD 60 7129 3 28 48\nLeuk 72 7129 2 21 46\nLung 52 918 2 9 29\nBreastER 49 7129 3 11 31\nTABLE IV\nRESULTS OF THE PAC-BAYES APPROACH WITH FIXED-MARGIN HEURISTIC ON GENE EXPRESSION DATASETS. Data Set Stumps:PAC-Bayes\nName ex Genes Ratio Size G-errs B-errs Bound\nColon 62 2000 0.42 1 12 11 33\nB MD 34 7129 0.10 1 7 7 20\nC MD 60 7129 0.08 5 21 20 45\nLeuk 72 7129 0.002 3 22 21 48\nLung 52 918 0.12 1 3 3 18\nBreastER 49 7129 0.09 2 11 11 29\nTABLE V\nAN ILLUSTRATION OF THE PAC-BAYES RISK BOUND ON A SAMPLE RUN OF THE PAC-BAYES ALGORITHM.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 40,
"total_chunks": 49,
"char_count": 702,
"word_count": 150,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4ce8938a-8329-4c71-86f1-48369d1bf0de",
"text": "with a similar performance can have a bound of about 30 percent instead of current 63 percent. Hence, the current limitation\nin the practical application of the bound comes from limited data availability. As the number of examples increase, the bounds\nprovides tighter guarantees and become more significant.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 41,
"total_chunks": 49,
"char_count": 308,
"word_count": 48,
"chunking_strategy": "semantic"
},
{
"chunk_id": "11e16e59-17c4-4d91-86ac-d1a3ef95c690",
"text": "ANALYSIS\nThe results clearly show that even though \"Occam\" and \"SC\" are able to find sparse classifiers (with very few genes), they\nare not able to obtain acceptable classification accuracies. One possible explanation is that these two approaches focus on\nthe most succinct classifier with their respective criterion. The Sample compression approach tries to minimize the number of\ngenes used but does not take into account the magnitude of the separating margin and hence compromises accuracy. On the\nother hand, the Occam's Razor approach tries to find a classifier that depends on margin only indirectly. Approaches based on\nsample compression as well as minimum description length have shown encouraging results in various domains. An alternate\nexplanation for their suboptimal performance here can be seen in terms of extremely limited sample sizes. As a result, the\ngain in accuracy does not offset the cost of adding additional features in the conjunction. The PAC-Bayes approach seems to\nalleviate these problems by performing a significant margin-sparsity tradeoff. That is, the advantage of adding a new feature is\nseen in terms of a combination of the gain in both margin and the empirical risk. This can be compared to the strategy used\nby the regularization approaches. The classification accuracy of PAC-Bayes algorithm is competitive with the best performing\nclassifier but has an added advantage, quite importantly, of using very few genes. For the PAC-Bayes approach, we expect the Bayes classifier to generally perform better than the Gibbs classifier. This is\nreflected to some extent in the empirical results for Colon, C MD and Leukaemia datasets. However, there is no means to\nprove that this will always be the case.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 42,
"total_chunks": 49,
"char_count": 1739,
"word_count": 275,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5c19599c-4fef-4e91-8429-799787094c64",
"text": "It should be noted that there exist several different utility functions that we can use\nfor each of the proposed learning approaches. We have tried some of these and reported results only for the ones that were\nfound to be the best (and discussed in the description of the corresponding learning algorithms). A noteworthy observation with regard to Adaboost is that the gene subset identified by this algorithm almost always include\nthe ones found by the proposed PAC-Bayes approach for decision stumps. Most notably, the only gene Cyclin D1, a well\nknown marker for Cancer, found for the lung cancer dataset is the most discriminating factor and is commonly found by both\napproaches. In both cases, the size of the classifier is almost always restricted to 1. These observations not only give insights\ninto the absolute peaks worth investigating but also experimentally validates the proposed approaches.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 43,
"total_chunks": 49,
"char_count": 905,
"word_count": 146,
"chunking_strategy": "semantic"
},
{
"chunk_id": "99591092-92d2-4c8c-8ee1-668a7d6abd26",
"text": "Finally, many of the genes identified by the final6 PAC-Bayes classifier include some prominent markers for the corresponding\ndiseases as detailed below. Biological Relevance of the Selected Features\nTable VI details the genes identified by the final PAC-Bayes classifier learned over each dataset after the parameter selection\nphase. There are some prominent markers identified by the classifier. Some of the main genes identified by the PAC-Bayes\napproach are the ones identified by previous studies for each disease— giving confidence in the proposed approach. Some\nof the discovered genes in this case include Human monocyte-derived neutrophil-activating protein (MONAP) mRNA in the\ncase of Colon Cancer dataset and oestrogen receptor in the case of Breast Cancer data, D79205 at-Ribosomal protein L39,\nD83542 at-Cadherin-15 and U29195 at-NPTX2 Neuronal pentraxin II in the case of Medulloblastomas datasets B MD and\nC MD. Other genes identified have biological relevance, for instance, the identification of Adipsin, LAF-4 and HOX1C with\nregard to ALL/AML by our algorithm is in agreement with that of the findings of Chow et al. [2001], Hiwatari et al. [2003]\nand Lawrence and Largman [1992] respectively and the studies that followed. Further, in the case of breast cancer, Estrogen receptors (ER) have shown to interact with BRCA1 to regulate VEGF\ntranscription and secretion in breast cancer cells [Kawai et al., 2002]. These interactions are further investigated by Ma et al.\n[2005]. Further studies for ER have also been done. For instance, Moggs et al. [2005] discovered 3 putative estrogen-response\nelements in Keratin6 (the second gene identified by the PAC-Bayes classifier in the case of BreastER data) in the context of 6This is the classifier learned after choosing the best parameters using nested 5-fold CV and trained on the full dataset. Dataset Gene(s) identified by PAC-Bayes Classifier\nColon 1. 
Hsa˙627 M26383-Human monocyte-derived neutrophil-activating protein (MONAP) mRNA\nB MD 1. D79205 at-Ribosomal protein L39\nC MD 1. S71824 at-Neural Cell Adhesion Molecule, Phosphatidylinositol-Linked Isoform Precursor\n2.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 44,
"total_chunks": 49,
"char_count": 2138,
"word_count": 321,
"chunking_strategy": "semantic"
},
{
"chunk_id": "16a599c7-ecdc-4c99-9fd1-bb30e40a0e48",
"text": "D83542 at-Cadherin-15\n3. U29195 at-NPTX2 Neuronal pentraxin II\n4. X73358 s at-HAES-1 mRNA\n5. L36069 at-High conductance inward rectifier potassium channel alpha subunit mRNA\nLeuk 1. M84526 at-DF D component of complement (adipsin)\n2. U34360 at-Lymphoid nuclear protein (LAF-4) mRNA\n3. M16937 at-Homeo box c1 protein, mRNA\nLung 1. GENE221X-IMAGE 841641-cyclin D1 (PRAD1-parathyroid adenomatosis 1) Hs˙82932 AA487486\nBreastER 1. X03635 at,X03635- class C, 20 probes, 20 in all X03635 5885 - 6402\nHuman mRNA for oestrogen receptor\n2. L42611 f at, L42611- class A, 20 probes, 20 in L42611 1374-1954,\nHomo sapiens keratin 6 isoform K6e KRT6E mRNA, complete cds TABLE VI\nGENES IDENTIFIED BY THE Final PAC-BAYES CLASSIFIER E2-responsive genes identified by microarray analysis of MDA-MD-231 cells that re-express ERα. An important role played\nby cytokeratins in cancer development is also widely known (see for instance Gusterson et al. [2005]). Furthermore, the importance of MONAP in the case of colon cancer and Adipsin in the case of leukaemia data has further\nbeen confirmed by various rank based algorithms as detailed by Su et al. [2003] in the implementation of \"RankGene\", a\nprogram that analyzes and ranks genes for the gene expression data using eight ranking criteria including Information Gain\n(IG), Gini Index (GI), Max Minority (MM), Sum Minority (SM), Twoing Rule (TR), t-statistic (TT), Sum of variances (SV)\nand one-dimensional Support Vector Machine (1S). In the case of Colon Cancer data, MONAP is identified as the top ranked\ngene by four of the eight criteria (IG, SV, TR, GI), second by one (SM), eighth by one (MM) and in top 50 by 1S.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 45,
"total_chunks": 49,
"char_count": 1652,
"word_count": 264,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ca9ac13a-860e-44f7-819e-3599eb027bb8",
"text": "Similarly,\nin the case of Leukaemia data, Adipsin is top ranked by 1S, fifth by SM, seventh by IG, SV, TR, GI and MM and is in top\n50 by TT. These observations provides a strong validation for our approaches.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 46,
"total_chunks": 49,
"char_count": 208,
"word_count": 40,
"chunking_strategy": "semantic"
},
{
"chunk_id": "53a87975-ddac-4033-954b-157f7d55c920",
"text": "Cyclin as identified in the case of Lung Cancer dataset is a well known marker for cell division whose perturbations are\nconsidered to be one of the major factors causing cancer [Driscoll et al., 1999, Masaki et al., 2003]. Finally, the discovered genes in the case of Medulloblastomas are important with regard to the neuronal functioning (esp. S71824, U29195 and L36039) and can have relevance for nervous system related tumors.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 47,
"total_chunks": 49,
"char_count": 430,
"word_count": 70,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5f7cc6dc-a46b-42d0-bbf7-4dfd9365d0b7",
"text": "CONCLUSION\nLearning from high-dimensional data such as that from DNA microarrays can be quite challenging especially when the aim\nis to identify only a few attributes that characterizes the differences between two classes of data. We investigated the premise of\nlearning conjunctions of decision stumps and proposed three formulations based on different learning principles. We observed\nthat the approaches that aim solely to optimize sparsity or the message code with regard to the classifier's empirical risk limits\nthe algorithm in terms of its generalization performance, at least in the present case of small dataset sizes. By trading-off the\nsparsity of the classifier with the separating margin in addition to the empirical risk, the PAC-Bayes approach seem to alleviate\nthis problem to a significant extent. This allows the PAC-Bayes algorithm to yield competitive classification performance while\nat the same time utilizing significantly fewer attributes. As opposed to the traditional feature selection methods, the proposed approaches are accompanied by a theoretical justification\nof the performance. Moreover, the proposed algorithms embed the feature selection as a part of the learning process itself.7\nFurthermore, the generalization error bounds are practical and can potentially guide the model (parameter) selection. When\napplied to classify DNA microarray data, the genes identified by the proposed approaches are found to be biologically significant\nas experimentally validated by various studies, an empirical justification that the approaches can successfully perform meaningful\nfeature selection. Consequently, this represents a significant improvement in the direction of successful integration of machine\nlearning approaches for use in high-throughput data to provide meaningful, theoretically justifiable, and reliable results. 
Such\napproaches that yield a compressed view in terms of a small number of biological markers can lead to a targeted and well\nfocussed study of the issue of interest. For instance, the approach can be utilized in identifying gene subsets from the microarray\nexperiments that should be further validated using focused RT-PCR techniques which are otherwise both costly and impractical\nto perform on the full set of genes. Finally, as mentioned previously, the approaches presented in this wor have a wider relevance, and can have significant\nimplications in the direction of designing theoretically justified feature selection algorithms. These are one of the few approaches\nthat combines the feature selection with the learning process and provide generalization guarantees over the resulting classifiers\nsimultaneously. This property assumes even more significance in the wake of limited size of microarray datasets since it limits\nthe amount of empirical evaluation that can be reliably performed otherwise. Most natural extensions of the approaches and\nthe learning bias proposed here would be in other similar domains including other forms of microarray experiments such as",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 48,
"total_chunks": 49,
"char_count": 3031,
"word_count": 439,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4e455f77-edb1-4397-b557-206334049264",
"text": "7Note that Huang and Chang [2007] proposed one such approach. However, they need multiple SVM learning runs. Hence, their method basically works\nas a wrapper. Chromatin Immunoprecipitation promoter arrays (chIP-Chip) and from Protein arrays. Within the same learning settings, other\nlearning biases can also be explored such as classifiers represented by features or sets of features built on subsets of attributes. ACKNOWLEDGMENT\nThis work was supported by the National Science and Engineering Research Council (NSERC) of Canada [Discovery Grant\nNo. 122405 to MM], the Canadian Institutes of Health Research [operating grant to JC, training grant to MS while at CHUL]\nand the Canada Research Chair in Medical Genomics to JC.",
"paper_id": "1005.0530",
"title": "Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data",
"authors": [
"Mohak Shah",
"Mario Marchand",
"Jacques Corbeil"
],
"published_date": "2010-05-04",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1005.0530v1",
"chunk_index": 49,
"total_chunks": 49,
"char_count": 725,
"word_count": 110,
"chunking_strategy": "semantic"
}
]