researchpilot-data / chunks /1009.0861_semantic.json
[
{
"chunk_id": "795bf4b7-6b67-46c8-bbe7-490bfd916b77",
"text": "On the Estimation of Coherence\nMehryar Mohri\nCourant Institute and Google Research\nNew York, NY\nmohri@cs.nyu.edu\nAmeet Talwalkar\nUniversity of California, Berkeley\nBerkeley, CA\nameet@eecs.berkeley.edu",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 0,
"total_chunks": 25,
"char_count": 208,
"word_count": 25,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0e43c6f5-b6c8-45a0-8368-b15c7f93ec2f",
"text": "Low-rank matrix approximations are often used to help scale standard machine learning algorithms to large-scale problems. Recently, matrix coherence has been used to characterize the ability to extract global information from a subset of matrix entries in the context of these low-rank approximations and other sampling-based algorithms, e.g., matrix completion, robust PCA. Since coherence is defined in terms of the singular vectors of a matrix and is expensive to compute, the practical significance of these results largely hinges on the following question: Can we efficiently and accurately estimate the coherence of a matrix? In this paper we address this question.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 1,
"total_chunks": 25,
"char_count": 680,
"word_count": 100,
"chunking_strategy": "semantic"
},
{
"chunk_id": "aaebb960-0696-434d-84f6-c7c66f761c9f",
"text": "We propose a novel algorithm for estimating coherence from a small number of columns, formally analyze its behavior, and derive a new coherence-based matrix approximation bound based on this analysis. We then present extensive experimental results on synthetic and real datasets that corroborate our worst-case theoretical analysis, yet provide strong support for the use of our proposed algorithm, as these coherence estimates are excellent predictors of the effectiveness of sampling-based matrix approximation on a case-by-case basis. Large-scale datasets are becoming more and more prevalent for problems in a variety of areas, e.g., computer vision, natural language processing, computational biology. However, several standard methods in machine learning, such as spectral clustering, manifold learning techniques, kernel ridge regression or other kernel-based algorithms, do not scale to such orders of magnitude. For such large datasets, these algorithms would require storage of and operations on matrices with thousands to millions of columns and rows, which is especially problematic since these matrices are often not sparse. An attractive solution to such problems involves efficiently generating low-rank approximations to the original matrix of interest. In particular, sampling-based techniques that operate on a subset of the columns of the matrix can be effective solutions to this problem, and have been widely studied within the machine learning and theoretical computer science communities (Drineas et al., 2006; Frieze et al., 1998; Kumar et al., 2009b; Williams and Seeger, 2000). In the context of kernel matrices, the Nyström method (Williams and Seeger, 2000) has been shown to work particularly well in practice for various applications ranging from manifold learning to image segmentation (Fowlkes et al., 2004; Talwalkar et al., 2008). A crucial assumption of these algorithms involves their sampling-based nature, namely that an accurate low-rank approximation of some matrix X ∈ R^{n×m} can be generated exclusively from information extracted from a small subset (l ≪ m) of its columns. This assumption is not generally true for all matrices, and explains the negative results of Fergus et al. (2009). For instance, consider the extreme case:\nX = [e_1 · · · e_r 0 · · · 0], (1)\nwhere e_i is the ith column of the n-dimensional identity matrix and 0 is the n-dimensional zero vector.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 2,
"total_chunks": 25,
"char_count": 2406,
"word_count": 365,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a5202d94-d12c-4fec-aec8-daeb93aa2d08",
"text": "Although this matrix has rank r, it cannot be well approximated by a random subset of l columns unless this subset includes e_1, . . . , e_r. In order to account for such pathological cases, previous theoretical bounds relied on sampling columns of X in an adaptive fashion (Bach and Jordan, 2005; Deshpande et al., 2006; Kumar et al., 2009b; Smola and Schölkopf, 2000) or from non-uniform distributions derived from properties of X (Drineas and Mahoney, 2005; Drineas et al., 2006). Indeed, these bounds give better guarantees for pathological cases, but are often quite loose nonetheless, e.g., when dealing with kernel matrices using RBF kernels, and these sampling schemes are rarely utilized in practice.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 3,
"total_chunks": 25,
"char_count": 708,
"word_count": 115,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2eb638b0-e123-401d-94fa-2d0b3568d8e9",
"text": "More recently, Talwalkar and Rostamizadeh (2010) used the notion of coherence to characterize the ability to extract information from a small subset of columns, showing theoretical and empirical evidence that coherence is tied to the performance of the Nyström method. Coherence measures the extent to which the singular vectors of a matrix are correlated with the standard basis. Intuitively, if the dominant singular vectors of a matrix are incoherent, then the subspace spanned by these singular vectors is likely to be captured by a random subset of sampled columns of the matrix. In fact, coherence-based analysis of algorithms has been an active field of research, starting with pioneering work on compressed sensing (Candès et al., 2006; Donoho, 2006), as well as related work on matrix completion (Candès and Recht, 2009; Keshavan et al., 2009b) and robust principal component analysis (Candès et al., 2009). In Candès and Recht (2009), the use of coherence is motivated by results showing that several classes of randomly generated matrices have low coherence with high probability, one of which is the class of matrices generated from uniform random orthonormal singular vectors and arbitrary singular values.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 4,
"total_chunks": 25,
"char_count": 1224,
"word_count": 189,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1d284263-0826-43f3-9c84-c40550b4d363",
"text": "Unfortunately, these results do not help a practitioner compute coherence on a case-by-case basis to determine whether attractive theoretical bounds hold for the task at hand. Furthermore, the coherence of a matrix is by definition derived from its singular vectors and is thus expensive to compute (the prohibitive cost of calculating singular values and singular vectors is precisely the motivation behind sampling-based techniques). Hence, in spite of the numerous theoretical works based on related notions of coherence, the practical significance of these results largely hinges on the following open question: Can we efficiently and accurately estimate the coherence of a matrix? In this paper we address this question by presenting a novel algorithm for estimating matrix coherence from a small number of columns.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 5,
"total_chunks": 25,
"char_count": 819,
"word_count": 123,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ec6a06da-4474-4b1d-b620-cea4502aa34f",
"text": "The remainder of this paper is organized as follows. Section 2.1 introduces basic definitions and provides a brief background on low-rank matrix approximation and matrix coherence. In Section 3 we introduce our sampling-based algorithm to estimate matrix coherence. We then formally analyze its behavior in Section 4, and also use this analysis to derive a novel coherence-based bound for matrix projection reconstruction via Column-sampling (defined in Section 2.2). Finally, in Section 5 we present extensive experimental results on synthetic and real datasets. These results corroborate our worst-case theoretical analysis, yet provide strong support for the use of our proposed algorithm whenever sampling-based matrix approximation is being considered. Empirically, our algorithm effectively estimates matrix coherence across a wide range of datasets, and these coherence estimates are excellent predictors of the effectiveness of sampling-based matrix approximation on a case-by-case basis.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 6,
"total_chunks": 25,
"char_count": 997,
"word_count": 138,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0d41a4e5-2b97-4c1a-80d3-717fd40b48bb",
"text": "2.1 Notation\nLet X ∈ R^{n×m} be an arbitrary matrix. We define X^{(j)}, j = 1, . . . , m, as the jth column vector of X, X_{(i)}, i = 1, . . . , n, as the ith row vector of X, and X_{ij} as the ijth entry of X. We denote by ∥X∥_F the Frobenius norm of X and by ∥v∥ the l2 norm of the vector v. If rank(X) = r, we can write the thin Singular Value Decomposition (SVD) as X = U_X Σ_X V_X^⊤. Σ_X is diagonal and contains the singular values of X sorted in decreasing order, i.e., σ_1(X) ≥ σ_2(X) ≥ . . . ≥ σ_r(X). U_X ∈ R^{n×r} and V_X ∈ R^{m×r} have orthogonal columns that contain the left and right singular vectors of X corresponding to its singular values. We define P_X = U_X U_X^⊤ as the orthogonal projection matrix onto the column space of X, and denote the projection onto its orthogonal complement as P_{X,⊥} = I − P_X. We further define X^+ ∈ R^{m×n} as the Moore-Penrose pseudoinverse of X, with X^+ = V_X Σ_X^+ U_X^⊤. Finally, we define K ∈ R^{n×n} as a symmetric positive semidefinite (SPSD) matrix with rank(K) = r ≤ n, i.e., a symmetric matrix with non-negative eigenvalues.\n2.2 Low-rank matrix approximation\nStarting with an n × m matrix, X, we are interested in algorithms that generate a low-rank approximation, X̃, from a sample of l ≪ n of its columns. The accuracy of this approximation is often measured using the Frobenius or spectral distance, i.e., ∥X − X̃∥_{2,F}. We next briefly describe two of the most common algorithms of this form, the Column-sampling and the Nyström methods. The Column-sampling method generates approximations to arbitrary rectangular matrices. We first sample l columns of X such that X = [X_1 X_2], where X_1 has l columns, and then use the SVD of X_1, X_1 = U_{X_1} Σ_{X_1} V_{X_1}^⊤, to approximate the SVD of X (Frieze et al., 1998). This method is most commonly used to generate a 'matrix projection' approximation (Kumar et al., 2009b) of X as follows:\nX̃_col = U_{X_1} U_{X_1}^⊤ X. (2)",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 7,
"total_chunks": 25,
"char_count": 1832,
"word_count": 331,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5e5e303a-a984-4985-bd24-5ba68e3a0ecf",
"text": "The runtime of the Column-sampling method is dominated by the SVD of X_1, which takes O(nl^2) time to perform and is feasible for small l. In contrast to the Column-sampling method, the Nyström method deals only with SPSD matrices. We start with an n × n SPSD matrix, sampling l columns such that K = [K_1 K_2], where K_1 has l columns, and define W as the l × l matrix consisting of the intersection of these l columns with the corresponding l rows of K. Since K is SPSD, W is also SPSD. Without loss of generality, we can rearrange the columns and rows of K based on this sampling such that:\nK = [W K̂_1^⊤; K̂_1 K̂_2], where K_1 = [W; K̂_1] and K_2 = [K̂_1^⊤; K̂_2]. (3)\nThe Nyström method uses W and K_1 from (3) to generate a 'spectral reconstruction' (Kumar et al., 2009b) approximation of K as K̃_nys = K_1 W^+ K_1^⊤. Since the running time complexity of SVD on W is in O(l^3) and matrix multiplication with K_1 takes O(l^2 n), the total complexity of the Nyström approximation computation is also in O(l^2 n). Matrix coherence measures the extent to which the singular vectors of a matrix are correlated with the standard basis. As previously mentioned, coherence has been used to analyze techniques such as compressed sensing, matrix completion, robust PCA, and the Nyström method. These analyses have used a variety of related notions of coherence. If we let e_i be the ith column of the standard basis, we can define three basic notions of coherence as follows: Definition 1 (µ-Coherence).",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 8,
"total_chunks": 25,
"char_count": 1465,
"word_count": 268,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f52f0868-2adb-480c-9370-f13d92c93289",
"text": "Let U ∈ R^{n×r} contain orthonormal columns with r < n. Then the µ-coherence of U is:\nµ(U) = √n max_{i,j} |U_{ij}|. (4)\nDefinition 2 (µ0-Coherence). Let U ∈ R^{n×r} contain orthonormal columns with r < n, and define P_U = UU^⊤ as its associated orthogonal projection matrix. Then the µ0-coherence of U is:\nµ0(U) = (n/r) max_{1≤i≤n} ∥P_U e_i∥^2 = (n/r) max_{1≤i≤n} ∥U_{(i)}∥^2. (5)\nDefinition 3 (µ1-Coherence). Given the matrix X ∈ R^{n×m} with rank r and left and right singular vectors U_X and V_X, define T = ∑_{1≤k≤r} U_X^{(k)} (V_X^{(k)})^⊤. Then the µ1-coherence of X is:\nµ1(X) = √(nm/r) max_{i,j} |T_{ij}|. (6)\nIn Talwalkar and Rostamizadeh (2010), µ(U) is used to provide coherence-based bounds for the Nyström method, where U corresponds to the singular vectors of a low-rank SPSD kernel matrix. Low-rank matrices are also the focus of work on matrix completion by Candès and Recht (2009) and Keshavan et al. (2009b), though they deal with more general rectangular matrices with SVD X = U_X Σ_X V_X^⊤, and they use µ0(U_X), µ0(V_X) and µ1(X) to bound the performance of two different matrix completion algorithms. Note that a stronger, more complex notion of coherence is used in Candès and Tao (2009) to provide tighter bounds for the matrix completion algorithm presented in Candès and Recht (2009) (definition omitted here). Moreover, coherence has also been used to analyze algorithms dealing with low-rank matrices in the presence of noise, e.g., Candès and Plan (2009) and Keshavan et al. (2009a) for noisy matrix completion and Candès et al. (2009) for robust PCA. In these analyses, the coherence of the underlying low-rank matrix once again appears in the form of µ0(·) and µ1(·). In this work we choose to focus on µ0. In comparison to µ, µ0 is a more robust measure of coherence, as it deals with row norms of U, rather than the maximum entry of U, and the two notions are related by a simple pair of inequalities: µ^2/r ≤ µ0 ≤ µ^2. Furthermore, since we focus on coherence in the context of algorithms that sample columns of the original matrix, µ0 is a more natural choice than µ1, since existing coherence-based bounds for these algorithms (both in Talwalkar and Rostamizadeh (2010) and in Section 4 of this work) only depend on the left singular vectors of the matrix.\n3 Estimate-Coherence Algorithm",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 9,
"total_chunks": 25,
"char_count": 2244,
"word_count": 386,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0bf9be31-4566-4c36-bb49-a7f9196971c7",
"text": "As discussed in the previous section, matrix coherence has been used to analyze a variety of algorithms, under the assumption that the input matrix is either exactly low-rank or is low-rank with the presence of noise. In this section, we present a novel algorithm to estimate the coherence of matrices following the same assumption. Starting with an arbitrary n × m matrix, X, we are ultimately interested in an estimate of µ0(U_X), which contains the scaling factor n/r as shown in Definition 2. However, our estimate will also involve singular vectors in dimension n, and as we mentioned above, r is assumed to be small. Hence, neither of these scaling terms has a significant impact on our estimation. As such, our algorithm focuses on the closely related expression:\nγ(U) = max_{1≤i≤n} ∥P_U e_i∥^2 = (r/n) µ0(U). (7)",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 10,
"total_chunks": 25,
"char_count": 802,
"word_count": 133,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5218815d-30de-4c7b-95c5-39547473b012",
"text": "Our proposed algorithm is quite similar in flavor to the Column-sampling algorithm discussed in Section 2.2. It estimates coherence by first sampling l columns of the matrix and subsequently using the left singular vectors of this submatrix to obtain an estimate. Note that our algorithm applies both to exact low-rank matrices as well as low-rank matrices perturbed by noise. In the latter case, the algorithm requires a user-defined low-rank parameter, r. The runtime of this algorithm is dominated by the singular value decomposition of the n × l submatrix, and hence is in O(l^2 n). The details of the Estimate-Coherence algorithm are presented in Figure 1.\nInput: n × l matrix (X_1) storing l columns of arbitrary n × m matrix X, low-rank parameter (r)\nOutput: An estimate of the coherence of X\nEstimate-Coherence(X_1, r)\n1 U_{X_1} ← SVD(X_1) ✄ keep left singular vectors\n2 q ← min(rank(X_1), r)\n3 Ũ ← Truncate(U_{X_1}, q) ✄ keep top q singular vectors of X_1\n4 γ(X_1) ← Calculate-Gamma(Ũ) ✄ see equation (7)\n5 return γ(X_1)\nFigure 1: The proposed sampling-based algorithm to estimate matrix coherence. Note that r is only required when X is perturbed by noise.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 11,
"total_chunks": 25,
"char_count": 1142,
"word_count": 187,
"chunking_strategy": "semantic"
},
{
"chunk_id": "886f2e3b-b5bf-4506-a907-0ce2cba013f9",
"text": "In this section we present a theoretical analysis of Estimate-Coherence when used with low-rank matrices. Our main theoretical results are presented in Theorem 1. Define X ∈ R^{n×m} with rank(X) = r ≪ n, and denote by U_X the r left singular vectors of X corresponding to its non-zero singular values. Let the orthogonal projection onto span(X_1) be denoted by P_{X_1} = U_{X_1} U_{X_1}^⊤, and define the projection onto its orthogonal complement as P_{X_1,⊥}.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 12,
"total_chunks": 25,
"char_count": 434,
"word_count": 70,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5b47efcb-d3de-47ad-96f6-88cf082e9fca",
"text": "Let X_1 be a set of l columns of X sampled uniformly at random, and let x be a column of X not in X_1 that is sampled uniformly at random. Then the following statements can be made about γ(X_1), which is the output of Estimate-Coherence(X_1):\n1. γ(X_1) is a monotonically increasing estimate of γ(X). Furthermore, if X′_1 = [X_1 x] with x_⊥ = P_{X_1,⊥} x, then 0 ≤ γ(X′_1) − γ(X_1) ≤ γ(z), where z = x_⊥/∥x_⊥∥.\n2. γ(X_1) = γ(X) when rank(X_1) = rank(X), and the probability of this event is dependent on the coherence of X. Specifically, for any δ > 0, it occurs with probability 1 − δ for l ≥ r^2 µ0(U_X) max(C_1 log(r), C_2 log(3/δ)) for positive constants C_1 and C_2.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 13,
"total_chunks": 25,
"char_count": 644,
"word_count": 122,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1692c650-0047-4930-b6d3-dee161aa4410",
"text": "The second statement in Theorem 1 leads to Corollary 1, which relates matrix coherence to the performance of the Column-sampling algorithm when used for matrix projection on a low-rank matrix. Assume the same notation as defined in Theorem 1, and let X̃_col be the matrix projection approximation generated by the Column-sampling method using X_1, as described in (2). Then, for any δ > 0, X̃_col = X with probability 1 − δ, for l ≥ r^2 µ0(U_X) max(C_1 log(r), C_2 log(3/δ)) for positive constants C_1 and C_2. When rank(X_1) = rank(X), the columns of X_1 span the columns of X. Hence, when this event occurs, projecting X onto the span of the columns of X_1 leaves X unchanged.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 14,
"total_chunks": 25,
"char_count": 661,
"word_count": 115,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3b03b795-aa0b-429e-a75b-6667bffb25c8",
"text": "The second statement in Theorem 1 bounds the probability of this event.\n4.1 Proof of Theorem 1\nWe first present Lemmas 1 and 2, and then complete the proof of Theorem 1 using these lemmas. Lemma 1. Assume the same notation as defined in Theorem 1. Further, let P_{X′_1} be the orthogonal projection onto span(X′_1) and define s = ∥x_⊥∥. Then, for any l ∈ [1, n − 1], the following equalities relate the projection matrix P_{X′_1} to P_{X_1}:\nP_{X′_1} = P_{X_1} + zz^⊤ if s > 0; P_{X′_1} = P_{X_1} if s = 0. (8)\nProof. First assume that s = 0, which implies that x is in the span of the columns of X_1. Since orthogonal projections are unique, then clearly P_{X′_1} = P_{X_1} in this case. Next, assume that s > 0, in which case the span of the columns of X′_1 can be viewed as the subspace spanned by the columns of X_1 along with the subspace spanned by the residual of x, i.e., x_⊥. Observe that zz^⊤ is the orthogonal projection onto span(x_⊥). Since these two subspaces are orthogonal and since orthogonal projection matrices are unique, we can write P_{X′_1} as the sum of the orthogonal projections onto these subspaces, which matches the statement of the lemma for s > 0. Lemma 2. Assume the same notation as defined in Theorem 1. If l ≥ r^2 µ0(U_X) max(C_1 log(r), C_2 log(3/δ)), where C_1 and C_2 are positive constants, then for any δ > 0, rank(X_1) = r with probability at least 1 − δ. Proof. Assuming uniform sampling at random, Talwalkar and Rostamizadeh (2010) shows that Pr[rank(X_1) = r] ≥ Pr[∥c V_{X,l}^⊤ V_{X,l} − I∥_2 < 1] for any c ≥ 0, where V_{X,l} ∈ R^{l×r} corresponds to the first l components of the r right singular vectors of X.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 15,
"total_chunks": 25,
"char_count": 1537,
"word_count": 288,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b3ec2fa2-fe1a-44e0-b11c-40acadad84d9",
"text": "Applying Theorem 1.2 in Candès and Romberg (2007) and using the identity rµ0 ≥ µ^2 yields the statement of the lemma. Now, to prove Theorem 1 we analyze the difference:\n∆_l = γ(X′_1) − γ(X_1) = max_j e_j^⊤ P_{X′_1} e_j − max_i e_i^⊤ P_{X_1} e_i. (9)\nIf s = ∥x_⊥∥ = 0, then by Lemma 1, ∆_l = 0. If s > 0, then using Lemma 1 and (9) yields:\n∆_l = max_j e_j^⊤ (P_{X_1} + zz^⊤) e_j − max_i e_i^⊤ P_{X_1} e_i (10)\n≤ max_j e_j^⊤ zz^⊤ e_j = γ(z). (11)\nIn (10), we use the fact that orthogonal projections are always SPSD, which means that e_j^⊤ zz^⊤ e_j ≥ 0 for all j and ensures that ∆_l ≥ 0. In (11) we decouple the max(·) over P_{X_1} and zz^⊤ to obtain the inequality and then apply the definition of γ(·), which yields the first statement of Theorem 1. Finally, the second statement of Theorem 1 follows directly from Lemma 1 when s = 0 along with Lemma 2, as the former shows that ∆_l = 0 if rank(X_1) = rank(X) and the latter gives a coherence-based finite-sample bound on the probability of this event occurring. Theorem 1 suggests that the ability to estimate matrix coherence is dependent on the coherence of the matrix itself. In fact, if we adversarially construct a high coherence matrix and select columns from this matrix in an unfortunate manner, the results are quite discouraging. For instance, imagine that we generate a random SPSD matrix, e.g., using the Rand function in Matlab, and then replace its first diagonal entry with an arbitrarily large value, leading to a very high coherence matrix. If we subsequently force our sampling mechanism to ignore the first column of this matrix, we are completely unable to estimate coherence using Estimate-Coherence, as illustrated in Figure 2 on a synthetic matrix generated in Matlab following this procedure, with n = 1000 and k = 50.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 16,
"total_chunks": 25,
"char_count": 1776,
"word_count": 318,
"chunking_strategy": "semantic"
},
{
"chunk_id": "46aa101b-db7a-4b28-adc7-46480c103e6c",
"text": "Figure 2: Synthetic dataset illustrating worst-case performance of Estimate-Coherence. In spite of these discouraging worst-case results, our extensive empirical studies show that Estimate-Coherence performs quite well in practice on a variety of synthetic and real datasets with varying coherence, suggesting that the worst case addressed in theory and matched empirically in Figure 2 is rarely encountered in practice. We present these results in the remainder of this section.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 17,
"total_chunks": 25,
"char_count": 522,
"word_count": 78,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d0609ae2-1f78-4619-886b-32ebebe8626a",
"text": "5.1 Experiments with synthetic data\nWe first generated low-rank synthetic matrices with varying coherence and singular value spectra, with n = m = 1000 and r = 50. To control the low-rank structure of the matrix, we generated datasets with exponentially decaying singular values with differing decay rates, i.e., for i ∈ {1, . . . , r} we defined the ith singular value as σ_i = exp(−iη), where η controls the rate of decay and η_slow = .01, η_medium = .1, η_fast = .5. To control coherence, we independently generated left and right singular vectors with varying coherences by manually defining one singular vector and then using QR to generate r − 1 additional orthogonal vectors. We associated this coherence-inducing singular vector with the (r/2)th largest singular value. We defined our 'low' coherence model by forcing the coherence-inducing singular vector to have minimal coherence, i.e., setting each component equal to 1/√n. Using this as a baseline, we used 3 and 8 times this baseline to generate 'mid' and 'high' coherences (see Figure 3(a)). We then used Estimate-Coherence with varying numbers of sampled columns to estimate matrix coherence. Results reported in Figure 3(b-d) are means and standard deviations of 10 trials for each value of l.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 18,
"total_chunks": 25,
"char_count": 1244,
"word_count": 201,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3502bd0a-3a13-4372-a3ef-7055614572c2",
"text": "Although the coherence estimate converges faster for the low coherence matrices, the results show that even for the high coherence matrices, Estimate-Coherence recovers the true coherence after sampling only r columns. Further, we note that the singular value spectrum influences the quality of the estimate. This observation is due to the fact that the faster the singular values decay, the greater the impact of the (r/2)th largest singular value, which is associated with the coherence-inducing singular",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 19,
"total_chunks": 25,
"char_count": 501,
"word_count": 76,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8e14e090-1288-4f92-b161-b7f7a2404152",
"text": "[Figure 3 panels: (a) 'Exact Gamma of Synthetic Datasets'; (b-d) 'Gamma Estimation Error' for decay = SLOW, MEDIUM, FAST; (e-f) 'Gamma Estimation Error' for noise = SMALL, LARGE; each panel plots Exact − Approx gamma for the 'low', 'mid' and 'high' coherence models against # of Columns Sampled.]\nFigure 3: Experiments with synthetic matrices. (a) True coherence associated with 'low', 'mid' and 'high' coherences. (b-d) Exact low-rank experiments measuring the difference between the exact coherence and the estimate produced by Estimate-Coherence. (e-f) Experiments with low-rank matrices in the presence of noise, comparing exact and estimated coherence with two different levels of noise.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 20,
"total_chunks": 25,
"char_count": 1097,
"word_count": 202,
"chunking_strategy": "semantic"
},
{
"chunk_id": "acd94bfe-c946-4aca-b4d5-37084441ddb9",
"text": "vector, and hence the more likely it will be captured by sampled columns. Next, we examined the scenario of low-rank matrices with noise, working with the 'MEDIUM' decay matrices used in the low-rank experiments. To create a noisy matrix from each original low-rank matrix, we first used the QR\nDataset | Type of data | # Points (n) | # Features (d) | Kernel\nNIPS | bag of words | 1500 | 12419 | linear\nPIE | face images | 2731 | 2304 | linear\nMNIST | digit images | 4000 | 784 | linear\nEssential | proteins | 4728 | 16 | RBF\nAbalone | abalones | 4177 | 8 | RBF\nDexter | bag of words | 2000 | 20000 | linear\nKIN-8nm | kinematics of robot arm | 2000 | 8 | polynomial\nTable 1: Description of real datasets used in our coherence experiments, including the type of data, the number of points (n), the number of features (d) and the choice of kernel (Asuncion and Newman, 2007; Gustafson et al., 2006; LeCun and Cortes, 1998; Sim et al., 2002).",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 21,
"total_chunks": 25,
"char_count": 877,
"word_count": 151,
"chunking_strategy": "semantic"
},
{
"chunk_id": "010b8e65-2c44-4266-9eb0-a37fd9e03386",
"text": "algorithm to find a full orthogonal basis containing the r left singular vectors of the original matrix, and used it as our new set of left singular vectors (we repeated this procedure to obtain the right singular vectors). We then defined each of the remaining n − r singular values of our noisy matrix to equal some fraction of the rth singular value of the original matrix (0.1 for 'SMALL' noise and 0.9 for 'LARGE' noise). The performance of Estimate-Coherence on these noisy matrices is presented in Figure 3(e-f), where results are means and standard deviations over 10 trials for each value of l. The presence of noise clearly has a negative effect on performance, yet the estimates are quite accurate for l = 2r in the 'SMALL' noise scenario, and even for the high-coherence matrices with 'LARGE' noise, the estimate is fairly accurate when l ≥ 4r. 5.2 Experiments with real data We next performed experiments using the datasets listed in Table 1. We used a variety of kernel functions to generate SPSD kernel matrices from these datasets, and the resulting kernel matrices vary considerably in coherence (see Figure 4(a)). We then ran Estimate-Coherence with r set to the number of singular values needed to capture 99% of the spectral energy of each kernel matrix.",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 22,
"total_chunks": 25,
"char_count": 1261,
"word_count": 212,
"chunking_strategy": "semantic"
},
{
"chunk_id": "94751f7f-a10e-48f0-98a1-17bb134b999f",
"text": "Figure 4(b) shows the estimation error over 10 trials. Although the coherence is well estimated across datasets when l ≥ 100, the estimates for the two high-coherence datasets (nips and dext) converge most slowly and exhibit the most variance across trials. Next, we performed spectral reconstruction using the Nyström method and matrix projection reconstruction using the Column-sampling method, and report results over 10 trials in Figure 4(c-d). The results clearly illustrate the connection between matrix coherence and the quality of these low-rank approximation techniques, as the two high-coherence datasets exhibit significantly worse performance than the remaining datasets. [Figure 4 plot panels (a)-(d); panel titles: 'Exact Gamma of Real Datasets', 'Gamma Estimation Error', 'Spectral Reconstruction Error', 'Matrix Projection Error'; series: nips, pie, mnis, ess, abn, dext, kin; x-axis: '# of Columns Sampled'; y-axis in (c-d): 'Normalized Error'.]",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 23,
"total_chunks": 25,
"char_count": 1156,
"word_count": 190,
"chunking_strategy": "semantic"
},
{
"chunk_id": "17e0cb34-85e3-4430-b235-14d2cdded4c7",
"text": "Figure 4: Experiments with real data. (a) True coherence of each kernel matrix K. (b) Difference between the true coherence and the estimated coherence. (c-d) Quality of two types of low-rank matrix approximations K̃, where 'Normalized Error' equals ∥K − K̃∥F / ∥K∥F. We proposed a novel algorithm to estimate matrix coherence. Our theoretical analysis shows that Estimate-Coherence provides good estimates for relatively low-coherence matrices, and, more generally, that its effectiveness is tied to coherence itself. We corroborate this finding for high-coherence matrices with an adversarially chosen dataset and sampling scheme. Empirically, however, our algorithm efficiently and accurately estimates coherence across a wide range of datasets, and these estimates are excellent predictors of the effectiveness of sampling-based matrix approximation. We believe that our algorithm should be run whenever low-rank matrix approximation is being considered, to determine its applicability on a case-by-case basis. Moreover, the variance of coherence estimates across multiple samples may provide further information, and the use of multiple samples fits nicely into the framework of ensemble methods for low-rank approximation, e.g., Kumar et al. (2009a).",
"paper_id": "1009.0861",
"title": "On the Estimation of Coherence",
"authors": [
"Mehryar Mohri",
"Ameet Talwalkar"
],
"published_date": "2010-09-04",
"primary_category": "stat.ML",
"arxiv_url": "http://arxiv.org/abs/1009.0861v1",
"chunk_index": 24,
"total_chunks": 25,
"char_count": 1247,
"word_count": 173,
"chunking_strategy": "semantic"
}
]