G-CEALS: Gaussian Cluster Embedding in Autoencoder Latent Space for Tabular Data Representation

Manar D. Samad and Sakib Abrar
Department of Computer Science
Tennessee State University
Nashville, TN, USA
msamad@tnstate.edu

January 3, 2023

ABSTRACT

The latent space of autoencoders has been improved for clustering image data by jointly learning a t-distributed embedding with a clustering algorithm, inspired by the neighborhood embedding concept proposed for data visualization. However, multivariate tabular data pose different challenges in representation learning than image data, where traditional machine learning is often superior to deep tabular data learning. In this paper, we address the challenge of learning tabular data in contrast to image data and present a novel Gaussian Cluster Embedding in Autoencoder Latent Space (G-CEALS) algorithm by replacing t-distributions with multivariate Gaussian clusters.
Unlike current methods, the proposed method defines the Gaussian embedding and the target cluster distribution independently to accommodate any clustering algorithm in representation learning. A trained G-CEALS model extracts a quality embedding for unseen test data. Based on embedding clustering accuracy, the average rank of the proposed G-CEALS method is 1.4 (0.7), which is superior to all eight baseline clustering and cluster embedding methods on seven tabular data sets. This paper presents one of the first algorithms to jointly learn embedding and clustering to improve the representation of multivariate tabular data in downstream clustering.

Keywords: embedding clustering, tabular data, Gaussian clusters, autoencoder, representation learning, multivariate distribution

1 Introduction

Deep learning has replaced traditional machine learning in many data-intensive research areas and applications due to its ability to perform concurrent and efficient representation learning and classification.
This concurrent learning approach outperforms traditional machine learning, which requires handcrafted features to perform supervised classification [1, 2]. However, representation learning via supervisory signals from ground-truth labels may be prone to overfitting [3] and adversarial attacks [4]. Moreover, human annotations for supervised representation learning and classification may not be available in all data domains or for all data samples. To address these pitfalls, representation learning via unsupervised clustering algorithms may be a strong alternative to supervised learning methods. The limitations of supervised representation learning may be overcome using self-supervision or pseudo-labels that do not require human-annotated supervisory signals [5, 6]. In a self-supervised autoencoder, the objective is to preserve all information of the input data in a low-dimensional embedding for data reconstruction. However, embeddings optimized for data reconstruction do not emphasize representations essential for downstream classification or clustering tasks.
Therefore, unsupervised methods have been proposed for jointly learning embedding with clustering to yield clustering-friendly representations [7, 8, 9, 10, 11]. The existing cluster embedding literature makes several strict assumptions about clustering algorithms (k-means), cluster distributions (t-distribution), and data modality (image data). While deep representation learning of image data is well studied using convolutional neural networks (CNN), deep learning has not seen much success with structured tabular data. There is strong evidence in the literature that traditional machine learning still outperforms deep models in learning tabular data [12, 13, 14, 15, 16]. In this paper, we review the assumptions made in the cluster embedding literature and revise those assumptions for the representation learning of tabular data.

arXiv:2301.00802v1 [cs.LG] 2 Jan 2023
Accordingly, a novel joint learning framework is proposed considering the architectural and algorithmic differences in learning image and tabular data. The remainder of this manuscript is organized as follows. Section 2 provides a review of the state-of-the-art literature on deep cluster embedding. Section 3 introduces tabular data with some theoretical underpinnings of neighborhood embedding and cluster embedding in support of our proposed representation learning framework. Section 4 outlines the proposed joint cluster embedding framework to obtain a quality representation of tabular data for downstream clustering or classification. Section 5 summarizes the tabular data sets and experiments for evaluating the proposed joint learning framework. Section 6 provides the results of the experiments and compares our proposed method with similar methods in the literature.
Section 7 summarizes the findings with additional insights into the results and limitations. The paper concludes in Section 8.

2 Related work

One of the earliest studies on cluster embedding, Deep Embedded Clustering (DEC) [7], is inspired by the seminal work on t-distributed stochastic neighborhood embedding (t-SNE) [17]. The DEC approach first trains a deep autoencoder by minimizing the data reconstruction loss. The trained encoder part (excluding the decoder) is then fine-tuned by minimizing the Kullback-Leibler (KL) divergence between a t-distributed cluster distribution (Q) on the embedding and a target distribution (P). The target distribution is obtained via a closed-form solution by taking the first derivative of the KL divergence loss between the P and Q distributions with respect to P and equating it to zero. Therefore, the assumption of t-distribution holds for both the Q and P distributions in similar work.
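The DEC fine-tuning step described above can be sketched in a few lines. This is a minimal NumPy illustration (the function names are ours); the back-propagation of the KL loss through the encoder weights is omitted:

```python
import numpy as np

def soft_assignments(Z, centroids, alpha=1.0):
    """Student's t-distributed similarity q_ij between embedded points z_i
    and cluster centroids mu_j (alpha = degrees of freedom, 1 in DEC)."""
    # Squared Euclidean distances, shape (n_samples, n_clusters).
    d2 = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)  # normalize over clusters

def target_distribution(q):
    """DEC's self-training target: square q_ij, divide by the soft cluster
    frequency f_j = sum_i q_ij, then renormalize each row."""
    weight = q ** 2 / q.sum(axis=0)
    return weight / weight.sum(axis=1, keepdims=True)

def kl_divergence(p, q):
    """KL(P || Q) summed over samples and clusters (the fine-tuning loss)."""
    return float((p * np.log(p / q)).sum())
```

During fine-tuning, `q` is recomputed from the current encoder output at every step, while the target `p` is refreshed from `q` only periodically so that the encoder self-trains toward sharper cluster assignments.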
The k-means clustering in the DEC approach is later replaced by spectral clustering to improve the quality of the embedding in terms of clustering performance [18]. The DEC approach is also enhanced by an improved DEC (IDEC) framework [8]. In IDEC, the autoencoder reconstruction loss and the KL divergence loss are jointly minimized to update the weights of a deep autoencoder and produce the embedding. Similar approaches, including t-distributions, k-means clustering, and KL divergence loss, are adopted in joint embedding and cluster learning (JECL) for multimodal representation learning of text-image data pairs [19]. The Deep Clustering via Joint Convolutional Autoencoder (DEPICT) approach learns image embedding via a de-noising autoencoder [20]. The embedding is mapped to a softmax function to obtain a cluster distribution or likelihood (Q) instead of assuming a distribution.
Following a series of mathematical derivations and assumptions, their final learning objective includes a cross-entropy loss involving the P and Q distributions and an embedding reconstruction loss for each layer of the convolutional autoencoder. A general trend in the cluster embedding literature shows that k-means is the most common clustering method [7, 8, 10, 9, 21, 19, 20]. The assumption of t-distributed cluster embedding made in the DEC method [7] continues to appear in the literature [22, 23, 8, 18, 19, 24] without any alternatives. The assumption of t-distribution is originally made in the t-SNE algorithm for data visualization using neighborhood embedding maps [17]. We argue that the assumptions of neighborhood embedding for data visualization are not aligned with the requirements of cluster embedding. Moreover, cluster embedding methods proposed in the literature are invariably evaluated on benchmark image data sets. The methods for image learning may not be optimal or even ready to learn tabular data representations.
To the best of our knowledge, similar cluster embedding methods have not been studied on multivariate tabular data.

2.1 Contributions

This paper is one of the first to investigate the performance of joint cluster embedding methods on tabular data. The limitations of state-of-the-art joint cluster embedding methods are addressed to contribute a new cluster embedding algorithm as follows. First, we replace the current assumption of t-distributed embedding with a mixture of multivariate Gaussian distributions for multivariate tabular data, providing a theoretical underpinning for this choice. Second, a new cluster embedding algorithm is proposed using multivariate Gaussian distributions that can jointly learn distributions with any clustering algorithm. Third, we define the target cluster distribution on the tabular data space instead of deriving it from the embedding because traditional machine learning of tabular data is still superior to deep learning and can add complementary benefits to the embedding learned via an autoencoder.
Therefore, our embedding and target distributions are independent of each other to flexibly learn any target cluster distribution depending on the application domain.

Factors             | Image data                            | Tabular data
--------------------|---------------------------------------|-------------------------------------------
Heterogeneity       | Homogeneous pixel distribution        | Heterogeneous or multivariate distribution
Spatial regularity  | Yes                                   | No
Sample size         | Large, >50,000                        | Small, median size ~660
Benchmark data set  | MNIST, CIFAR                          | No standard benchmark
Data dimensionality | High, >1000                           | Low, median 18
Best method         | Deep CNN                              | Traditional machine learning
Deep approaches     | Transfer learning, image augmentation | None

Table 1: Contrasts between image and tabular data that
require significant rework of deep architectures proposed for images in learning tabular data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' Median sample size and data dimensionality are obtained from 100 most downloaded tabular data sets from the UCI machine learning repository [25].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' 30 20 10 0 10 20 Projected Space 1 20 10 0 10 20 Projected Space 2 CNV DME Drusen Normal (a) t-SNE projected 50 0 50 100 150 Projected Space 1 50 0 50 100 150 Projected Space 2 CNV DME Drusen Normal (b) PCA projected Figure 1: Two-dimensional embeddings of high dimensional image features extracted from a deep convolutional neural network obtained from [26].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' 3 Theoretical background This section provides preliminaries on tabular data in contrast to image data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' We draw multiple contrasts between neighborhood embedding proposed for data visualization and cluster embedding proposed for representation learning to underpin our proposed approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content='1 Preliminaries A tabular data set is represented in a matrix X ∈ ℜn×d with n i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content='i.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content='d samples in rows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' Each sample (Xi) is represented by a d-dimensional feature vector, Xi ∈ ℜd = {x1, x2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' , xd}, where i = {1, 2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' , n}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' Compared to a pixel distribution P(I) of an image I, tabular data contain multivariate distributions P(x1, x2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQf5Pq2/content/2301.00802v1.pdf'} +page_content=' , xd) of heterogeneous variables in relatively much lower dimensions with limited samples.' 
Table 1 shows contrasts between image and tabular data. One may argue that some high-dimensional sequential data, such as genomics, and the MNIST images converted to pixel vectors can be structured as tabular data. However, these tabular representations still include regularity or homogeneity in patterns that do not pose the unique challenges of heterogeneous tabular data. Therefore, tabular data in business, health records, and many other domains fail to take advantage of deep convolutional learning due to the absence of sequential patterns or image-like spatial regularities. The current literature selectively chooses data sets with high dimensionality and large sample sizes to take full advantage of deep learning. In contrast, the most commonly studied tabular data sets are of low dimensionality and limited sample size (Table 1) and are almost never considered in deep representation learning.
Therefore, tabular data sets have been identified as the last "unconquered castle" for deep learning [15], where traditional machine learning methods are still competing strongly against advanced neural network architectures [15, 14]. As with image learning, there is a need for robust tabular data learning methods that can outperform the currently superior traditional machine learning or clustering methods.

3.2 Neighborhood embedding

A neighborhood embedding is a low-dimensional map that preserves the similarity between data points (xi and xj) observed in a higher dimension.
Maaten and Hinton propose a Student's t-distribution to model the similarity between samples in the neighborhood embedding (zi, zj) of high-dimensional data points (xi and xj) for data visualization [17].

                               | t-SNE [17]                      | DEC [7] / IDEC [8]
-------------------------------|---------------------------------|------------------------------------------
Purpose                        | Neighborhood embedding          | Cluster embedding
Low-dimensional embedding (zi) | Sampled from Gaussian, low σ²   | Autoencoder latent space
Distance or similarity measure | Between sample points (xi, xj)  | Between point & cluster centroid (xi, µj)
Embedding distribution (qij)   | t-distribution, α = 1           | t-distribution, α = 1
Target distribution (pij)      | Gaussian in high-dim. space (x) | A function of t-distributed qij
Learning                       | z_{i+1} = z_i + ∂KLD(p,q)/∂z_i  | w_{i+1} = w_i + ∂KLD(p,q)/∂w
Purpose                        | Visualization in d = 2          | Clustering in d > 2

Table 2: Comparison between neighborhood embedding proposed in t-SNE for data visualization [17] and cluster embedding proposed in DEC [7], inspired by t-SNE. α = degrees of freedom of the t-distribution, d = dimension of the low-dimensional embedding. w represents the trainable parameters of an autoencoder.

First, the similarity between two sample points (xi and xj) in the high dimension is modeled by a Gaussian distribution, pij, in Equation 1. A similar joint distribution can be defined for a pair of points in the low-dimensional embedding (zi, zj) as qij below.

p_ij = exp(−‖x_i − x_j‖² / 2σ²) / Σ_{k≠l} exp(−‖x_k − x_l‖² / 2σ²),   q_ij = exp(−‖z_i − z_j‖² / 2σ²) / Σ_{k≠l} exp(−‖z_k − z_l‖² / 2σ²)   (1)

The divergence between the target (pij) and embedding (qij) distributions is measured using a KL divergence loss, which is minimized to iteratively optimize the neighborhood embedding.

KL(P‖Q) = Σ_i Σ_j p_ij log(p_ij / q_ij)   (2)

To facilitate high-dimensional data visualization in two dimensions (2D), the embedding distribution (qij) is modeled by a Student's t-distribution, as shown in Equation 3.
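Equations 1 and 2 can be sketched as follows. This is a minimal NumPy illustration with our own function names, using a single global σ rather than t-SNE's per-point bandwidths:

```python
import numpy as np

def pairwise_gaussian_similarity(X, sigma=1.0):
    """Equation 1: a Gaussian kernel on pairwise squared distances,
    normalized over all pairs k != l (self-pairs excluded)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    affinity = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(affinity, 0.0)  # exclude i == j
    return affinity / affinity.sum()

def kl_loss(P, Q, eps=1e-12):
    """Equation 2: KL(P || Q) summed over all pairs (i, j) with p_ij > 0."""
    mask = P > 0
    return float((P[mask] * np.log((P[mask] + eps) / (Q[mask] + eps))).sum())
```

Here P would be computed once from the high-dimensional points, while Q would be recomputed from the low-dimensional embedding at every optimization step.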
One primary justification for the t-distribution is its heavier tails compared to a Gaussian distribution. A heavier tail aids in an efficient mapping of outliers observed in the high-dimensional space to the 2D space for data visualization.

q_ij = (1 + ‖z_i − z_j‖²)⁻¹ / Σ_{k≠l} (1 + ‖z_k − z_l‖²)⁻¹   (3)

Therefore, data points placed at a moderate distance in the high dimension are pulled farther apart by a t-distribution to aid visualization in 2D space. In the context of cluster embedding, we argue that this additional separation between points in low dimensions may alter their cluster assignments. To illustrate this phenomenon, we project high-dimensional deep convolutional image features onto 2D using 1) t-SNE and 2) two principal components, as shown in Figure 1. The scattering of data points is evident in the t-SNE mapping (Figure 1 (a)), where one blue point appears on the left side of the figure, leading to a wrong cluster assignment, unlike in the PCA mapping (Figure 1 (b)).
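A minimal sketch of Equation 3 (our own naming) also makes the heavy-tail argument concrete: at distance r = 4, the t-kernel 1/(1 + r²) ≈ 0.059 is orders of magnitude larger than the Gaussian kernel exp(−r²/2) ≈ 0.0003, so moderately distant points retain non-negligible similarity:

```python
import numpy as np

def pairwise_t_similarity(Z):
    """Equation 3: t-distributed similarity (1 + ||z_i - z_j||^2)^-1,
    normalized over all pairs k != l (self-pairs excluded)."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    kernel = 1.0 / (1.0 + d2)
    np.fill_diagonal(kernel, 0.0)
    return kernel / kernel.sum()
```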
In general, the expectations of data visualization and clustering tasks are different, as highlighted in Table 2, which should be considered in the respective representation learning.

3.3 Cluster embedding

Cluster embedding is achieved by infusing cluster separation information into the low-dimensional latent space. While neighborhood embedding is initialized by sampling from a Gaussian distribution, cluster embedding methods use the embedding learned in an autoencoder's latent space. However, current cluster embedding methods use the same t-distribution (Equation 3) to define the embedding distribution (qij), similar to neighborhood embedding. The target distribution (pij) is derived as a function of qij, as shown below.

s_ij = q_ij² / Σ_i q_ij,   p_ij = s_ij / Σ_j s_ij.   (4)
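As motivated in the abstract, G-CEALS replaces the t-kernel with multivariate Gaussian clusters on the embedding. A hypothetical sketch of such a Gaussian cluster distribution is shown below; the function name and the responsibility-style normalization are our assumptions, not the authors' exact formulation:

```python
import numpy as np

def gaussian_cluster_distribution(Z, means, covs):
    """Cluster responsibilities q_ij proportional to the multivariate
    Gaussian density N(z_i | mu_j, Sigma_j), normalized over clusters j."""
    n, d = Z.shape
    q = np.empty((n, len(means)))
    for j, (mu, cov) in enumerate(zip(means, covs)):
        diff = Z - mu
        inv = np.linalg.inv(cov)
        norm = 1.0 / np.sqrt(((2.0 * np.pi) ** d) * np.linalg.det(cov))
        # Mahalanobis term diff^T Sigma^-1 diff for every sample at once.
        maha = np.einsum('nd,dk,nk->n', diff, inv, diff)
        q[:, j] = norm * np.exp(-0.5 * maha)
    return q / q.sum(axis=1, keepdims=True)
```

Unlike the fixed t-kernel, the per-cluster means and covariances here could be supplied by any clustering algorithm run on the embedding.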
While pair-wise sample distances in neighborhood embedding have a complexity of O(N²), the distances from the centroids in cluster embedding are O(N·K). Here, K is the number of clusters, which is much smaller than the number of samples (N). While an outlier point results in N large distances (extremely small pij values) in neighborhood embedding, there will be much fewer (K<