[
{
"chunk_id": "29bbf4f8-77e8-47c0-bea0-604eb2c5a675",
"text": "Learning Graph-Level Representations with\nRecurrent Neural Networks JaJa\nDepartment of Electrical and Computer Engineering\nInstitute for Advanced Computer Studies\n2018 University of Maryland, College Park\nEmail: yuj@umd.edu, joseph@umiacs.umd.edu\nSep\nSeptember 13, 2018",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 0,
"total_chunks": 23,
"char_count": 269,
"word_count": 32,
"chunking_strategy": "semantic"
},
{
"chunk_id": "02c537ba-551d-44f3-a727-9bc92aa12f64",
"text": "Recently a variety of methods have been developed to encode graphs into lowdimensional vectors that can be easily exploited by machine learning algorithms. The[cs.LG] majority of these methods start by embedding the graph nodes into a low-dimensional\nvector space, followed by using some scheme to aggregate the node embeddings. In\nthis work, we develop a new approach to learn graph-level representations, which includes a combination of unsupervised and supervised learning components. We start\nby learning a set of node representations in an unsupervised fashion. Graph nodes are\nmapped into node sequences sampled from random walk approaches approximated by\nthe Gumbel-Softmax distribution. Recurrent neural network (RNN) units are modified\nto accommodate both the node representations as well as their neighborhood information. Experiments on standard graph classification benchmarks demonstrate that our\nproposed approach achieves superior or comparable performance relative to the stateof-the-art algorithms in terms of convergence speed and classification accuracy. Machine learning on graphs has recently emerged as a powerful tool to solve\ngraph related tasks in various applications such as recommendation systems, quantum chemistry, and genomics [1, 2, 3, 4, 5, 6]. Excellent reviews of recent machine learning algorithms\non graphs appear in [1, 7]. However, in spite of the considerable progress achieved, deriving\ngraph-level features that can be used by machine learning algorithms remains a challenging\nproblem, especially for networks with complex substructures. In this work, we develop a new approach to learn graph-level representations of variablesize graphs based on the use of recurrent neural networks (RNN). RNN models with gated\nRNN units such as Long Short Term Memory (LSTM) have outperformed other existing\ndeep networks in many applications. These models have been shown to have the ability to",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 1,
"total_chunks": 23,
"char_count": 1923,
"word_count": 280,
"chunking_strategy": "semantic"
},
{
"chunk_id": "02791379-c963-4545-a74f-3bd23f9eeff4",
"text": "encode variable-size sequences of inputs into group-level representations; and to learn longterm dependencies among the sequence units. The key steps of our approach include a new\nscheme to embed the graph nodes into a low dimensional vector space, and a random walk\nbased algorithm to map graph nodes into node sequences approximated by the GumbelSoftmax distribution. More specifically, inspired by the continuous bag-of-word (CBOW)\nmodel for learning word representation [8, 9], we learn the node representations based on\nthe node features as well as the structural graph information relative to the node. We\nsubsequently use a random walk approach combined with the Gumbel-Softmax distribution\nto continuously sample graph node sequences where the parameters are learned from the\nclassification objective. The node embeddings as well as the node sequences are used as\ninput by a modified RNN model to learn graph-level features to predict graph labels. We\nmake explicit modifications to the architectures of the RNN models to accommodate inputs\nfrom both the node representations as well as its neighborhood information. We note that\nthe node embeddings are trained in an unsupervised fashion where the graph structures\nand node features are used. The sampling of node sequences and RNN models form a\ndifferentiable supervised learning model to predict the graph labels with parameters learned\nfrom back-propagation with respect to the classification objective. The overall approach is\nillustrated in Figure 1. Our model is able to capture both the local information from the dedicated pretrained\nnode embeddings and the long-range dependencies between the nodes captured by the RNN\nunits. Hence the approach combines the advantages of previously proposed graph kernel\nmethods as well as graph neural network methods, and exhibits strong representative power\nfor variable-sized graphs. The main contributions of our work can be summarized as follows,",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 2,
"total_chunks": 23,
"char_count": 1954,
"word_count": 295,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3f220424-0040-4c99-91e7-b3cb88bcb8da",
"text": "• Graph recurrent neural network model. We extend the RNN models to learn\ngraph-level representations from samples of variable-sized graphs. • Node embedding method. We propose a new method to learn node representations\nthat encode both the node features and graph structural information tailored to the\nspecific application domain. • Map graph nodes to sequences. We propose a parameterized random walk approach with the Gumbel-Softmax distribution to continuously sample graph nodes\nsequences with parameters learned from the classification objective. • Experimental results.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 3,
"total_chunks": 23,
"char_count": 577,
"word_count": 82,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ba280a10-4eda-431b-9a4e-cfa77763284d",
"text": "The new model achieves improved performance in terms of\nthe classification accuracy and convergence rate compared with the state-of-the art\nmethods in graph classification tasks over a range of well-known benchmarks. Graph kernel methods are commonly used to compare graphs and derive graph-level representations. The graph kernel function defines a positive semi-definite similarity measure Training Graphs Node Embedding Graph-level\nrepresentation\nRNN\n0.34 0.89 0.18 1.05 8 Random\nG2 9 0.41 0.23 -0.19 0.69 Walk\n5 3\n0.88 -0.47 0.07 0.19 2 … … 4 -0.16 0.34 0.95 0.13 6 1\n0.21 -0.14 0.38 0.80 7 Node Sequence: 1 2 3 … 9\nNeighbors: ( 2 4 ) ( 3 1 )( 2 4 8 ) ( 4 8 ) Figure 1: Graph recurrent neural network model to learn graph-level representations. Step\n1: Node embeddings are learned from the graph structures and node features over the entire\ntraining samples. Step 2: Graph node sequences are continuously sampled from a random\nwalk method approximated by the Gumbel-Softmax distribution. Step 3: Node embeddings\nas well as the node sequences are used as input by a modified RNN model to learn graphlevel features to predict graph labels. Step 2 and 3 form a differentiable supervised learning\nmodel with both both random walk and RNN parameters learned from back-propagation\nwith respect to the classification objective. between arbitrary-sized graphs, which implicitly corresponds to a set of representations of\ngraphs. Popular graph kernels encode graph structural information such as the numbers\nof elementary graph structures including walks, paths, subtrees with emphasis on different\ngraph substructures [10, 11, 12]. Kernel based learning algorithms such as Support Vector\nMachine (SVM) are subsequently applied using the pair-wise similarity function to carry\nout specific learning tasks. 
The success of the graph kernel methods relies on the design of\nthe graph kernel functions, which often depend on the particular application and task under\nconsideration. Note that most of the hand-crafted graph kernels are prespecified prior to the\nmachine learning task, and hence the corresponding feature representations are not learned\nwith the classification objective [13].",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 4,
"total_chunks": 23,
"char_count": 2181,
"word_count": 342,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9d83137f-6196-43a3-8bf6-25194c53fe3b",
"text": "1.0.2 Graph convolutional neural networks Motivated by the success of convolutional neural networks (CNN), recent work has adopted\nthe CNN framework to learn graph representations in a number of applications [3, 14, 15,\n6, 5]. The key challenge behind graph CNN models is the generalization of convolution\nand pooling operators on the irregular graph domain. The recently proposed convolutional\noperators were mostly based on iteratively aggregating information from neighboring nodes\nand the graph-level representations are accumulated from the aggregated node embeddings\nthrough simple schemes or graph coarsening approaches [6, 16, 14]. 1.0.3 Recurrent neural networks on graphs Recurrent neural network (RNN) models have been successful in handling many applications, including sequence modeling applications thereby achieving considerable success in a number of challenging applications such as machine translation, sentence classification [17]. Our model is directly inspired by the application of RNN models in sentence classification\nproblems, which use RNN units to encode a sequence of word representations into group-level\nrepresentations with parameters learned from the specific tasks [18]. Several recent works have adapted the RNN model on graph-structured data. Li et al.\nmodified the graph neural networks with Gated Recurrent Units (GRU) and proposed a gated\ngraph neural network model to learn node representations [16]. The model includes a GRUlike propagation scheme that simultaneously updates each node's hidden state absorbing\ninformation from neighboring nodes. Jain et al. and Yuan et al. applied the RNN model\nto analyze temporal-spatial graphs with computer vision applications where the recurrent\nunits are primarily used to capture the temporal dependencies [19, 20]. You et al. proposed\nto use RNN models to generate synthetic graphs [21] trained on Breadth-First-Search (BFS)\ngraph node sequences. 
A graph is represented as the tuple G = (V, H, A, l) where V is the set of all graph nodes with\n|V | = n. H represents the node attributes such that each node attribute is a discrete element\nfrom the alphabet Σ, where |Σ| = k.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 5,
"total_chunks": 23,
"char_count": 2156,
"word_count": 325,
"chunking_strategy": "semantic"
},
{
"chunk_id": "56beb783-139b-447d-9d68-02ffb7051a38",
"text": "We assume that the node attributes are encoded with\none-hot vectors indicating the node label type, and hence H ∈Rn×k. Hi ∈Rk denotes the\none-hot vector for the individual node i corresponding to the ith. The matrix A ∈Rn×n is\nthe adjacent matrix. From the adjacent matrix, we denote Ns(i) as the set of neighbors of\nnode i with distance s from node i. l is the discrete graph label from a set of graph labels\nΣg. In this work, we consider the graph classification problem where the training samples\nare labeled graphs with different sizes. The goal is to learn graph level features and train a\nclassification model that can achieve the best possible accuracy efficiently. 3.1 Learning node embedding from graph structures The node embeddings are learned from the graph structures and node attributes over the\nentire training graphs. The main purpose is to learn a set of domain-specific node representations that encode both the node features and graph structural information. Inspired by the continuous Bag-of-Words (CBOW) model to learn word representations,\nwe propose a node embedding model with the goal to predict the central node labels from the\nembeddings of the surrounding neighbors [8, 9]. Specifically, we want to learn the embedding\nmatrix E ∈Rk×d such that each node i is mapped to a d-dimensional vector ei computed as\nei = HiE , and the weight vector w ∈RK representing the weights associated with the set\nof neighbor nodes N1, N2, ..., NK corresponding to different distances. The predictive model for any node i is abstracted as follows: 0.8\n(a) Categorical 0.6\n6 Sampling 0.4\n5 0.2\n3 v1 v2 v3 v4 v5 v6\n4 (b) Gumbel-Softmax 1\nApproximation 0.8\n1 0.6\n0.4\n0.2\nv1 v2 v3 v4 v5 v6 Figure 2: Example of the graph node sequences sampled from the random walk approach.\n(a) the node sequence consists of categorical samples from the random walk distribution (b)\nthe Gumbel-Softmax approximation of the node sequence. 
Yi = f(X (ws X HjE)) (1)\ns=1 j∈Ns(i)\nEach term ws Pj∈Ns(i) HjE corresponds to the sum of node embeddings from the set of\nneighbors that are s-distance to the center node i. f(·) is a differentiable predictive function\nand Yi ∈Rk corresponds to the predicted probability of the node type.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 6,
"total_chunks": 23,
"char_count": 2214,
"word_count": 380,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7b760014-a906-45a7-91f2-68f94ea3a676",
"text": "In the experiment,\nwe use a two-layer neural network model as the predictive function: Yi = Softmax(W2ReLU(W1F + b1) + b2) (2) where F = PKs=1(ws Pj∈Ns(i) hjE). The loss function is defined as the sum of the cross\nentropy error over all nodes in the training graphs, L = − X X Hi ln Yi (3)\nm=1 i∈Vm",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 7,
"total_chunks": 23,
"char_count": 298,
"word_count": 59,
"chunking_strategy": "semantic"
},
{
"chunk_id": "26df8552-777b-450a-aa5e-b654d4ec2cad",
"text": "3.1.1 Connection with existing node embedding models Previous work has described a number of node embedding methods to solve problems in\nvarious applications such as recommendation system and link prediction [22, 23]. Perozzi et al. and Grover et al. proposed DeepWalk and node2vec which use neighborhood information to derive the node embeddings inspired from Skip-Gram language models\n[24, 22]. Their objective is to preserve the similarity between nodes in the original network,\nwhich is obviously different from our proposed method. Our method has a similar formulation with Graph Convolutional Network (GCN) and GraphSAGE but the main difference is\nthat they explicitly include the central node embedding aggregated with neighboring nodes\nto predict the central node labels but our pretrained model only uses the neighbors of the\nnode information [6, 23]. In addition, their goal is to predict the node label for the unseen\nnodes in the network while ours is to learn a set of node representations that encode both\nnode and structural information. 3.2 Learning graph node sequences Since the next state of RNNs depends on the current input as well as the current state, the\norder of the data being processed is critical during the training task [25]. Therefore it is\nimportant to determine an appropriate ordering of the graph nodes whose embeddings, as\ndetermined by the first stage, will be processed by the RNN. As graphs with n nodes can be arranged into n! different sequences, it is intractable to\nenumerate all possible orderings. Hence we consider a random walk approach combined with\nthe Gumbel-Softmax distribution to generate continuous samples of graph node sequences\nwith parameters to be learned with the classification objective. 
We begin by introducing the weight matrix W ∈Rn×n with parameters C ∈RKrw and ϵ\ndefined as follows, ( Cs if j ∈Ns(i), s = 1, ..., Krw\nWij = (4)\nϵ otherwise In other words, W is parameterized by assigning the value Cs between nodes with distance\ns for s = 1, ..., Krw and ϵ for nodes with distance beyond Krw.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 8,
"total_chunks": 23,
"char_count": 2058,
"word_count": 339,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6e2deb64-03b9-4fec-b591-7a44dafc3d60",
"text": "The random walk transition\nmatrix P ∈Rn×n is defined as the softmax function over the rows of the weight matrix, exp Wij\nPij = (5)\nPnk=1 exp Wik\nIn the following, we use Pi and Wi to denote the vectors corresponding to ith rows of\nthe matrices P and W respectively. The notation Pij and Wij correspond to the matrix\nelements. The graph sequence denoted as S(G) = (vπ(1), ..., vπ(n)) consists of consecutive graph\nnodes sampled from the transition probability as in Equation 6. π(i) indicates the node index\nin the ith spot in the sequence, and (vπ(1), ..., v(π(n))) forms the permutation of (v1, ..., vn). Each of the node v ∈Rn corresponds to a one-hot vector with 1 at the selected node index. vπ(i) = Sample(Pπ(i−1)) (6) Note that the first (root) node is sampled uniformly over all of the graph nodes. However, sampling categorical variables directly from the random walk probabilities suffers from two major problems: 1) the sampling process is not inherently differentiable with\nrespect to the distribution parameters; 2) the node sequence may repeatedly visit the same\nnodes multiple times while not include other unvisited nodes. To address the first problem, we introduce the Gumbel-Softmax distribution to approximate samples from a categorical distribution[26, 27]. 
Considering the sampling of vπ(i)\nfrom probability Pπ(i−1) in Equation 6, the Gumbel-Max provides the following way to draw\nsamples from the random walk probability as vπ(i) = one hot(argj max(gj + log Pπ(i−1)j)) (7) Algorithm 1 Random Walk Algorithm to Sample Node Sequences with the GumbelSoftmax Distribution\n1: Input: G = (V, H, A, l), the set of neighbors for each node Ns(·), parameters C ∈RK\nand ϵ.\n2: Output: the node sequences, S(G) = (vπ(1), vπ(2), ..., vπ(n))\n( Ck if j ∈Nk(i)\n3: Initialize W ∈Rn×n with Wij =\nϵ otherwise\n4: Initialize Tj(0) = 1 and Pπ(0)j = n for j = 1, ..., n\n5: for i = 1, ..., n do\n6: vπ(i) = Gumbel Softmax(Pπ(i−1))\n7: T(i) = T(i −1) ⊙(1 −vπ(i))\n8: Wπ(i) = vπ(i) · W\n9: Pπ(i) = Softmax(Wπ(i−1) ⊙T(i −1))\n10: end for where gj's are i.i.d. samples drawn from Gumbel(0, 1) distribution1.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 9,
"total_chunks": 23,
"char_count": 2094,
"word_count": 364,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8542c3af-d992-415d-b74b-ba25d88058c8",
"text": "We further use softmax function as a continuous and differentiable approximation to arg max. The approximate\nsample is computed as, exp ((gj + log Pπ(i−1)j)/τ)\n˜vπ(i)j = (8)\nPnk=1 exp ((gk + log Pπ(i−1)k)/τ)\nThe softmax temperature τ controls the closeness between the samples from the GumbelSoftmax distribution and the one-hot representation. As τ approaches 0, the sample becomes\nidentical to the one-hot samples from the categorical distribution [26].",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 10,
"total_chunks": 23,
"char_count": 455,
"word_count": 69,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b74f8b4b-3957-4696-b2b9-524d2d9d3ad1",
"text": "To address the second problem, we introduce an additional vector T ∈Rn, which indicates\nthe status of whether specific nodes have been visited, to regularize the transition weight\nand the corresponding transition probability. Specifically, T is initialized to be all 1's and\nthe next node sequence vπ(i) is sampled from the following equations where ⊙represents\nelement-wise multiplication., Pπ(i−1) = Softmax(Wπ(i−1) ⊙T(i −1)) (9)\nvπ(i) = Sample(Pπ(i−1)) (10)\nT(i) = T(i −1) ⊙(1 −vπ(i)) (11) The introduction of the additional vector T reduces the probability of nodes which have\nbeen visited while the Gumbel-Softmax distributions still remain differentiable with respect\nto the parameters. The full algorithm of mapping graph nodes into sequences is described in Algorithm 1\nand an example is depicted in Figure 2. 1The Gumbel(0, 1) distribution can be sampled with inverse transform sampling by first drawing u from\nthe uniform distribution Uniform(0, 1) and computing the sample g = −log(−log(u))",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 11,
"total_chunks": 23,
"char_count": 1001,
"word_count": 152,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2edce394-06c6-48e4-8228-14d06e30708c",
"text": "Figure 3: Architecture of graph LSTM model with modified parts highlighted in red. 3.3 Recurrent neural network model on graphs We adapt the recurrent neural network model especially LSTM to accommodate both the\nnode attributes and the neighborhood information with the node sequences sampled from the\nrandom walk approach. As each element vπ(i) in the node sequence corresponds to a softmax\nover all the graph nodes, the input node feature denoted as evπ(i) and the neighborhood\nfeature denoted as Nbvπ(i) are computed as the weighted sum of the corresponding node and\nneighbor embeddings, evπ(i) = X (vπ(i)j · ej)\nj=1\nNbvπ(i) = X (vπ(i)j · Nbj)\nj=1 where ei is the representation of a node as generated by the first stage algorithm and\nNbi = Pj∈N1(t) ej as the aggregated neighborhood embeddings of node i. Given the state\nof the recurrent units defined by ht+1 = g(ht, xt), we modify the state update as ht+1 =\ng′(ht, xt, Nbt) to take into account both the node and neighborhood information. The graphlevel representation is formed as the sum of hidden units over all the sequence steps as follows. For the LSTM model, we propagate the neighbor information to all the LSTM gates\nwhich allows the neighborhood information to be integrated into the gate state as shown in The formulations are as follows, it = σ(Wiievπ(t) + WniNbvπ(t) + Whiht−1 + bi)\nft = σ(Wifevπ(t) + WnfNbvπ(t) + Whiht−1 + bf)\ngt = tanh(Wigevπ(t) + WngNbvπ(t) + Whght−1 + bg)\not = σ(Wioevπ(t) + WnoNbvπ(t) + Whoht−1 + bo)\nct = ftct−1 + itgt\nht = ottanh(ct) Other RNN architectures such as Gated Recurrent Units (GRU) can be similarly defined\nby propagating the neighbor information into the corresponding gates.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 12,
"total_chunks": 23,
"char_count": 1682,
"word_count": 287,
"chunking_strategy": "semantic"
},
{
"chunk_id": "42741f93-fe3b-4a3f-9661-dd9a54eed405",
"text": "3.4 Discriminative training A predictive model is appended on the graph-level representations to predict the graph label. In the experiment, we use a two-layer fully-connected neural network for discriminative\ntraining. All the parameters of recurrent neural networks, the random walk defined above as\nwell as the two-layer neural network predictive models are learned with the back-propagation\nfrom the loss function defined as the cross entropy error between the predicted label and the\ntrue graph label as in Equation 13. Lg = − X lm ln ym (13)\nm=1",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 13,
"total_chunks": 23,
"char_count": 551,
"word_count": 88,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1004eb15-cf57-4082-bfd2-579c5e3465db",
"text": "Datasets sample size average |V | average |E| max |V | max |E| node labels graph classes MUTAG 188 17.93 19.79 28 33 7 2 ENZYMES 600 32.63 62.14 126 149 3 6 NCI1 4110 29.87 32.3 111 119 37 2 NCI109 4127 29.68 32.13 111 119 38 2 DD 1178 284.32 715.66 5748 14267 82 2",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 14,
"total_chunks": 23,
"char_count": 265,
"word_count": 57,
"chunking_strategy": "semantic"
},
{
"chunk_id": "96a8ffb1-6a48-4128-a9d2-ed2b430ff642",
"text": "Table 1: Statistics of the graph benchmark datasets [28]. 3.5 Discussion on isomorphic graphs Most previous methods on extracting graph-level representations are designed to be invariant relative to isomorphic graphs, i.e. the graph-level representations remain the same for\nisomorphic graphs under different node permutations. Therefore the common practice to\nlearn graph-level representations usually involves applying associative operators, such as\nsum, max-pooling, on the individual node embeddings [4, 3]. Datasets WL subtree WL edge WL sp PSCN DE-MF DE-LBP DGCNN GraphLSTM MUTAG 82.05 81.06 83.78 92.63 87.78 88.28 85.83 93.89 ENZYMES 52.22 53.17 59.05 NA 60.17 61.10 NA 65.33 NCI1 82.19 84.37 84.55 78.59 83.04 83.72 74.44 82.73 NCI109 82.46 84.49 83.53 78.59 82.05 82.16 NA 82.04 DD 79.78 77.95 79.43 77.12 80.68 82.22 79.37 84.90",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 15,
"total_chunks": 23,
"char_count": 839,
"word_count": 125,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2d08edbf-e989-4217-b469-32068372ab91",
"text": "Table 2: 10-fold cross validation accuracy on graph classification benchmark datasets. However as we observe that for node-labeled graphs, isomorphic graphs do not necessarily\nshare the same properties. For example, graphs representing chiral molecules are isomorphic but do not necessarily have the same properties, and hence may belong to different\nclasses [29]. In addition, the restrictions on the associative operators limit the possibility\nto aggregate effective and rich information from the individual units. Recent work, such as\nthe recently proposed GraphSAGE model, has shown that non-associative operators such as\nLSTM on random sequences performs better than some of the associative operators such as\nmax-pooling aggregator at node prediction tasks [1]. Our models rely on RNN units to capture the long term dependencies and to generate\nfixed-length representations for variable-sized graphs. Due to the sequential nature of RNN,\nthe model may generate different representations under different node sequences. However\nwith the random walk based node sequence strategy, our model will learn the parameters of\nthe random walk approach that generates the node sequences to optimize the classification\nobjective. The experimental results in the next section will show that our model achieves\ncomparable or better results on benchmark graph datasets than previous methods including\ngraph kernel methods as well as the graph neural network methods. 4 Experimental Results We evaluate our model against the best known algorithms on standard graph classification\nbenchmarks. We also provide a discussion on the roles played by the different components\nin our model. All the experiments are run on Intel Xeon CPU and two Nvidia Tesla K80\nGPUs. The codes will be publicly available.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 16,
"total_chunks": 23,
"char_count": 1786,
"word_count": 267,
"chunking_strategy": "semantic"
},
{
"chunk_id": "65f30b73-79c2-41fe-8a7f-c8698e21a8bc",
"text": "4.1 Graph classification The graph classification benchmarks contain five standard datasets from biological applications, which are commonly used to evaluate graph classification methods [30, 4, 28]. MUTAG,\nNCI1 and NCI109 are chemical compounds datasets while ENZYMES and DD are protein\ndatasets [31, 32]. Relevant information about these datasets is shown in Table 1. We use\nthe same dataset setting as in Shervashidze et al. and Dai et al. [30, 4]. 0.4 0.4 Accuracy Accuracy 0.2 0.2\nDE-MF test accuracy DE-MF test accuracy\nGraphLSTM test accuracy GraphLSTM test accuracy\n0 0\n0 50 100 150 200 0 50 100 150 200\nNumber of epochs Number of epochs Figure 4: Classification accuracy on ENZYMES and MUTAG datasets with the number of\nepochs comparing GraphLSTM and DE-MF models. [4]. split into 10 folds and the classification accuracy is computed as the average of 10-fold cross\nvalidation. 4.1.2 Model Configuration",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 17,
"total_chunks": 23,
"char_count": 912,
"word_count": 146,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0c87f926-ff58-4b08-8d3c-1d807ad6f49e",
"text": "In the pretrained node representation model, We use K = 2 with the number of epochs set\nto 100. Following the standard practice, we use ADAM optimizer with initial learning rate\nas 0.0001 with adaptive decay [33]. The embedding size d is selected from {16, 32, 64, 128}\ntuned for the optimal classification performance. For the random walk approach to sample node sequences, we choose the softmax temperature τ from {0.5, 0.01, 0.0001} and Krw = 2. The parameters of C are initialized uniformly\nfrom 0 to 5 and ϵ is uniformly sampled from −1 to 1.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 18,
"total_chunks": 23,
"char_count": 547,
"word_count": 97,
"chunking_strategy": "semantic"
},
{
"chunk_id": "21c9bd7f-1f5f-47a5-89f4-c52ae8d4a646",
"text": "For the graph RNN classification model, we use LSTM units as the recurrent units to\nprocess the node sequences; we term this model GraphLSTM. The dimension of the hidden\nunit in the LSTM is selected from {16, 32, 64}. A two-layer neural network is used\nas the classification model for the graph labels, with the dimension of its hidden unit also\nselected from {16, 32, 64}. ADAM is used for optimization with an initial learning rate of 0.0001 and adaptive decay [33]. The baseline algorithms are the Weisfeiler-Lehman (WL) graph kernels, including the subtree\nkernel, edge kernel and shortest path (sp) kernel, which capture different graph structures [30];\nthe PSCN algorithm, based on graph CNN models [15]; the structure2vec models\nDE-MF and DE-LBP [4]; and DGCNN, based on a new deep graph convolutional neural\nnetwork architecture [34].",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 19,
"total_chunks": 23,
"char_count": 865,
"word_count": 140,
"chunking_strategy": "semantic"
},
{
"chunk_id": "77d9430a-5f2c-42ba-af2a-f5bc75dabf29",
"text": "Table 2 shows the classification accuracy on the five benchmark datasets. Our proposed model achieves comparable or superior results relative to the state-of-the-art. [Figure 5: Classification accuracy on benchmark datasets with different embedding methods.] Figure 4 further compares the classification accuracy of the GraphLSTM model and the DE-MF model as a function of the number of epochs. Note that the DE-MF model is representative\nof recent algorithms that tackle graph problems with convolutional neural network structures. On the ENZYMES and MUTAG datasets, the GraphLSTM model achieves significantly\nbetter results in terms of both classification accuracy and convergence speed, which\ndemonstrates the advantage of RNN models for learning graph representations in classification\ntasks. 4.2 Discussion of the graph RNN model In this part, we discuss several important factors that affect the performance of our proposed\ngraph RNN model. 4.2.1 Node representations Node embeddings, as the direct inputs to the RNN model, are important to the performance\nof graph RNN models [35]. We evaluate the effect of different definitions of node representations on the performance of the GraphLSTM model. We compare the set of pretrained\nembeddings against the baseline node representations briefly described below. • Raw features: the one-hot vectors H ∈ R^{n×k} are directly used as the node representations. • Randomized embedding: the embedding matrix E ∈ R^{k×d} is randomly initialized with\ni.i.d. samples from the normal distribution with mean 0 and variance 1.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 20,
"total_chunks": 23,
"char_count": 1781,
"word_count": 275,
"chunking_strategy": "semantic"
},
{
"chunk_id": "226145c0-3a94-4113-8a38-891671b68a21",
"text": "Figure 5 compares the performance obtained with the different definitions of\nnode embeddings. The model with pretrained embeddings of the graph nodes outperforms the other embeddings by a large margin. Hence the pretrained embeddings learned from graph\nstructural information are effective for graph RNN models in learning graph representations.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 21,
"total_chunks": 23,
"char_count": 349,
"word_count": 50,
"chunking_strategy": "semantic"
},
{
"chunk_id": "144c66a7-ef2e-4e8d-8276-07e26083ada7",
"text": "[Figure 6: Classification accuracy on benchmark datasets with different node ordering methods.] We evaluate the impact of different node ordering methods on the prediction accuracy, as\nshown in Figure 6. The baseline methods are random permutation, Breadth First\nSearch (BFS), and Depth First Search (DFS). Note that the baseline methods determine the\nnode sequences independently of the classification task. The results show that the order of the\nnode sequence affects the classification accuracy by a large margin. The RNN model with\nthe parameterized random walk approach achieves the best results compared with the other\nordering methods. Random permutation yields the worst performance, which suggests that\npreserving the proximity relationships of the original graph in the node sequence is important\nfor learning effective node representations in classification tasks. In this work, we propose a new graph learning model that directly learns graph-level representations with a recurrent neural network from samples of variable-size graphs. New node embedding methods are proposed to embed graph nodes into a high-dimensional\nvector space that captures both the node features and the structural information. We\npropose a random walk approach with the Gumbel-Softmax approximation to generate continuous samples of node sequences, with parameters learned from the classification objective. Empirical results show that our model outperforms the state-of-the-art methods in graph\nclassification. We also discuss our model in terms of the node embeddings and\nthe order of the node sequences, which illustrates the effectiveness of the strategies used.",
"paper_id": "1805.07683",
"title": "Learning Graph-Level Representations with Recurrent Neural Networks",
"authors": [
"Yu Jin",
"Joseph F. JaJa"
],
"published_date": "2018-05-20",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07683v4",
"chunk_index": 22,
"total_chunks": 23,
"char_count": 1868,
"word_count": 284,
"chunking_strategy": "semantic"
}
]