[
{
"chunk_id": "d06e1a9a-6839-40a6-b4e9-ecbf98ec6f98",
"text": "Deep Loopy Neural Network Model for Graph\nStructured Data Representation Learning Jiawei Zhang\n⋆IFM Lab, Florida State University, FL, USA\njzhang@cs.fsu.edu\n2019 Abstract Existing deep learning models may encounter great challenges in handling graph\nstructured data. In this paper, we introduce a new deep learning model for graphSep\ndata specifically, namely the deep loopy neural network. Significantly different\n5 from the previous deep models, inside the deep loopy neural network, there exist\na large number of loops created by the extensive connections among nodes in the\ninput graph data, which makes model learning an infeasible task. To resolve such\na problem, in this paper, we will introduce a new learning algorithm for the deep\nloopy neural network specifically. Instead of learning the model variables based\non the original model, in the proposed learning algorithm, errors will be back-[cs.LG] propagated through the edges in a group of extracted spanning trees. Extensive\nnumerical experiments have been done on several real-world graph datasets, and\nthe experimental results demonstrate the effectiveness of both the proposed model\nand the learning algorithm in handling graph data. Formally, a loopy neural network denotes a neural network model involving loops among neurons in\nits architecture. Deep loopy neural network is a novel learning model proposed for graph structured\ndata specifically in this paper. 
Given a graph data G = (V, E) (where V and E denote the node and\nedge sets respectively), the architecture of deep neural nets constructed for G can be described as\nfollows: DEFINITION 1 (Deep Loopy Neural Network): Formally, we can represent a deep loopy neural More specifically, the neurons covered in set N can be categorized into several layers: (1) input\nlayer X, (2) hidden layers H = Skl=1 H(l), and (3) output layer Y, where X = {xi}vi∈V, H(l) =\n{h(l)i }vi∈V, ∀l ∈{1, 2, · · · , k}, Y = {yi}vi∈V and k denotes the hidden layer depth. Vector xi ∈\nRn, h(l)i ∈Rm(l) and yi ∈Rd denote the feature, hidden state (at layer l) and label vectors of\nnode vi ∈V in the input graph respectively. Meanwhile, the connections among the neurons in L\ncan be divided into (1) intra-layer neuron connections La (which connect neurons within the same\nlayers), and (2) inter-layer neuron connections Le (which connect neurons across layers), where\nLa = Skl=1{(h(l)i , h(l)j )}(vi,vj)∈E and Le = Lx,h1 ∪Lh1,h2 ∪· · · ∪Lhk,y.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 0,
"total_chunks": 25,
"char_count": 2442,
"word_count": 398,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5970990d-e282-4431-a095-5981a4effa42",
"text": "In the notation, Lx,h1\ncovers the connections between neurons in the input layer and the first hidden layer, and so forth for\nthe other neuron connection sets. In Figure 1, we show an example of a deep loopy neural network model constructed for the input graph as shown in the left plot. In the example, the input Input Graph Output\nLayer y6 y5 y4\nv5 y1 y2 y3 Hidden v6 h6K h5K Layer K h4K K\nv4 h1 h2 K h3K\nv1 … … … … … … … Hidden 1 1\nh6 h5 h41 Layer 1 1 v3 h1 1 1 h2 h3 Input x6 x5 x4 v2 Layer\nx2 x3 x1 Figure 1: Overall Architecture of Deep Loopy Neural Network Model. graph can be represented as G = (V, E), where V = {v1, v2, · · · , v6} and E =\n{(v1, v2), (v1, v3), (v1, v4), (v2, v3), (v3, v4), (v4, v5), (v5, v6)}. The constructed deep loopy neural network model involves k layers and the intra-layer neuron connections mainly exist in these\nhidden layers respectively. Formally, given the node input feature vector xi for node vi, we can represents its hidden states at\ndifferent hidden layers and the output label as follows:",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 1,
"total_chunks": 25,
"char_count": 1034,
"word_count": 208,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e0c02a79-112b-4f90-bd3c-460ae350f97e",
"text": "h(1)i = σ Wxxi + bx + Pvj∈Γ(vi)(Wh1hj + bh1) ,\n · · · (1)\nh(k)i = σ Whk−1,hkh(k−1)i + bhk−1,hk + Pvj∈Γ(vi)(Whkhj + bhk) ,\nyi = σ(Wyh(k)i + by), where set Γ(vi) = {vj|vj ∈V ∧(vi, vj) ∈E} denotes the neighbors of node vi in the input graph. Compared against the true label vector of nodes in the network, e.g., ˆyi for vi ∈V, we can represent\nthe introduced loss by the model on the input graph as E(G) = X E(vi) = X loss(ˆyi, yi), (2)\nvi∈V vi∈V where different loss functions can be adopted here, e.g., mean square loss or cross-entropy loss. By minimizing the loss function, we will be able to learn the variables involved in the model. By\nthis context so far, most of the deep neural network model training is based on the error back propagation algorithm. However, applying the error back propagation algorithm to train the deep loopy\nneural network model will encounter great challenges due to the extensive variable dependence\nrelationships created by the loops, which will be illustrated in great detail later.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 2,
"total_chunks": 25,
"char_count": 1024,
"word_count": 183,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0fcc9937-9ca6-4e6c-93f9-16f0d79d2eda",
"text": "The following part of this paper is organized as follows. In Section 2, we will introduce the existing\nrelated works to this paper. We will analyze the challenges in learning deep loop neural networks\nwith error back-propagation algorithm in Section 3, and a new learning algorithm will be introduced\nin Section 4. Extensive numerical experiments will be provided to evaluate the model performance\nin Section 5, and finally we will conclude this paper in Section 6.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 3,
"total_chunks": 25,
"char_count": 465,
"word_count": 76,
"chunking_strategy": "semantic"
},
{
"chunk_id": "56f33f25-fa7e-480a-9d08-10346894cfe4",
"text": "Two research topics are closely related to this paper, including deep learning and network representation learning, and we will provide a brief overview of the existing papers published on these two\ntopics as follows. Deep Learning Research and Applications: The essence of deep learning is to compute hierarchical features or representations of the observational data [8, 16]. With the surge of deep learning research and applications in recent years, lots of research works have appeared to apply the deep\nlearning methods, like deep belief network [12], deep Boltzmann machine [24], deep neural network [13,15] and deep autoencoder model [26], in various applications, like speech and audio processing [7, 11], language modeling and processing [1, 19], information retrieval [10, 24], objective\nrecognition and computer vision [16], as well as multimodal and multi-task learning [29,30]. Network Embedding: Network embedding has become a very hot research problem recently, which\ncan project a graph-structured data to the feature vector representations. In graphs, the relation can\nbe treated as a translation of the entities, and many translation based embedding models have been\nproposed, like TransE [2], TransH [28] and TransR [17]. In recent years, many network embedding\nworks based on random walk model and deep learning models have been introduced, like Deepwalk\n[21], LINE [25], node2vec [9], HNE [4] and DNE [27]. Perozzi et al. extends the word2vec model\n[18] to the network scenario and introduce the Deepwalk algorithm [21]. Tang et al. [25] propose\nto embed the networks with LINE algorithm, which can preserve both the local and global network\nstructures. Grover et al. [9] introduce a flexible notion of a node's network neighborhood and design\na biased random walk procedure to sample the neighbors. Chang et al. [4] learn the embedding of\nnetworks involving text and image information. Chen et al. 
[6] introduce a task guided embedding\nmodel to learn the representations for the author identification problem. 3 Learning Challenges Analysis of Deep Loopy Neural Network In this part, we will analyze the challenges in training the deep loop neural network with traditional\nerror back propagation algorithm. To simplify the settings, we assume the deep loop neural network\nhas only one hidden layer, i.e., k = 1.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 4,
"total_chunks": 25,
"char_count": 2333,
"word_count": 364,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8155e373-5054-478e-bb01-ddd4b982f246",
"text": "According to the definition of the deep loopy neural network\nmodel provided in Section 1, we can represent the inferred labels for graph nodes, e.g., vi ∈V, as\nvector yi = [yi,1, yi,2, · · · , yi,d]⊤, and its jth entry yi(j) (or yi,j) can be denoted as (yi(j)= σ Pme=1 Wy(e, j) · hi(e) + by(j) , (3)\nhi(e)= σ Pnc=1 Wx(c, e) · xi(c) + bx(e) + Pvp∈Γ(vi) Pmf=1 Wh(f, e) · hp(f) + bh(e) . Here, we will use the mean square error as an example of the loss function, and the loss introduced\nby the model for node vi ∈V compared with the ground truth label ˆyi can be represented as 1 1\nE(vi) = ∥yi −ˆyi∥22 = X (yi(j) −ˆyi(j))2 . (4) 2 2\nj=1",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 5,
"total_chunks": 25,
"char_count": 634,
"word_count": 131,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7730d6a7-1d3d-44c1-8c04-690be3b453fd",
"text": "3.1 Learning Output Layer Variables The variables involved in the deep loopy neural network model can be learned by error back propagation algorithm. For instance, here we will use SGD as the optimization algorithm.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 6,
"total_chunks": 25,
"char_count": 215,
"word_count": 34,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b1f34ade-9288-4507-8a38-82a20a81aac5",
"text": "Given the node\nvi and its introduced error function E(vi), we can represent the updating equation for the variables\nin the output layer, e.g., Wy(e, j) and by(j), as follows:  ∂E(vi)\nWyτ(e, j) = Wyτ−1(e, j) −ηwτ · ∂Wyτ−1(e,j), \n∂E(vi) (5)\nbyτ(j) = byτ−1(j) −ηbτ · ∂byτ−1(j).  where ηwτ and ηbτ denote the learning rates in updating Wy and by at iteration τ respectively. According to the derivative chain rule, we can represent the partial derivative terms ∂Wy∂E(vi)τ−1(e,j) and\n∂E(vi)\nas follows respectively: ∂by( j)  ∂E(vi) ∂E(vi) ∂yi(j) ∂zhi (j) · · = = yi(j) −ˆyi(j) · yi(j) 1 −yi(j) · hi(e), ∂Wy(e,j) ∂yi(j) ∂zhi (j)  ∂Wy(e,j) (6)\n∂E(vi) ∂E(vi) ∂yi(j) ∂zhi (j)\n ∂by(j) = ∂yi(j) · ∂zhi (j) · ∂by(j) = yi(j) −ˆyi(j) · yi(j) 1 −yi(j) · 1,\nwhere term zhi (j) = Pme=1 Wy(e, j) · hi(e) + by(j).",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 7,
"total_chunks": 25,
"char_count": 802,
"word_count": 148,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5f19d0e9-6965-4450-8560-7179e24946c9",
"text": "3.2 Learning Hidden Layer and Input Layer Variables Meanwhile, when updating the variables in the hidden and output layers, we will encounter great\nchallenges in computing the partial derivatives of the error function regarding these variables. Given\ntwo connected nodes vi, vj ∈V, where (vi, vj) ∈E, from the graph, we have the representation of\ntheir hidden state vectors hi and hj as follows:\nhi = σ Wxxi + bx + Whhj + bh + X (Whhk + bh) , (7)\nvk∈Γ(vi)\\{vj}\nhj = σ Wxxj + bx + Whhi + bh + X (Whhk′ + bh) . (8)\nv′k∈Γ(vj)\\{vi}\nWe can observe that hi and hj co-rely on each other in the computation, whose representation will be\ninvolved in an infinite recursive definition. When we compute the partial derivative of error function\nregarding variable Wx, bx or Wh, bh, we will have an infinite partial derivative sequence involving\nhi and hj according to the chair rule.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 8,
"total_chunks": 25,
"char_count": 870,
"word_count": 157,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0d46ae5e-17d6-4344-b390-9658548f1c77",
"text": "The problem will be much more serious for connected graphs,\nas illustrated by the following theorem. THEOREM 1 Let G denote the input graph. If graph G is connected (i.e., there exist a path connecting any pairs of nodes in the graph), for any node vi in the graph, its hidden state vector hi will\nbe involved in the hidden state representation of all the other nodes in the graph. PROOF 1 The Theorem can be proved via contradiction.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 9,
"total_chunks": 25,
"char_count": 434,
"word_count": 79,
"chunking_strategy": "semantic"
},
{
"chunk_id": "449be249-8206-4177-969f-63e8fb315749",
"text": "Here, we assume hidden state vector hi is not involved in the hidden representation of a certain\nnode vj, i.e., hj. Formally, given the neighbor set Γ(vj) of node vj in the network, we know that\nvector hj is defined based on the hidden state vectors {hk}vk∈Γ(vi). Therefore, we can assert that\nvector hi should also not be involved in the representation of all the nodes in Γ(vj). Viewed in this\nperspective, we can also show that hi is not involved in the hidden state representations of the 2-hop\nneighbors of node vj, as well as the 3-hop, and so forth. Considering that graph G is connected, we know there should exist a path of length p connecting vj\nwith vi, i.e., vi will be a p-hop neighbor of node vj, which will contract the claim hi is not involved\nin the hidden state representations of the p-hop neighbors of node vj. Therefore, the assumption we\nmake at the beginning doesn't hold and we will prove the theorem.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 10,
"total_chunks": 25,
"char_count": 925,
"word_count": 168,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e11f33c5-f416-4ff1-9c88-eaaa7a4befff",
"text": "According to the above theorem, when we try to compute the derivative of the error function regarding variable Wx, we will have\n∂E(vi) ∂hj ∂yi ∂hi ∂E(vi) ∂hi ∂hj ∂zhi · · · · = + X + X · · · · (9)\n∂Wx ∂yi ∂zhi ∂hi ∂Wx vj∈V ∂hj ∂Wx vk∈V ∂hk\nDue to the recursive definition of the hidden state vector {hi}vi∈V in the network, the partial derivative of term ∂E(vi)∂Wx will be impossible to compute mathematically. The partial derivative sequence\nwill extend to an infinite length according to Theorem 1. Similar phenomena can be observed when\ncomputing the partial derivative of the error function regarding variables bx or Wh, bh. To resolve the challenges introduced in the previous section, in this part, we will propose an approximation algorithm to learn the deep loopy neural network model. Given the complete model\narchitecture (which is also in a graph shape), to differentiate it from the input graph data, we will\nname it as the model graph formally, the neuron vectored involved in which are called the neuron\nnodes by default. From the model graph, we propose to extract a set of tree structured model diagrams rooted at certain neuron states in the model graph. Model learning will be mainly performed\non these extracted rooted trees instead. 4.1 g-Hop Model Subgraph and Rooted Spanning Tree Extraction Given the deep loopy neural network model graph, involving the input, output and hidden state\nvariables and the projections parameterized by the variables to be learned, we propose to extract a y6 y5 y4 3-hop subgraph 3-hop rooted tree\ny1 y2 y3 y1 y2 y3 y4 y1\nx1 h1\nh6 h5 h5\nh4 h4\nh2 h3 h4 h1 h3 h1 h3 x2 x3 x4\nh2 h2\nh1 h3 h1 h2 h4 h1 h3 h5\nx6 x5 x4 x4\nx3 x1 x2 x3 x1 x3 x1 x2 x4 x1 x3 x5 x1 x2 Figure 2: Structure of 3-Hop Subgraph and Rooted Spanning Tree at y1. (The bottom feature nodes\nin the gray component attached to the tree leaves are append for model learning purposes.) 
set of g-hop subgraph (g-subgraph) around certain target neuron node from it, where all the neuron\nnodes involved are within g hops from the target neuron node. DEFINITION 2 (g-hop subgraph): Given the deep loopy neural network model graph G = (N, L)\nand a target neuron node, e.g., n ∈N, we can represent the extracted g-subgraph rooted at n as\nGn = (Nn, Ln), where Nn ⊂N and Ln ⊂L. For all the nodes in set Nn, formed by links in Lyi,\nthere exist a directed path of length less than g connecting the neuron node n with them. For instance, from the deep loopy neural network model graph (with 1 hidden layer) as shown in the\nleft plot of Figure 2, we can extract a 3-subgraph rooted at neuron node y1 as shown in the central\nplot.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 11,
"total_chunks": 25,
"char_count": 2627,
"word_count": 489,
"chunking_strategy": "semantic"
},
{
"chunk_id": "43c31802-6f9a-4305-a55d-ae1d7c3f6548",
"text": "In the sub-graph, it contains the feature, label and hidden state vectors of nodes v2, v3, v4 as\nwell as the hidden state vector of v5, whose feature and label vectors are not included since they are\n4-hops away from h1. In the model graph, the link direction denote the forward propagation direction. In the model learning process, the error information will propagate from the output layer, i.e., the label neuron nodes,\nbackward to the hidden state and input neuron nodes along the reversed direction of these links. For\ninstance, given the target node y1, its error can be propagated to h1, h2, · · · , h5 and x1, x2, x3, x4,\nbut cannot reach y2, y3 and y4. Meanwhile, to resolve the recursive partial derivative problem introduced in the previous section, in this paper, we propose to further extract a g-hop rooted spanning\ntree (g-tree) from the g-subgraph. The extracted g-tree will be acyclic and all the involved links are\nfrom the children nodes to the parent nodes only. DEFINITION 3 (g-hop rooted spanning tree): Given the extracted g-subgraph around the target\nneuron node n, i.e., Gn = (Nn, Ln), we can represent its g-tree as Tn = (Sn, Rn, n), where n is\nthe root, sets Sn ⊂Nn and Rn ⊂Ln. All the links in Pn are pointing from the children nodes to\nthe parent nodes.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 12,
"total_chunks": 25,
"char_count": 1282,
"word_count": 229,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5dfea432-f343-4c65-96d1-fafc9992aafe",
"text": "For instance, in the right plot of Figure 2, we display an example of the extracted 3-tree rooted at\nneuron node y1. From the plot, we observe that all the label vectors are removed, since there exist\nno directed edges from them to the root node. Among all the nodes, h1 is 1-hop away from y1,\nx1, h2, h3 and h4 are 2-hop away from y1, and all the remaining nodes are 3 hops away from h1. Given any two pair of connected nodes in the 3-tree, e.g., x1 and y1 (or h4 and h5), the edges\nconnecting them clearly indicate the variables to be learned via error back propagation across them. When learning the loopy neural network, instead of using the whole network, we propose to back\npropagate the errors from the root, e.g., y1, to the remaining nodes in the extracted 3-tree. THEOREM 2 Given g = ∞, the learning process based on g-tree will be identical as learning based\non the original network. Meanwhile, in the case when g is a finite number and g ≥diameter(G), the\ng-tree of any nodes will actually cover all the neuron nodes and variables to be learned. The proof to the above theorem will not be introduced here due to the limited space. Generally,\nlarger g can preserve more complete network structure information. However, on the other hand, as",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 13,
"total_chunks": 25,
"char_count": 1251,
"word_count": 228,
"chunking_strategy": "semantic"
},
{
"chunk_id": "832783bd-2af3-4323-b55a-1e1e37831121",
"text": "g increases, the paths involved in the g-tree from the leaf node to the root node will also be longer,\nwhich may lead to the gradient vanishing/exploding problem as well [20]. 4.2 Rooted Spanning Tree based Learning Algorithm for Loopy Neural Network Model Based on the extracted g-tree, the deep loopy neural network model can be effectively trained,\nand in this section we will introduce the general learning algorithm in detail. Formally, let\nTn = (Sn, Rn, n) denote an extracted spanning tree rooted at neuron node n. Based on the errors computed on node n (if n is in the output layer) or the errors propagated to n, we can further\nback propagate the errors to the remaining nodes in Sn \\ {n}. Formally, from the spanning tree Tn = (Sn, Rn, n), we can define the set of variables involved as\nW, which can be indicated by the links in set Rn. For instance, given the 3-tree as shown in Figure 2,\nwe know that there exist three types of variables involved in the tree diagram, where the variable set\nW = {Wx, bx, Wy, by, Wh, bh}. For the spanning trees extracted from deeper neural network\nmodels, the variable type set will be much larger. Meanwhile, given a random node m ∈Sn, we\nwill use notation Tn(m) to denote a subtree of Tn rooted at m, and notation Γ(m) to represent the\nchildren neuron nodes of m in Tn. Furthermore, for simplicity, we will use notation W ∈T to\ndenote that variable W ∈W is involved in the tree or sub-tree T . 
Given the spanning tree Tn together with the errors E(n) computed at n (or propagated to n),\nregarding a variable W ∈W, we can first define a basic learning operation between neuron nodes\nx, y ∈Sn (where y ∈Γ(x)) as follows:\n(1 W ∈Tn(m) ∂W,∂x if y is a leaf node;\nPROP(x, y; W) =\n1 W ∈Tn(m) ∂y∂x · Pz∈Γ(y) PROP(y, z; W) , otherwise.\n(10) Based on the above operation, we can represent the partial derivative of the error function E(n)\nregarding variable W as: ∂E(n) ∂E(n)\n= · X PROP(n, m; W) (11)\n∂W ∂n\nm∈Γ(n) THEOREM 3 Formally, given a g-hop rooted spanning tree Tn, the PROP(x, y; W) operation will\n(dmax+1)k+1−dmax−1 ∂E(n)be called at most times in computing the partial derivative term ∂W , where dmax\ndmax denotes the largest node degree in the original input graph data G. PROOF 2 Operation PROP(x, y; W) will be called for each node in the spanning tree Tn except\nthe root node.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 14,
"total_chunks": 25,
"char_count": 2327,
"word_count": 431,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1cccc931-29c9-419c-9574-8e2821e5efa6",
"text": "Formally, given a max node degree dmax in the original graph, the largest number of\nchildren node connected to a random neuron node in the spanning tree will be dmax+1 (+1 because\nof the connection from the lower layer neuron node). The maximum number of nodes (except the\nroot node) in the spanning tree of depth g can be denoted as the sum\n(dmax + 1) + (dmax + 1)2 + (dmax + 1)3 + · · · + (dmax + 1)g, (12) (dmax+1)g+1−dmax−1which equals to as indicated in the theorem. dmax",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 15,
"total_chunks": 25,
"char_count": 476,
"word_count": 92,
"chunking_strategy": "semantic"
},
{
"chunk_id": "032eb7a5-cda5-4c82-9d78-9c7c8befbf9e",
"text": "Considering that in the model, the hidden neuron nodes are computed based on the lower-level neurons and they don't have input representations. To make the model learnable, as indicated by the\nbottom gray layer below the g-tree in Figure 2, we propose to append the input feature representations to the g-tree leaf nodes, based on which we will be able to compute the node hidden states. The\nlearning process will involve several epochs until convergence, where each epoch will enumerate all\nthe nodes in the graph once. To further boost the convergence rate, several latest optimization algorithms, e.g., Adam [14], can be adopted to replace traditional SGD in updating the model variables. 5 Numerical Experiments To test the effectiveness of the proposed deep loopy neural network and the learning algorithm, extensive numerical experiments have been done on several frequently used graph benchmark datasets, Table 1: Experimental Results on the Douban Graph Dataset.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 16,
"total_chunks": 25,
"char_count": 970,
"word_count": 152,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a4094d2d-425a-4dca-afd4-67d1708e3f8e",
"text": "methods MSE MAE LRS avg. rank LNN 0.035±0.0 (1) 0.067±0.001 (1) 0.102±0.002 (1) (1) DEEPWALK [21] 0.05±0.0 (7) 0.097±0.0 (7) 0.129±0.003 (8) (7) WALKLETS [22] 0.046±0.001 (3) 0.087±0.001 (3) 0.108±0.001 (3) (2) LINE [25] 0.048±0.0 (6) 0.091±0.001 (6) 0.12±0.003 (6) (6) HPE [5] 0.046±0.001 (3) 0.077±0.001 (2) 0.111±0.002 (4) (2) APP [31] 0.05±0.0 (8) 0.099±0.0 (8) 0.127±0.002 (7) (8) MF [3] 0.045±0.0 (2) 0.089±0.001 (5) 0.102±0.001 (1) (4) BPR [23] 0.046±0.0 (3) 0.089±0.0 (4) 0.114±0.002 (5) (5) Table 2: Experimental Results on the IMDB Graph Dataset.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 17,
"total_chunks": 25,
"char_count": 556,
"word_count": 86,
"chunking_strategy": "semantic"
},
{
"chunk_id": "92c64696-acf1-43ce-a1da-56324a4a88c2",
"text": "methods MSE MAE LRS avg. rank LNN 0.046±0.0 (1) 0.090±0.001 (1) 0.116±0.002 (1) (1) DEEPWALK [21] 0.065±0.0 (7) 0.128±0.001 (8) 0.187±0.002 (7) (7) WALKLETS [22] 0.057±0.0 (4) 0.112±0.001 (3) 0.139±0.003 (4) (3) LINE [25] 0.061±0.0 (6) 0.121±0.001 (7) 0.169±0.004 (6) (6) HPE [5] 0.055±0.0 (2) 0.107±0.001 (2) 0.127±0.003 (2) (2) APP [31] 0.066±0.0 (8) 0.13±0.001 (6) 0.193±0.002 (8) (7) MF [3] 0.055±0.0 (2) 0.112±0.001 (3) 0.129±0.003 (3) (3) BPR [23] 0.058±0.0 (5) 0.117±0.0 (5) 0.157±0.002 (5) (5) including two social networks: Foursquare and Twitter, as well as two knowledge graphs: Douban\nand IMDB, which will be introduced in this section. 5.1 Experimental Settings The basic statistics information about the graph datasets used in the experiments are as follows:",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 18,
"total_chunks": 25,
"char_count": 772,
"word_count": 118,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8e0ee3b5-6078-49da-b4ae-df2035a32418",
"text": "• Douban: Number of Nodes: 11, 297; Number of Links: 753, 940.\n• IMDB: Number of Nodes: 13, 896; Number of Links: 1, 404, 202.\n• Foursquare: Number of Nodes: 5, 392; Number of Links: 111, 852.\n• Twitter: Number of Nodes: 5, 120; Number of Links: 261, 151. Both Foursquare and Twitter are online social networks. The nodes and links involved in Foursquare\nand Twitter denote the users and their friendships respectively. Douban and IMDB are two movie\nknowledge graphs, where the nodes denote the movies. Based on the movie cast information, we\nconstruct the connections among the movies, where two movies can be linked if they share a common cast member. Besides the network structure information, a group of node attributes can also\nbe collected for users and movies, which cover the user and movie basic profile information respectively. Based on the attribute information, a set of features can be extracted as the node input feature\ninformation. In the experiments, we use the movie genre and user hometown state as the labels.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 19,
"total_chunks": 25,
"char_count": 1030,
"word_count": 174,
"chunking_strategy": "semantic"
},
{
"chunk_id": "238a9b1b-c712-49bd-940c-f8e2ddc7109f",
"text": "These labeled nodes are partitioned into training and testing sets via 5-fold cross validation: 4-fold\nfor training, and 1-fold for testing. Meanwhile, to demonstrate the advantages of the learned deep loopy neural network against the\nother existing network embedding models, many baseline methods are compared with deep loopy\nneural network in the experiments. The baseline methods cover the state-of-the-art methods on\ngraph data representation learning published in recent years, which include DEEPWALK [21],\nWALKLETS [22], LINE (Large-scale Information Network Embedding) [25], HPE (Heterogeneous\nPreference Embedding) [5], APP (Asymmetric Proximity Preserving graph embedding) [31], MF\n(Matrix Factorization) [3] and BPR (Bayesian Personalized Ranking) [23]. For these network representation baseline methods, based on the learned representation features, we will further train a Table 3: Experimental Results on the Foursquare Social Network Dataset.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 20,
"total_chunks": 25,
"char_count": 956,
"word_count": 130,
"chunking_strategy": "semantic"
},
{
"chunk_id": "e882e128-7ae3-4533-895a-9bf212b3a71e",
"text": "methods MSE MAE LRS avg. rank LNN 0.002±0.001 (3) 0.003±0.001 (1) 0.121±0.001 (1) (1) DEEPWALK [21] 0.002±0.001 (3) 0.004±0.001 (4) 0.149±0.007 (3) (3) WALKLETS [22] 0.001±0.001 (1) 0.004±0.001 (4) 0.216±0.0016 (5) (3) LINE [25] 0.002±0.001 (3) 0.003±0.001 (1) 0.309±0.0017 (8) (6) HPE [5] 0.001±0.001 (1) 0.003±0.001 (1) 0.246±0.0016 (7) (2) APP [31] 0.002±0.001 (3) 0.012±0.001 (6) 0.145±0.001 (2) (5) MF [3] 0.002±0.001 (3) 0.013±0.001(7) 0.173±0.006 (4) (7) BPR [23] 0.002±0.001 (3) 0.021±0.001 (8) 0.235±0.023 (6) (8) Table 4: Experimental Results on the Twitter Social Network Dataset.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 21,
"total_chunks": 25,
"char_count": 591,
"word_count": 86,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fdc7805e-5725-44be-867e-8f6c93be8d02",
"text": "methods MSE MAE LRS avg. rank LNN 0.001±0.001 (1) 0.002±0.001 (1) 0.276±0.0013 (1) (1) DEEPWALK [21] 0.001±0.001 (1) 0.003±0.001 (2) 0.31±0.028 (4) (3) WALKLETS [22] 0.001±0.001(1) 0.003±0.001 (2) 0.309±0.006 (3) (2) LINE [25] 0.001±0.001(1) 0.003±0.001 (2) 0.455±0.008 (8) (6) HPE [5] 0.001±0.001(1) 0.003±0.001 (2) 0.351±0.001 (6) (4) APP [31] 0.001±0.0(1) 0.015±0.0 (7) 0.308±0.0018 (2) (5) MF [3] 0.001±0.001(1) 0.013±0.001 (6) 0.341±0.0017 (5) (7) BPR [23] 0.002±0.001 (8) 0.023±0.001 (8) 0.374±0.0015 (7) (8)",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 22,
"total_chunks": 25,
"char_count": 514,
"word_count": 72,
"chunking_strategy": "semantic"
},
{
"chunk_id": "438f03b0-da76-4eed-b187-5b464227462d",
"text": "MLP (multiple layer perceptron) as the classifier. By comparing the inference results by the models\non the testing set with the ground truth label vectors, we will use MSE (mean square error), MAE\n(mean absolute error) and LRS (label ranking loss) as the evaluation metrics. Detailed experimental\nresults will be provided in the following subsection. 5.2 Experimental Results The results achieved by the comparison methods on these 4 different graph datasets are provided\nin Tables 1-4. The best results are presented in a bolded font, and the blue numbers in the table\ndenote the relative rankings of the methods regarding certain evaluation metrics. Compared with\nthe baseline methods, deep loopy neural network can achieve the best performance than the baseline\nmethods. For instance in Table 1, the MSE obtained by deep loopy neural network is 0.035, which\nabout 7.8% lower than the MSE obtained by the 2nd best method, i.e., MF; and 30% lower than\nthat of APP.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 23,
"total_chunks": 25,
"char_count": 965,
"word_count": 158,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a1aff465-681f-4778-9251-cace183bb3a5",
"text": "Similar results can be observed for the MAE and LRS metrics. At the right hand\nside of the tables, we illustrate the average ranking positions achieved by the methods based on\nthese 3 different evaluation metrics. According to the avg. rank shown in Tables 1, 2 and 4, LNN\ncan achieve the best performance consistently for all the metrics based on the Douban, IMDB,\nFoursquare and Twitter datasets. Although in Table 3, the deep loopy neural network model loses to\nHPE and BPR regarding the MSE metric, but for the other two metrics, it still outperforms all the\nbaseline methods with significant advantages according to the results. 6 Conclusion\nIn this paper, we have introduced the deep loopy neural network model, which is a new deep learning\nmodel proposed for the graph structured data specifically. Due to the extensive connections among\nnodes in the input graph data, the constructed deep loopy neural network model is very challenging\nto learn with the traditional error back-propagation algorithm. To resolve such a problem, we propose to extract a set of k-hop subgraph and k-hop rooted spanning tree from the model architecture,\nvia which the errors can be effectively propagated throughout the model architecture. Extensive\nexperiments have been done on several different categories of graph datasets, and the numerical\nexperiment results demonstrate the effectiveness of both the proposed deep loopy neural network\nmodel and the introduced learning algorithm.",
"paper_id": "1805.07504",
"title": "Deep Loopy Neural Network Model for Graph Structured Data Representation Learning",
"authors": [
"Jiawei Zhang"
],
"published_date": "2018-05-19",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1805.07504v2",
"chunk_index": 24,
"total_chunks": 25,
"char_count": 1473,
"word_count": 234,
"chunking_strategy": "semantic"
}
]