| [ |
| { |
| "chunk_id": "15ded911-2cac-4600-a895-bbeb20fda7ef", |
| "text": "Graph Networks as Learnable Physics Engines for Inference and Control Alvaro Sanchez-Gonzalez 1 Nicolas Heess 1 Jost Tobias Springenberg 1 Josh Merel 1 Martin Riedmiller 1\nRaia Hadsell 1 Peter Battaglia 1 Abstract Pendulum Cartpole Acrobot Swimmer6", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 0, |
| "total_chunks": 66, |
| "char_count": 248, |
| "word_count": 37, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0e12b2e9-84ec-4859-9438-dcf0a53b1f35", |
| "text": "Understanding and interacting with everyday\nphysical scenes requires rich knowledge about\nthe structure of the world, represented either implicitly in a value or policy function, or explic-2018 Cheetah Walker2d JACO\nitly in a transition model. Here we introduce a\nnew class of learnable models—based on graph\nnetworks—which implement an inductive biasJun\nfor object- and relation-centric representations of Real JACO4\ncomplex, dynamical systems. Our results show\nthat as a forward model, our approach supports Sample 1\naccurate predictions from real and simulated data,\nand surprisingly strong and efficient generalization, across eight distinct physical systems which\nwe varied parametrically and structurally. We[cs.LG] Sample 2 also found that our inference model can perform\nsystem identification. Our models are also differentiable, and support online planning via gradientbased trajectory optimization, as well as offline\npolicy optimization. Our framework offers new\nopportunities for harnessing and exploiting rich Figure 1. (Top) Our experimental physical systems. (Bottom) Samknowledge about the world, and takes a key step ples of parametrized versions of these systems (see videos: link).\ntoward building machines with more human-like\nrepresentations of the world.\nof objects2 and their relations, applying the same objectwise computations to all objects, and the same relation-wise\n1. Introduction computations to all interactions. This allows for combinatorial generalization to scenarios never before experienced,\nMany domains, such as mathematics, language, and physical whose underlying components and compositional rules are\nscale rapidly with the number of elements. 
For example, a gines make the assumption that bodies follow the same dymulti-link chain can assume shapes that are exponential in namics, and interact with each other following similar rules,\nthe number of angles each link can take, and a box full of e.g., via forces, which is how they can simulate limitless\nbouncing balls yields trajectories which are exponential in scenarios given different initial conditions.\nthe number of bounces that occur. How can an intelligent\nHere we introduce a new approach for learning and con- agent understand and control such complex systems?\ntrolling complex systems, by implementing a structural inA powerful approach is to represent these systems in terms ductive bias for object- and relation-centric representations.\n<peterbattaglia@google.com>. et al., 2009b; Li et al., 2015; Battaglia et al., 2016; Gilmer\net al., 2017). In a physical system, the GN lets us represent\nProceedings of the 35 th International Conference on Machine\nLearning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 2\"Object\" here refers to entities generally, rather than physical\nby the author(s). objects exclusively.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 1, |
| "total_chunks": 66, |
| "char_count": 2822, |
| "word_count": 406, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "cb9d22be-d0b3-4b62-9f6f-0094ea654dc6", |
| "text": "Graph Networks as Learnable Physics Engines for Inference and Control the bodies (objects) with the graph's nodes and the joints ters et al., 2017; Ehrhardt et al., 2017; Amos et al., 2018).\n(relations) with its edges. During learning, knowledge about Our inference model shares similar aims as approaches for\nbody dynamics is encoded in the GN's node update func- learning system identification explicitly (Yu et al., 2017;\ntion, interaction dynamics are encoded in the edge update Peng et al., 2017), learning policies that are robust to hidden\nfunction, and global system properties are encoded in the property variations (Rajeswaran et al., 2016), and learning\nglobal update function. Learned knowledge is shared across exploration strategies in uncertain settings (Schmidhuber,\nthe elements of the system, which supports generalization 1991; Sun et al., 2011; Houthooft et al., 2016). We use\nto new systems composed of the same types of body and our learned models for model-based planning in a similar\njoint building blocks. spirit to classic approaches which use pre-defined models\n(Li & Todorov, 2004; Tassa et al., 2008; 2014), and our work\nAcross seven complex, simulated physical systems, and one\nalso relates to learning-based approaches for model-based\nreal robotic system (see Figure 1), our experimental results\ncontrol (Atkeson & Santamaria, 1997; Deisenroth & Rasshow that our GN-based forward models support accurate\nmussen, 2011; Levine & Abbeel, 2014). We also explore\nand generalizable predictions, inference models3 support\njointly learning a model and policy (Heess et al., 2015; Gu\nsystem identification in which hidden properties are abduced\net al., 2016; Nagabandi et al., 2017). Notable recent, concurfrom observations, and control algorithms yield competitive\nrent work (Wang et al., 2018) used a GNN to approximate a\nperformance against strong baselines. 
This work reprepolicy, which complements our use of a related architecture\nsents the first general-purpose, learnable physics engine that\nto approximate forward and inference models.\ncan handle complex, 3D physical systems. Unlike classic\nphysics engines, our model has no specific a priori knowledge of physical laws, but instead leverages its object- and 3. Model\nrelation-centric inductive bias to learn to approximate them\nGraph representation of a physical system. Our apvia supervised training on current-state/next-state pairs.\nproach is founded on the idea of representing physOur work makes three technical contributions: GN-based ical systems as graphs: the bodies and joints correforward models, inference models, and control algorithms. spond to the nodes and edges, respectively, as deThe forward and inference models are based on treating picted in Figure 2a. Here a (directed) graph is dephysical systems as graphs and learning about them using fined as G = (g, {ni}i=1···Nn, {ej, sj, rj}j=1···Ne), where\nGNs. Our control algorithm uses our forward and inference g is a vector of global features, {ni}i=1···Nn is a set of\nmodels for planning and policy learning. nodes where each ni is a vector of node features, and\n{ej, sj, rj}j=1···Ne is a set of directed edges where ej is a(For full algorithm, implementation, and methodological\nvector of edge features, and sj and rj are the indices of the\ndetails, as well as videos from all of our experiments, please\nsender and receiver nodes, respectively.\nsee the Supplementary Material.)\nWe distinguish between static and dynamic properties in a\n2. Related Work physical scene, which we represent in separate graphs. A\nstatic graph Gs contains static information about the paramOur work draws on several lines of previous research.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 2, |
| "total_chunks": 66, |
| "char_count": 3675, |
| "word_count": 562, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7a4eca58-20f1-4529-afcc-86a86294c0b5", |
| "text": "Cog- eters of the system, including global parameters (such as the\nnitive scientists have long pointed to rich generative models time step, viscosity, gravity, etc.), per body/node parameters\nas central to perception, reasoning, and decision-making (such as mass, inertia tensor, etc.), and per joint/edge pa-\n(Craik, 1967; Johnson-Laird, 1980; Miall & Wolpert, 1996; rameters (such as joint type and properties, motor type and\nSpelke & Kinzler, 2007; Battaglia et al., 2013). Our core properties, etc.).", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 3, |
| "total_chunks": 66, |
| "char_count": 504, |
| "word_count": 75, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "93d918b1-80c7-4f36-b0a5-0714735c2809", |
| "text": "A dynamic graph Gd contains information\nmodel implementation is based on the broader class of graph about the instantaneous state of the system. This includes\nneural networks (GNNs) (Scarselli et al., 2005; 2009a;b; each body/node's 3D Cartesian position, 4D quaternion oriBruna et al., 2013; Li et al., 2015; Henaff et al., 2015; Du- entation, 3D linear velocity, and 3D angular velocity.4 Advenaud et al., 2015; Dai et al., 2016; Defferrard et al., 2016; ditionally, it contains the magnitude of the actions applied\nNiepert et al., 2016; Kipf & Welling, 2016; Battaglia et al., to the different joints in the corresponding edges.\n2016; Watters et al., 2017; Raposo et al., 2017; Santoro\n4Some physics engines, such as Mujoco (Todorov et al., 2012),\net al., 2017; Bronstein et al., 2017; Gilmer et al., 2017). One\nrepresent systems using \"generalized coordinates\", which sparsely\nof our key contributions is an approach for learning physi- encode degrees of freedom rather than full body states. Gencal dynamics models (Grzeszczuk et al., 1998; Fragkiadaki eralized coordinates offer advantages such as preventing bodies\net al., 2015; Battaglia et al., 2016; Chang et al., 2016; Wat- connected by joints from dislocating (because there is no degree of\nfreedom for such displacement). In our approach, however, such\n3We use the term \"inference\" in the sense of \"abductive representations do not admit sharing as naturally because there are\ninference\"—roughly, constructing explanations for (possibly par- different input and output representations for a body depending on\ntial) observations—and not probabilistic inference, per se. the system's constraints. Graph Networks as Learnable Physics Engines for Inference and Control Graph representations and GN-based models. (a) A physical system's bodies and joints can be represented by a graph's nodes\nand edges, respectively. 
(b) A GN block takes a graph as input and returns a graph with the same structure but different edge, node,\nand global features as output (see Algorithm 1). (c) A feed-forward GN-based forward model for learning one-step predictions. (d) A\nrecurrent GN-based forward model. (e) A recurrent GN-based inference model for system identification. Algorithm 1 Graph network, GN concatenated5 with G (e.g., a graph skip connection), and\nInput: Graph, G = (g, {ni}, {ej, sj, rj}) provided as input to the second GN, which returns an output\nfor each edge {ej, sj, rj} do graph, G∗. Our forward model training optimizes the GN so\nGather sender and receiver nodes nsj, nrj that G∗'s {ni} features reflect predictions about the states of\nCompute output edges, e∗j = fe(g, nsj, nrj, ej) each body across a time-step. The reason we used two GNs\nend for was to allow all nodes and edges to communicate with each\nfor each node {ni} do other through the g′ output from the first GN. Preliminary\nAggregate e∗j per receiver, ˆei = Pj/rj=i e∗j tests suggested this provided large performance advantages\nCompute node-wise features, n∗i = fn(g, ni,ˆei) over a single IN/GN (see ablation study in SM Figure H.2).\nend for We also introduce a second, recurrent GN-based forward\nAggregate all edges and nodes ˆe = Pj e∗j, ˆn = Pi n∗i model, which contains three RNN sub-modules (GRUs,\nCompute global features, g∗= fg(g, ˆn,ˆe) (Cho et al., 2014)) applied across all edges, nodes, and\nOutput: Graph, G∗= (g∗, {n∗i }, {e∗j, sj, rj}) global features, respectively, before being composed with a\nGN block (see Figure 2d). Our forward models were all trained to predict state difGraph networks. The GN architectures introduced here ferences, so to compute absolute state predictions we upgeneralize interaction networks (IN) (Battaglia et al., 2016) dated the input state with the predicted state difference. 
They include global representations and generate a long-range rollout trajectory, we repeatedly fed\noutputs for the state of a system, as well as per-edge outputs. absolute state predictions and externally specified control\nThey are defined as \"graph2graph\" modules (i.e., they map inputs back into the model as input, iteratively. As data preinput graphs to output graphs with different edge, node, and and post-processing steps, we normalized the inputs and\nglobal features), which can be composed in deep and recur- outputs to the GN model.\nrent neural network (RNN) configurations.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 4, |
| "total_chunks": 66, |
| "char_count": 4338, |
| "word_count": 686, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3a55676e-e932-4d17-8d95-1204c4bcfe7a", |
| "text": "A core GN block\n(Figure 2b) contains three sub-functions—edge-wise, fe, Inference models. System identification refers to infernode-wise, fn, and global, fg—which can be implemented ences about unobserved properties of a dynamic system\nusing standard neural networks. Here we use multi-layer based on its observed behavior. It is important for conperceptrons (MLP).", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 5, |
| "total_chunks": 66, |
| "char_count": 365, |
| "word_count": 51, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e7e3a5c5-bb0d-4885-8395-a53abbdf2b04", |
| "text": "A single feedforward GN pass can be trolling systems whose unobserved properties influence the\nviewed as one step of message-passing on a graph (Gilmer control dynamics. Here we consider \"implicit\" system idenet al., 2017), where fe is first applied to update all edges, fn tification, in which inferences about unobserved properties\nis then applied to update all nodes, and fg is finally applied are not estimated explicitly, but are expressed in latent repto update the global feature. See Algorithm 1 for details. resentations which are made available to other mechanisms. We introduce a recurrent GN-based inference model, which\nForward models. For prediction, we introduce a GN- observes only the dynamic states of a trajectory and conbased forward model for learning to predict future states\n5We define the term \"graph-concatenation\" as combining twofrom current ones. It operates on one time-step, and contains\ngraphs by concatenating their respective edge, node, and global\ntwo GNs composed sequentially in a \"deep\" arrangement features. We define \"graph-splitting\" as splitting the edge, node,\n(unshared parameters; see Figure 2c). The first GN takes an and global features of one graph to form two new graphs with the\ninput graph, G, and produces a latent graph, G′. This G′ is same structure.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 6, |
| "total_chunks": 66, |
| "char_count": 1303, |
| "word_count": 204, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "926d58d4-414f-459b-8d0b-26d96baed939", |
| "text": "Graph Networks as Learnable Physics Engines for Inference and Control (a)\ntruth [au] [au]\nvelocity Ground Position Linear\n(b) (d) [au] ground\n[au] truth\nrollout\nprediction Prediction velocity Orientation (c) Angular (e)\nInitial After 25 0 20 40 60 80 100 0 20 40 60 80 100 50 steps 75 steps 100 steps\nstate control steps Timestep Timestep Evaluation rollout in a Swimmer6. Trajectory videos are here: link-P.F.S6. (a) Frames of ground truth and predicted states\nover a 100 step trajectory. (b-e) State sequence predictions for link #3 of the Swimmer. The subplots are (b) x, y, z-position, (c)\nq0, q1, q2, q3-quaternion orientation, (d) x, y, z-linear velocity, and (e) x, y, z-angular velocity. [au] indicates arbitrary units.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 7, |
| "total_chunks": 66, |
| "char_count": 727, |
| "word_count": 119, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1b0ff97a-33f0-413d-8f07-d1f56f36a5a3", |
| "text": "structs a latent representation of the unobserved, static prop- tems, and recording the state transitions. We also trained\nerties (i.e., performs implicit system identification). It takes models from recorded trajectories of a real JACO robotic\nas input a sequence of dynamic state graphs, Gd, under under human control during a stacking task.\nsome control inputs, and returns an output, G∗(T), after T\nIn experiments that examined generalization and system\ntime steps. This G∗(T) is then passed to a one-step forward\nidentification, we created a dataset of versions of several of\nmodel by graph-concatenating it with an input dynamic\nour systems—Pendulum, Cartpole, Swimmer, Cheetah and\ngraph, Gd. The recurrent core takes as input, Gd, and hidJACO— with procedurally varied parameters and structure.\nden graph, Gh, which are graph-concatenated5 and passed\nWe varied continuous properties such as link lengths, body\nto a GN block (see Figure 2e). The graph returned by the\nmasses, and motor gears. In addition, we also varied the\nGN block is graph-split5 to form an output, G∗, and upnumber of links in the Swimmer's structure, from 3-15 (we\ndated hidden graph, G∗h. The full architecture can be trained refer to a swimmer with N links as SwimmerN).\njointly, and learns to infer unobserved properties of the system from how the system's observed features behave, and\nMPC planning. We used our GN-based forward model touse them to make more accurate predictions.\nimplement MPC planning by maximizing a dynamic-statedependent reward along a trajectory from a given initial\nControl algorithms. For control, we exploit the fact that state.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 8, |
| "total_chunks": 66, |
| "char_count": 1636, |
| "word_count": 257, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f477e6cf-4fdc-42f6-b4bc-ad61684d724a", |
| "text": "We used our GN forward model to predict the N-step\nthe GN is differentiable to use our learned forward and trajectories (N is the planning horizon) induced by proposed\ninference models for model-based planning within a clas- action sequences, as well as the total reward associated\nsic, gradient-based trajectory optimization regime, also with the trajectory. We optimized these action sequences by\nknown as model-predictive control (MPC). We also develop backpropagating gradients of the total reward with respect to\nan agent which simultaneously learns a GN-based model the actions, and minimizing the negative reward by gradient\nand policy function via Stochastic Value Gradients (SVG) descent, iteratively.\n(Heess et al., 2015). 6\nModel-based reinforcement learning. Methods whether our GN-based model can benefit reinforcement\nlearning (RL) algorithms, we used our model within an\nEnvironments. Our experiments involved seven actu- SVG regime (Heess et al., 2015). The GN forward model\nated Mujoco simulation environments (Figure 1). Six was used as a differentiable environment simulator to obtain\nwere from the \"DeepMind Control Suite\" (Tassa et al., a gradient of the expected return (predicted based on the\n2018)—Pendulum, Cartpole, Acrobot, Swimmer, Cheetah, next state generated by a GN) with respect to a parameWalker2d—and one was a model of a JACO commercial terized, stochastic policy, which was trained jointly with\nrobotic arm. We generated training data for our forward the GN. 
For our experiments we used a single step predicmodels by applying simulated random controls to the sys- tion (SVG(1)) and compared to sample-efficient model-free\nRL baselines using either stochastic policies (SVG(0)) or\n6MPC and SVG are deeply connected: in MPC the control\ndeterministic policies via the Deep Deterministic Policyinputs are optimized given the initial conditions in a single episode,\nwhile in SVG a policy function that maps states to controls is Gradients (DDPG) algorithm (Lillicrap et al., 2016) (which\noptimized over states experienced during training. is also used as a baseline in the MPC experiments). Graph Networks as Learnable Physics Engines for Inference and Control Individual Fixed Single Model, Multiple Systems\nSystem Models (Pendulum,Cartpole,Acrobot,Swimmer6) 2.7 error 1.21.0 (a) BestbaselineMLP error 2.52.0 (c) Individual Parametrized Single Model, Multiple Systems\nSystem Models (Pendulum,Cartpole,Acrobot,Swimmer6,Cheetah) 0.8 1.5\n0.6 Individual System ID Single Model, Multiple Parametrized Random 1.2 2.4 Models Systems (Pendulum,Cartpole,Acrobot) train data one-step 0.4 one-step 1.0\n100 (a) Constant prediction baseline Rel. 0.2 Rel. 0.5 1.1 error10-1 0.0 0.0 Random\nvalid data MLP 1.2 (b) Best (d)\nbaseline 3.2 1.0 Swimmer6 error error 1.5 DDPG agent 10-3 2.5 0.8 1.0 one-step10-2 test data\nRel.10-4 rollout 0.60.4 rollout 0.5 2.9 100 (b) Constant prediction baseline Rel. 0.2 Rel. 2\n0.0 0.0 error10-1\n10-2 JACO Best MLP Best GN\nSwimmer6 rollout10-3 Pendulum Cartpole Acrobot Swimmer6 Cheetah Walker2d\nRel.10-4\nPendulum Cartpole Swimmer Acrobot Cheetah JACO Walker2d Figure 5. Prediction errors, on (a) one-step and (b) 20-step evaluations, between the best MLP baseline and the best GN model after\nFigure 4. (a) One-step and (b) 100-step rollout errors for different 72 hours of training. 
Swimmer6 prediction errors, on (c) one-step\nmodels and training (different bars) on different test data (x-axis and (d) 20-step evaluations, between the best MLP baseline and\nlabels), relative to the constant prediction baseline (black dashed the best GN model for data in the training set (dark), data in the\nline). Blue bars are GN models trained on single systems. Red validation set (medium), and data from DDPG agent trajectories\nand yellow bars are GN models trained on multiple systems, with (light).", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 9, |
| "total_chunks": 66, |
| "char_count": 3839, |
| "word_count": 572, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "471712d9-fcea-45aa-ba4c-6ccabc522d83", |
| "text": "The numbers above the bars indicate the ratio between the\n(yellow) and without (red) parametric variation. Note that includ- corresponding generalization test error (medium or light) and the\ning Cheetah in multiple system training caused performance to training error (dark).\ndiminish (light red vs dark red bars), which suggests sharing might\nnot always be beneficial. After normalization, the errors were averaged together. All errors reported are calculated for 1000 100-step\nBaseline comparisons. As a simple baseline, we comsequences from the test set.\npared our forward models' predictions to a constant prediction baseline, which copied the input state as the output\nstate. We also compared our GN-based forward model with 5. Results: Prediction\na learned, MLP baseline, which we trained to make forLearning a forward model for a single system. Our reward predictions using the same data as the GN model. We\nsults show that the GN-based model can be trained to make\nreplaced the core GN with an MLP, and flattened and convery accurate forward predictions under random control. For\ncatenated the graph-structured GN input and target data into\nexample, the ground truth and model-predicted trajectories\na vector suitable for input to the MLP. We swept over 20\nfor Swimmer6 were both visually and quantitatively indistinunique hyperparameter combinations for the MLP architecguishable (see Figure 3). Figure 4's black bars show that the\nture, with up to 9 hidden layers and 512 hidden nodes per\npredictions across most other systems were far better than\nlayer.\nthe constant prediction baseline. As a stronger baseline comAs an MPC baseline, with a pre-specified physical model, parison, Figures 5a-b show that our GN model had lower\nwe used a Differential Dynamic Programming algorithm error than the MLP-based model in 6 of the 7 simulated\n(Tassa et al., 2008; 2014) that had access to the ground- control systems we tested. 
This was especially pronounced\ntruth Mujoco model.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 10, |
| "total_chunks": 66, |
| "char_count": 1980, |
| "word_count": 311, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "daa46ce0-ac83-46d1-aee2-07407b08c90f", |
| "text": "We also used the two model-free RL for systems with much repeated structure, such as the Swimagents mentioned above, SVG(0) and DDPG, as baselines mer, while for systems with little repeated structure, such\nin some tests. Some of the trajectories from a DDPG agent as Pendulum, there was negligible difference between the\nin Swimmer6 were also used to evaluate generalization of GN and MLP baseline. These results suggest that a GNthe forward models. based forward model is very effective at learning predictive\ndynamics in a diverse range of complex physical systems. Prediction performance evaluation. Unless otherwise We also found that the GN generalized better than the MLP\nspecified, we evaluated our models on squared one-step baseline from training to test data, as well as across different\ndynamic state differences (one-step error) and squared tra- action distributions. Figures 5c-d show that for Swimmer6,\njectory differences (rollout error) between the prediction the relative increase in error from training to test data, and\nand the ground truth. We calculated independent errors to data recorded from a learned DDPG agent, was smaller\nfor position, orientation, linear velocity angular velocity, for the GN model than for the MLP baseline. We speculate\nand normalized them individually to the constant prediction that the GN's superior generalization is a result of implicit Graph Networks as Learnable Physics Engines for Inference and Control 100 Ground truth Feed-forward model Recurrent model\nerror Systems used Constant prediction baseline 100 (c) Constant prediction\nbaseline training [au] error Zero-shot 10-1 rollout10-110-2 for (a) prediction Orientation\nRel.10-3 [au] rollout\n10-2 3 4 5 6 7 8 9 10 11 12 13 14 15 Rel. Number of links in SwimmerN Angular velocity (b)\nFigure 6. Zero-shot dynamics prediction. 
The bars show the 100- 0 20 40Timestep60 80 100 angleJoint velocityJoint\nstep rollout error of a model trained on a mixture of 3-6 and 8-9\nFigure 7. Real and predicted test trajectories of a JACO robot arm.link Swimmers, and tested on Swimmers with 3-15 links. The dark\nThe recurrent model tracks the ground truth (a) orientations andbars indicate test Swimmers whose number of links the model was\n(b) angular velocities closely. (c) The total 100-step rollout errortrained on (video: link-P.F.SN), the light bars indicate Swimmers\nwas much better for the recurrent model, though the feed-forwardit was not trained on (video: link-P.F.SN(Z)).\nmodel was still well below the constant prediction baseline.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 11, |
| "total_chunks": 66, |
| "char_count": 2537, |
| "word_count": 394, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a5776de9-f1a0-439b-89f7-c1832c9b9172", |
| "text": "A\nvideo of a Mujoco rendering of the true and predicted trajectories:\nlink-P.F.JR.regularization due to its inductive bias for sharing parameters across all bodies and joints; the MLP, in principle, could\ndevote disjoint subsets of its computations to each body and jectories are visually very close to the ground truth (video:\njoint, which might impair generalization. link-P.F.SN(Z)). Learning a forward model for multiple systems. To evaluate our approach's applicability\nother important feature of our GN model is that it is very to the real world, we trained GN-based forward models on\nflexible, able to handle wide variation across a system's real JACO proprioceptive data; under manual control by\nproperties, and across systems with different structure.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 12, |
| "total_chunks": 66, |
| "char_count": 760, |
| "word_count": 115, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ea91f26e-3b4c-470b-af78-3e540a1444b9", |
| "text": "We a human performing a stacking task. We found the feedtested how it learned forward dynamics of systems with forward GN performance was not as accurate as the recurcontinuously varying static parameters, using a new dataset rent GN forward model7: Figure 7 shows a representative\nwhere the underlying systems' bodies and joints had differ- predicted trajectory from the test set, as well as overall perent masses, body lengths, joint angles, etc. These static formance.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 13, |
| "total_chunks": 66, |
| "char_count": 471, |
| "word_count": 74, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7648463f-2ac1-4593-bcd8-4cfdc5f61d9f", |
| "text": "These results suggest that our GN-based forward\nstate features were provided to the model via the input model is a promising approach for learning models in real\ngraphs' node and edge attributes. Figure 4 shows that the systems. GN model's forward predictions were again accurate, which\nsuggests it can learn well even when the underlying system\n6. Results: Inferenceproperties vary. We next explored the GN's inductive bias for body- and In many real-world settings the system's state is partially\njoint-centric learning by testing whether a single model observable. Robot arms often use joint angle and velocity\ncan make predictions across multiple systems that vary in sensors, but other properties such as mass, joint stiffness, etc.\ntheir number of bodies and the joint structure. Figure 6 are often not directly measurable.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 14, |
| "total_chunks": 66, |
| "char_count": 829, |
| "word_count": 130, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "29a2e107-63e4-4a48-863f-1b52de67e955", |
| "text": "We applied our system\nshows that when trained on a mixed dataset of Swimmers identification inference model (see Model Section 3) to a\nwith 3-6, 8-9 links, the GN model again learned to make setting where only the dynamic state variables (i.e., position,\naccurate forward predictions. We pushed this even further orientation, and linear and angular velocities) were observed,\nby training a single GN model on multiple systems, with and found it could support accurate forward predictions\ncompletely different structures, and found similarly positive (during its \"prediction phase\") after observing randomly\nresults (see Figure 4, red and yellow bars). This highlights controlled system dynamics during an initial 20-step \"ID\na key difference, in terms of general applicability, between phase\" (see Figure 8). GN and MLP models: the GN can naturally operate on To further explore the role of our GN-based system identivariably structured inputs, while the MLP requires fixed- fication, we contrasted the model's predictions after an ID\nsize inputs. phase, which contained useful control inputs, against an ID\nThe GN model can even generalize, zero-shot, to systems phase that did not apply control inputs, across three differwhose structure was held out during training, as long as they ent Pendulum systems with variable, unobserved lengths.\nare composed of bodies and joints similar to those seen dur- Figure 9 shows that the GN forward model with an identifiing training. For the GN model trained on Swimmers with able ID phase makes very accurate predictions, but with an\n3-6, 8-9 links, we tested on held-out Swimmers with 7 and unidentifiable ID phase its predictions are very poor.\n10-15 links. Figure 6 shows that zero-shot generalization 7This might result from lag or hysteresis which induces longperformance is very accurate for 7 and 10 link Swimmers, range temporal dependencies that the feed-forward model cannot\nand degrades gradually from 11-15 links. 
Still, their tra- capture. Graph Networks as Learnable Physics Engines for Inference and Control Trained ignoring parameters System ID Target\nUsing true parameters System ID for a different system\nerror 100 Constant prediction baseline\n10-1\nrollout10-2\nRel.10-3 N/A (a)\nPendulum Cartpole Swimmer Cheetah JACO Target System identification performance.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 15, |
| "total_chunks": 66, |
| "char_count": 2317, |
| "word_count": 354, |
| "chunking_strategy": "semantic" |
| }, |
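The two-phase procedure described in the chunk above (a randomly controlled "ID phase" followed by a "prediction phase") can be illustrated with a deliberately tiny stand-in: a hidden scalar parameter identified by least squares. This is a hypothetical simplification; the paper infers a latent static graph with a learned recurrent model, and the names `id_phase` and `predict` are mine.

```python
import numpy as np

# Toy two-phase system identification. A hidden scalar k in
# x[t+1] = k * x[t] + u[t] is estimated from a short, randomly
# controlled "ID phase", then reused for prediction. A hypothetical
# simplification of the paper's learned recurrent inference model.

def id_phase(xs, us):
    """Least-squares estimate of k from observed states and controls."""
    x, x_next = np.asarray(xs[:-1]), np.asarray(xs[1:])
    u = np.asarray(us)
    return np.sum((x_next - u) * x) / np.sum(x * x)

def predict(k, x0, us):
    """Prediction phase: roll the identified model from a (possibly new) state."""
    xs = [x0]
    for u in us:
        xs.append(k * xs[-1] + u)
    return np.array(xs)
```

Note that with no control inputs and the system at rest, the fit degenerates (the denominator vanishes), echoing the unidentifiable ID phase of Figure 9; and the estimate `k` can be stored and reused from different initial states, the property the paper highlights.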
| { |
| "chunk_id": "ea5c77c4-6cbe-4ed2-9fbd-c872d26e1c36", |
| "text": "The y-axis represents 100-step rollout error, relative to the trivial constant prediction baseline (black dashed line). The baseline GN-based model\n(black bars) with no system identification module performs worst. (b)\nA model which was always provided the true static parameters\n(medium blue bars) and thus did not require system identification Figure 10. Frames from a 40-step GN-based MPC trajectory of the\nperformed best. A model without explicit access to the true static simulated JACO arm. (a) Imitation of the pose of each individual\nparameters, but with a system identification module (light blue body of the arm (13 variables x 9 bodies). (b) Imitation of only the\nbars), performed generally well, sometimes very close to the model palm's pose (13 variables). The full videos are here: link-C.F.JA(o)\nwhich observed the true parameters. But when that same model and link-C.F.JA(a).\nwas presented with an ID phase whose hidden parameters were\ndifferent (but from the same distribution) from its prediction phase\n(red bars), its performance was similar or worse than the model proaches for exploiting our GN model in continuous control.\n(black) with no ID information available. (The N/A column is\nbecause our Swimmer experiments always varied the number of Model-predictive control for single systems. We\nlinks as well as parameters, which meant the inferred static graph trained a GN forward model and used it for MPC by opticould not be concatenated with the initial dynamic graph.) mizing the control inputs via gradient descent to maximize\nID phase Prediction phase ID phase Prediction phase predicted reward under a known reward function. We found\n1 (a) (c) our GN-based MPC could support planning in all of our\n0 Action −1 control systems, across a range of reward functions. 
For magnitude 1 (b) (d)\nexample, Figure 10 shows frames of simulated JACO tra-\n0 jectories matching a target pose and target palm location,\nLen: 0.1 Pendulumprojection expected Len: 0.2 respectively, under MPC with a 20-step planning horizon.\n−1 prediction Len: 0.3\n0 20 40 60 80 100 0 20 40 60 80 100 In the Swimmer6 system with a reward function that max- Timestep Timestep\nimized the head's movement toward a randomly chosen\nFigure 9. System identification analysis in Pendulum. (a) Control target, GN-based MPC with a 100-step planning horizon seinputs are applied to three Pendulums with different, unobservable\nlected control inputs that resulted in coordinated, swimminglengths during the 20-step ID phase, which makes the system\nlike movements. Despite the fact that the Swimmer6 GNidentifiable. (b) The model's predicted trajectories (dashed curves)\nmodel used for MPC was trained to make one-step predic-track the ground truth (solid curves) closely in the subsequent\n80-step prediction phase. (c) No control inputs are applied to tions under random actions, its swimming performance was\nthe same systems during the ID phase, which makes the system close to both that of a more sophisticated planning algoidentifiable. (d) The model's predicted trajectories across systems rithm which used the true Mujoco physics as its model, as\nare very different from the ground truth. well as that of a learned DDPG agent trained on the system (see Figure 11a). And when we trained the GN model\nA key advantage of our system ID approach is that once the using a mixture of both random actions and DDPG agent\nID phase has been performed for some system, the inferred trajectories, there was effectively no difference in perforrepresentation can be stored and reused to make trajectory mance between our approach, versus the Mujoco planner\npredictions from different initial states of the system. 
This and learned DDPG agent baselines (see video: link-C.F.S6).\ncontrasts with an approach that would use an RNN to both\nFor Cheetah with reward functions for maximizing forwardinfer the system properties and use them throughout the\nmovement, maximizing height, maximizing squared verti-trajectory, which thus would require identifying the same\ncal speed, and maximizing squared angular speed of thesystem from data each time a new trajectory needs to be\ntorso, MPC with a 20-step horizon using a GN model re-predicted given different initial conditions.\nsulted in running, jumping, and other reasonable patterns of\nmovements (see video: link-C.F.Ch(k)).\n7. Differentiable models can be valuable for model-based se- Model-predictive control for multiple systems. Similar\nquential decision-making, and here we explored two ap- to how our forward models learned accurate predictions", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 16, |
| "total_chunks": 66, |
| "char_count": 4563, |
| "word_count": 713, |
| "chunking_strategy": "semantic" |
| }, |
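The gradient-based MPC described in the chunk above (optimize a control sequence through a differentiable forward model to maximize predicted reward) can be sketched with a linear stand-in model. The paper differentiates through a learned graph network; the `(A, B)` linear dynamics and the quadratic final-state cost below are assumptions chosen to keep the example exact and short.

```python
import numpy as np

# Sketch of gradient-based MPC: roll a differentiable forward model,
# backpropagate a final-state cost to the control sequence, and take
# gradient steps. The linear model x' = A x + B u is a stand-in for
# the paper's learned graph-network model.

def rollout(A, B, x0, us):
    """Roll the model forward under a control sequence."""
    xs = [x0]
    for u in us:
        xs.append(A @ xs[-1] + B @ u)
    return xs

def action_grads(A, B, xs, target):
    """Adjoint (reverse) pass for the cost L = ||x_T - target||^2."""
    p = 2.0 * (xs[-1] - target)          # dL/dx_T
    grads = []
    for _ in range(len(xs) - 1):
        grads.append(B.T @ p)            # dL/du_t at the current step
        p = A.T @ p                      # propagate the adjoint one step back
    return list(reversed(grads))

def plan(A, B, x0, target, horizon=20, iters=300, lr=0.01):
    """Gradient descent over the control sequence, as in MPC planning."""
    us = [np.zeros(B.shape[1]) for _ in range(horizon)]
    for _ in range(iters):
        xs = rollout(A, B, x0, us)
        gs = action_grads(A, B, xs, target)
        us = [u - lr * g for u, g in zip(us, gs)]
    return us
```

With a learned, nonlinear model the same structure applies, except the adjoint pass is produced by automatic differentiation rather than the hand-derived `A.T`/`B.T` products.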
| { |
| "chunk_id": "ed38ebdd-209a-4d15-8572-a7d18dff53c9", |
| "text": "Graph Networks as Learnable Physics Engines for Inference and Control 100 Trained on Single model trained on -1e3 SVG(1) target[%] 80 Swimmer6 Swimmer{3,4,5,6,8,9} reward -2e3 SVG(0)\nto steps -3e3 60 Average episode -4e3 progresscontrol 40\n0.0 0.2 0.4 0.6 0.8 1.0 DPG agent 700 Training episodes 1e4 Mujoco-based planner 20\nLearned model (random data) planner Maximumafter 0 (a) (b) Learned model (random+DPG data) planner Figure 12. Learning curves for Swimmer6 SVG agents. The GN-\n6 3 4 5 6 7 8 9 10 11 12 13 14 15 based agent (blue) asymptotes earlier, and at a higher performance,\nNumber of Swimmer links than the model-free agent (red).", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 17, |
| "total_chunks": 66, |
| "char_count": 641, |
| "word_count": 107, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "dd94d96f-c36f-427d-9dae-ea4e47d09293", |
| "text": "The lines represent median performance for 6 random seeds, with 25 and 75% confidence intervals.Figure 11. GN-based MPC performance (% distance to target after\n700 steps) for (a) model trained on Swimmer6 and (b) model\ntrained on Swimmers with 3-15 links (see Figure 6). In (a), GNbased MPC (blue point) is almost as good as the Mujoco-based\nplanner (black line) and trained DDPG (grey line) baselines. When the model and policy were trained simultaneously.8 Comthe GN-based MPC's model is trained on a mixture of random and pared to a model-free agent (SVG(0)), our GN-based SVG\nDDPG agent Swimmer6 trajectories (red point), its performance is\nagent (SVG(1)) achieved a higher level performance afas good as the strong baselines. In (b) the GN-based MPC (blue\nter fewer episodes (Figure 12). For GN-based agents with\npoint) (video: link-C.F.SN) is competitive with a Mujoco-based\nmore than one forward step (SVG(2-4)), however, the per-planner baseline (black) (video: link-C.F.SN(b)) for 6-10 links,\nbut is worse for 3-5 and 11-15 links. Note, the model was not formance was not significantly better, and in some cases\ntrained on the open points, 7 and 10-15 links, which correspond was worse (SVG(5+)).\nto zero-shot model generalization for control. Error bars indicate\nmean and standard deviation across 5 experimental runs. 8.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 18, |
| "total_chunks": 66, |
| "char_count": 1331, |
| "word_count": 210, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f8c9fd9c-6ed3-451f-aa90-78e3a884c205", |
| "text": "This work introduced a new class of learnable forward and\ninference models, based on \"graph networks\" (GN), whichacross multiple systems, we also found they could support\nimplement an object- and relation-centric inductive bias.MPC across multiple systems (in this video, a single model\nAcross a range of experiments we found that these modelsis used for MPC in Pendulum, Cartpole, Acrobot, Swimare surprisingly accurate, robust, and generalizable whenmer6 and Cheetah: link-C.F.MS). We also found GN-based\nused for prediction, system identification, and planning inMPC could support zero-shot generalization in the control\nchallenging, physical systems.setting, for a single GN model trained on Swimmers with\n3-6, 8-9 links, and tested on MPC on Swimmers with 7, While our GN-based models were most effective in systems\n10-15 links. Figure 11b shows that it performed almost as with common structure among bodies and joints (e.g., Swimwell as the Mujoco baseline for many of the Swimmers. mers), they were less successful when there was not much\nopportunity for sharing (e.g., Cheetah). Our approach also\ndoes not address a common problem for model-based planModel-predictive control with partial observations. ners that errors compound over long trajectory predictions. Because real-world control settings are often partially observable, we used the system identification GN model (see Some key future directions include using our approach for\nSections 3 and 5) for MPC under partial observations in control in real-world settings, supporting simulation-to-real\nPendulum, Cartpole, SwimmerN, Cheetah, and JACO. The transfer via pre-training models in simulation, extending our\nmodel was trained as in the forward prediction experiments, models to handle stochastic environments, and performing\nwith an ID phase that applied 20 random control inputs to system identification over the structure of the system as\nimplicitly infer the hidden properties. 
Our results show that well as the parameters.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 19, |
| "total_chunks": 66, |
| "char_count": 1997, |
| "word_count": 292, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a2a23b6d-204d-4de8-9f37-a9887e84d725", |
| "text": "Our approach may also be useful\nour GN-based forward model with a system identification within imagination-based planning frameworks (Hamrick\nmodule is able to control these systems (Cheetah video: et al., 2017; Pascanu et al., 2017), as well as integrated\nlink-C.I.Ch. All videos are in SM Table A.2). architectures with GN-like policies (Wang et al., 2018). This work takes a key step towards realizing the promise of\nmodel-based methods by exploiting compositional represenModel-based reinforcement learning. In our second aptations within a powerful statistical learning framework, and\nproach to model-based control, we jointly trained a GN\nopens new paths for robust, efficient, and general-purpose\nmodel and a policy function using SVG (Heess et al., 2015),\npatterns of reasoning and decision-making.\nwhere the model was used to backpropagate error gradients\nto the policy in order to optimize its parameters. Crucially, 8In preliminary experiments, we found little benefit of preour SVG agent does not use a pre-trained model, but rather training the model, though further exploration is warranted.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 20, |
| "total_chunks": 66, |
| "char_count": 1105, |
| "word_count": 164, |
| "chunking_strategy": "semantic" |
| }, |
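The SVG update described in the chunk above (backpropagating reward gradients through the model into the policy parameters) reduces, in a scalar caricature, to the following. The paper's agent uses a learned GN model, stochastic policies, and a value function; the linear model, linear policy, and quadratic reward here are stand-ins, and `svg_step` is a hypothetical name.

```python
# Scalar caricature of an SVG-style update: improve a linear policy
# u = theta * x by backpropagating the reward gradient through a
# known model x' = k * x + u with reward r = -x'^2. Only the
# "gradients flow through the model into the policy" mechanic is
# shown; the paper uses a learned GN model and stochastic policies.

def svg_step(theta, x, lr=0.1, k=0.9):
    u = theta * x                    # policy action
    x_next = k * x + u               # differentiable model step
    # dr/dtheta = dr/dx' * dx'/du * du/dtheta = (-2 x') * 1 * x
    grad_theta = -2.0 * x_next * x
    return theta + lr * grad_theta   # gradient ascent on reward
```

Iterating from `theta = 0` with `x = 1` drives `theta` toward `-k`, the policy that cancels the dynamics and maximizes the reward.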
| { |
| "chunk_id": "be7b7a88-b129-4d71-bc4e-1a8fc35936cc", |
| "text": "Graph Networks as Learnable Physics Engines for Inference and Control References Ehrhardt, S., Monszpart, A., Mitra, N. Learning a physical long-term predictor. arXiv preprint\nAmos, B., Dinh, L., Cabi, S., Rothrl, T., Muldal, A., Erez,\nT., Tassa, Y., de Freitas, N., and Denil, M. Learning\nawareness models. Fragkiadaki, K., Agrawal, P., Levine, S., and Malik, J. Learning visual predictive models of physics for playAtkeson, C. G. and Santamaria, J.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 21, |
| "total_chunks": 66, |
| "char_count": 450, |
| "word_count": 69, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "eddfe694-e2b8-4780-a5c7-ffef71698088", |
| "text": "A comparison of diing billiards. CoRR, abs/1511.07404, 2015.\nrect and model-based reinforcement learning. In Robotics\nand Automation, 1997. Proceedings., 1997 IEEE Interna- Gilmer, J., Schoenholz, S. F., Vinyals, O., and\ntional Conference on, volume 4, pp. 3557–3564. Neural message passing for quantum chem-\n1997. istry. In ICML, pp. 1263–1272, 2017. Battaglia, P., Pascanu, R., Lai, M., Rezende, D. Grzeszczuk, R., Terzopoulos, D., and Hinton, G. NeuroanInteraction networks for learning about objects, relations imator: Fast neural network emulation and control of\nand physics. In NIPS, pp. 4502–4510, 2016. physics-based models. In Proceedings of the 25th annual conference on Computer graphics and interactive\nBattaglia, P. B., and Tenenbaum, J. Simtechniques, pp. 9–20. ACM, 1998.\nulation as an engine of physical scene understanding.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 22, |
| "total_chunks": 66, |
| "char_count": 840, |
| "word_count": 120, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "391624f1-4b66-4cbe-bbae-1de0794390fe", |
| "text": "Proceedings of the National Academy of Sciences, 110 Gu, S., Lillicrap, T. P., Sutskever, I., and Levine, S. Con-\n(45):18327–18332, 2013. tinuous deep q-learning with model-based acceleration.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 23, |
| "total_chunks": 66, |
| "char_count": 192, |
| "word_count": 27, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d32da874-9957-4f75-9872-647594664dbd", |
| "text": "CoRR, abs/1603.00748, 2016. M., Bruna, J., LeCun, Y., Szlam, A., and Vandergheynst, P. Geometric deep learning: going beyond Hamrick, J. J., Pascanu, R., Vinyals, O.,\neuclidean data. IEEE Signal Processing Magazine, 34(4): Heess, N., and Battaglia, P. Metacontrol for adap-\n18–42, 2017. tive imagination-based optimization. arXiv preprint\nBruna, J., Zaremba, W., Szlam, A., and LeCun, Y. Spectral networks and locally connected networks on graphs. Heess, N., Wayne, G., Silver, D., Lillicrap, T., Erez, T.,\nstochastic value gradients. In NIPS, pp. 2944–2952, 2015. B., Ullman, T., Torralba, A., and Tenenbaum, Cho, K., Van Merri¨enboer, B., Bahdanau, D., and Bengio, Y. Houthooft, R., Chen, X., Duan, Y., Schulman, J., Turck,\nOn the properties of neural machine translation: Encoder- F. Curiosity-driven exploration in\n2014. CoRR, abs/1605.09674, 2016. The nature of explanation, volume 445.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 24, |
| "total_chunks": 66, |
| "char_count": 891, |
| "word_count": 130, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8986d86c-365f-499d-ad87-6419f48690a8", |
| "text": "Mental models in cognitive science. Cognitive science, 4(1):71–115, 1980. Dai, H., Dai, B., and Song, L. Discriminative embeddings Kingma, D. Auto-encoding variational\nof latent variable models for structured data. In ICLR, 2014.\npp. 2702–2711, 2016. Semi-supervised classificaDefferrard, M., Bresson, X., and Vandergheynst, P. Con- tion with graph convolutional networks. arXiv preprint\nspectral filtering.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 26, |
| "total_chunks": 66, |
| "char_count": 407, |
| "word_count": 53, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f3d1c7e4-3ec3-4439-bacb-84dedb1df639", |
| "text": "In NIPS, pp. 3844–3852, 2016. Levine, S. and Abbeel, P. Learning neural network poliDeisenroth, M. and Rasmussen, C. Pilco: A model-based cies with guided policy search under unknown dynamics.\nand data-efficient approach to policy search.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 27, |
| "total_chunks": 66, |
| "char_count": 238, |
| "word_count": 35, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8822c467-38e2-49f2-8862-419c5ffa7104", |
| "text": "In ICML 28, In Ghahramani, Z., Welling, M., Cortes, C., Lawrence,\npp. 465–472. D., and Weinberger, K. Q. (eds.), NIPS 27, pp. 1071–\n1079. Curran Associates, Inc., 2014. K., Maclaurin, D., Iparraguirre, J., Bombarell, R., Hirzel, T., Aspuru-Guzik, A., and Adams, R. Li, W. and Todorov, E. Iterative linear quadratic regulator\nConvolutional networks on graphs for learning molecular design for nonlinear biological movement systems. In NIPS, pp. 2224–2232, 2015. ICINCO (1), pp. 222–229, 2004.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 28, |
| "total_chunks": 66, |
| "char_count": 491, |
| "word_count": 74, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a9eaf14a-6be0-475e-bc60-61e938071a37", |
| "text": "Graph Networks as Learnable Physics Engines for Inference and Control Li, Y., Tarlow, D., Brockschmidt, M., and Zemel, R. Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., and\nGated graph sequence neural networks. arXiv preprint Monfardini, G. The graph neural network model. J., Pritzel, A., Heess, N., Erez, T., Schmidhuber, J. Curious model-building control systems. Tassa, Y., Silver, D., and Wierstra, D.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 29, |
| "total_chunks": 66, |
| "char_count": 415, |
| "word_count": 63, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "dd84b6a4-7d9b-427d-a136-d13ff6a1eb6c", |
| "text": "Continuous control In Proc. Neural Networks, pp. 1458–1463.\nwith deep reinforcement learning. Forward models for physio- Spelke, E. Developlogical motor control. Neural networks, 9(8):1265–1279, mental science, 10(1):89–96, 2007.\n1996. J., and Schmidhuber, J. Planning to\nNagabandi, A., Kahn, G., Fearing, R.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 30, |
| "total_chunks": 66, |
| "char_count": 308, |
| "word_count": 41, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4915aed0-95c7-4932-bd22-1276ba38c731", |
| "text": "S., and Levine, S. be surprised: Optimal bayesian exploration in dynamic\nNeural network dynamics for model-based deep rein- environments. In AGI, volume 6830 of Lecture Notes in\nforcement learning with model-free fine-tuning. arXiv Computer Science, pp. 41–51. Tassa, Y., Erez, T., and Smart, W. Receding horizon\nNiepert, M., Ahmed, M., and Kutzkov, K. Learning convo- differential dynamic programming. In NIPS, pp. 1465–\nlutional neural networks for graphs. In ICML, pp. 2014– 1472, 2008.\n2023, 2016. Tassa, Y., Mansard, N., and Todorov, E. Control-limited\nPascanu, R., Li, Y., Vinyals, O., Heess, N., Buesing, L., differential dynamic programming. In Robotics and AuRacani`ere, S., Reichert, D., Weber, T., Wierstra, D., and tomation (ICRA), 2014 IEEE International Conference\nBattaglia, P. Learning model-based planning from scratch. on, pp. 1168–1175. Tassa, Y., Doron, Y., Muldal, A., Erez, T., Li, Y., Casas, D. B., Andrychowicz, M., Zaremba, W., and Abbeel, d. L., Budden, D., Abdolmaleki, A., Merel, J., Lefrancq,\nP.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 31, |
| "total_chunks": 66, |
| "char_count": 1024, |
| "word_count": 153, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6cb27360-1495-423c-864a-bb62cf7c6a08", |
| "text": "Sim-to-real transfer of robotic control with dynamics A., et al. Deepmind control suite. arXiv preprint Rajeswaran, A., Ghotra, S., Levine, S., and Ravindran, B. Todorov, E., Erez, T., and Tassa, Y.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 32, |
| "total_chunks": 66, |
| "char_count": 198, |
| "word_count": 31, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "faf7560c-eee3-4e04-a0c0-e676f67d4610", |
| "text": "Mujoco: A physics\nEpopt: Learning robust neural network policies using engine for model-based control. In Intelligent Robots\nmodel ensembles. CoRR, abs/1610.01283, 2016. and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026–5033. Raposo, D., Santoro, A., Barrett, D., Pascanu, R., Lillicrap, T., and Battaglia, P. Discovering objects and their Wang, T., Liao, R., Ba, J., and Fidler, S. Nervenet: Learnrelations from entangled scene representations. arXiv ing structured policy with graph neural networks.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 33, |
| "total_chunks": 66, |
| "char_count": 523, |
| "word_count": 72, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3406e219-ae15-4abc-aa5b-bcc1884ab1cd", |
| "text": "Watters, N., Zoran, D., Weber, T., Battaglia, P., Pascanu, R.,Rezende, D. J., Mohamed, S., and Wierstra, D. Stochasand Tacchetti, A.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 34, |
| "total_chunks": 66, |
| "char_count": 132, |
| "word_count": 20, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d01a0444-3c0c-4bd4-8f6b-0ae9d3d40ddc", |
| "text": "Visual interaction networks: Learning tic backpropagation and approximate inference in deep\na physics simulator from video. In NIPS, pp. 4542–4550, generative models. In ICML 31, 2014.\n2017.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 35, |
| "total_chunks": 66, |
| "char_count": 190, |
| "word_count": 27, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d03f561a-701e-4471-8824-26911f880bbd", |
| "text": "Santoro, A., Raposo, D., Barrett, D. G., Malinowski, M.,\nYu, W., Liu, C. Preparing for the un- Pascanu, R., Battaglia, P., and Lillicrap, T. A simple\nknown: Learning a universal policy with online system neural network module for relational reasoning.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 36, |
| "total_chunks": 66, |
| "char_count": 251, |
| "word_count": 40, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "12dfafb8-d2d0-4579-a2f8-2c57cad33363", |
| "text": "In NIPS,\nidentification. CoRR, abs/1702.02453, 2017. pp. 4974–4983, 2017. Scarselli, F., Yong, S. L., Gori, M., Hagenbuchner, M., Tsoi,\nA. Graph neural networks for ranking web pages. In Web Intelligence, 2005. The 2005 IEEE/WIC/ACM International Conference on,\npp. 666–672. Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., and\nMonfardini, G. Computational capabilities of graph neural networks. IEEE Transactions on Neural Networks, 20\n(1):81–102, 2009a. Supplementary Material: Graph Networks as Learnable Physics Engines for\nInference and Control Alvaro Sanchez-Gonzalez 1 Nicolas Heess 1 Jost Tobias Springenberg 1 Josh Merel 1 Martin Riedmiller 1\nRaia Hadsell 1 Peter Battaglia 1", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 37, |
| "total_chunks": 66, |
| "char_count": 691, |
| "word_count": 99, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "30b2720e-8dce-4d39-9985-74cc03025ecc", |
| "text": "Summary of prediction and control videos Representative trajectory prediction videos. Each shows several rollouts from different initial states for a single model trained\non random control inputs. The labels encode the videos' contents: [Prediction/Control].[Fixed/Parameterized/System ID].[(System\nabbreviation)] Fixed Parametrized System ID", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 38, |
| "total_chunks": 66, |
| "char_count": 342, |
| "word_count": 40, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8ed7347e-b533-4398-a4e9-761f1487facd", |
| "text": "Pendulum link-P.F.Pe link-P.P.Pe link-P.I.Pe\nCartpole link-P.F.Ca link-P.P.Ca link-P.I.Ca\nAcrobot link-P.F.Ac - -\nSwimmer6 link-P.F.S6 - -\n(eval. DDPG) link-P.F.S6(D) - -\nSwimmerN link-P.F.SN link-P.P.S6 link-P.I.S6\n(zero-shot) link-P.F.SN(Z) - -\nCheetah link-P.F.Ch link-P.P.Ch link-P.I.Ch\nWalker2d link-P.F.Wa - -\nJACO link-P.F.JA link-P.P.JA link-P.I.JA\nMultiple systems link-P.F.MS link-P.P.MS -\n(with cheetah) link-P.F.MC - -\nReal JACO link-P.F.JR - - Representative control trajectory videos. Each shows several MPC trajectories from different initial states for a single trained\nmodel. The labels encode the videos' contents: [Prediction/Control].[Fixed/Parameterized/System ID].[(System abbreviation)] Fixed Parametrized System ID Pendulum (balance) link-C.F.Pe link-C.P.Pe link-C.I.Pe\nCartpole (balance) link-C.F.Ca link-C.P.Ca link-C.I.Ca\nAcrobot (swing up) link-C.F.Ac - -\nSwimmer6 (reach) link-C.F.S6 - -\nSwimmerN (reach) link-C.F.SN link-C.P.SN link-C.I.SN\n\" baseline link-C.F.SN(b) - -\nCheetah (move) link-C.F.Ch(m) link-C.P.Ch link-C.I.Ch\nCheetah (k rewards) link-C.F.Ch(k) - -\nWalker2d (k rewards) link-C.F.Wa(k) - -\nJACO (imitate pose) link-C.F.JA(o) link-C.P.JA(o) link-C.I.JA(o)\nJACO (imitate palm) link-C.F.JA(a) link-C.P.JA(a) link-C.I.JA(a)\nMultiple systems link-C.F.MS link-C.P.MS -", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 39, |
| "total_chunks": 66, |
| "char_count": 1305, |
| "word_count": 152, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "bc2921e1-f91a-4678-910d-0652cd93cc2d", |
| "text": "Graph Networks as Learnable Physics Engines for Inference and Control Description of the simulated environments Name Number Generalized Actions Random parametrizationa\n(Timestep) of bodies coordinates (relative range of variation,\n(inc. uniformly sampled)\nworld) Pendulum 2 Total: 1 1: rotation torque at axis Length (0.2-1)\n(20 ms) 1: angle of pendulum Mass (0.5-3) Cartpole 3 Total: 2 1: horizontal force to cart Mass of cart (0.2-2)\n(10 ms) 1: horizontal position of cart Length of pole (0.3-1)\n1: angle of pole Thickness (0.4-2.2) of pole Acrobot 3 Total: 2 1: rotation force between N/A\n(10 ms) 2: angle of each of the links the links\nangle of pole SwimmerN N+1 Total: N+2 N-1: rotation force be- Number of links (3 to 9 links)\n(20 ms) 2: 2-d position of head tween the links Individual lengths of links (0.3-2)\n1: angle of head Thickness (0.5-5)\nN-1: angle of rest of links Cheetah 8 Total: 9 6: rotation force at thighs, Base angles (-0.1 to 0.1 rad)\n(10 ms) 2: 2-d position of torso shins and feet Individual lengths of bodies (0.5-2\n1: angle of torso approx.)\n6: thighs, shins and feet an- Thickness (0.5-2)\ngles Walker2d 8 Total: 9 6: rotation at hips, knees N/A\n(2.5 ms) 2: 2-d position of torso and ankles\n1: angle of torso\n6: thighs, leg and feet angles Jaco 10 Total: 9 9: velocity target at each Individual body masses (0.5-1.5)\n(100 ms) 3: angles of coarse joints joint Individual motor gears (0.5-1.5).\n3: angles of fine joints\n3: angles of fingers aDensity of bodies is kept constant for any changes in size.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 40, |
| "total_chunks": 66, |
| "char_count": 1526, |
| "word_count": 267, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c4bab098-c203-47d9-8344-85890857c33f", |
| "text": "Unless otherwise indicated, we applied random control inputs to the system to generate the training data. The control\nsequences were randomly selected time steps from spline interpolations of randomly generated values (see SM Figure C.1). A video of the resulting random system trajectories is here: Video. value 0.5\nAction −0.50.0 −1.0\n0 20 40 60 80 100\nTimestep", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 41, |
| "total_chunks": 66, |
| "char_count": 363, |
| "word_count": 58, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "36cc5e0e-2c3a-4434-85f2-f1a1bcff5cf6", |
| "text": "Sample random sequences obtained from the same distribution than that used to generate random system data to train the\nmodels. Sample trajectory video: Video. Graph Networks as Learnable Physics Engines for Inference and Control", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 42, |
| "total_chunks": 66, |
| "char_count": 228, |
| "word_count": 34, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f759d48a-e129-445f-b17b-e9e9bd44addd", |
| "text": "For each of the individual fixed systems, we generated 10000 100-step sequences corresponding to about 106 supervised\ntraining examples. Additionally, we generated 1000 sequences for validation, and 1000 sequences for testing purposes. In the case of the parametrized environments, we generated 20000 100-step sequences corresponding to about 2 · 106\nsupervised training examples. Additionally, we generated 5000 sequences for validation, and 5000 sequences for testing\npurposes. Models trained on multiple environments made use of the corresponding datasets mixed within each batch in equal\nproportion. The real JACO data was obtained under human control during a stacking task. It consisted of 2000 (train:1800, valid:100,\ntest:100) 100-step (timestep 40 ms) trajectories.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 43, |
| "total_chunks": 66, |
| "char_count": 774, |
| "word_count": 109, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3d1e18a6-be18-4b5e-a540-8ccbf66423c7", |
| "text": "The instantaneous state of the system was represented in this case by\nproprioceptive information consisting of joint angles (cosine and sine) and joint velocities for each connected body in the\nJACO arm, replacing the 13 variables in the dynamic graph. As the Real JACO observations correspond to the generalized coordinates of the simulated JACO Mujoco model, we use the\nsimulated JACO to render the Real JACO trajectories throughout the paper. Implementation of the models Algorithms were implemented using TensorFlow and Sonnet. We used custom implementations of the graph networks\n(GNs) as described in the main text. Graph network architectures Standard sizes and output sizes for the GNs used are:", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 44, |
| "total_chunks": 66, |
| "char_count": 703, |
| "word_count": 109, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1addf881-9a06-46d1-a0f1-382dcddd306f", |
| "text": "• Edge MLP: 2 or 3 hidden layers. 256 to 512 hidden cells per layer. • Node and Global MLP: 2 hidden layers. 128 to 256 hidden cells per layer. • Updated edge, node and global size: 128 • (Recurrent models) Node, global and edge size for state graph: 20 • (Parameter inference) Node, global and edge size for abstract static graph: 10", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 45, |
| "total_chunks": 66, |
| "char_count": 334, |
| "word_count": 63, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "106c6c62-f654-438a-8980-2ae259b14046", |
| "text": "All internal MLPs used layer-wise ReLU activation functions, except for output layers. The two-layer GN core is wrapped by input and output normalization blocks. The input normalization performs linear\ntransformations to produce a zero-mean, unit-variance distributions for each of the global, node and edge features. It is\nworth noting that for node/edge features, the same transformation is applied to all nodes/edges in the graph, without having\nspecific normalizer parameters for different bodies/edges in the graph. This allows to reuse the same normalizer parameters\nregardless of the number and type of nodes/edges in the graph. This input normalization is also applied to the observed\ndynamic graph in the parameter inference network. Similarly, inverse normalization is applied to the output nodes of the forward model, to guarantee that the network only\nneeds to output nodes with zero-mean and unit-variance. No normalization is applied to the inferred static graph (from the system identification model), in the output of the parameter\ninference network, nor the input forward prediction network, as in this case the static graph is already represented in a latent\nfeature space. Graph Networks as Learnable Physics Engines for Inference and Control Algorithm D.1 Forward prediction algorithm. Input: trained GNs GN1, GN2 and normalizers Normin, Normout.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 46, |
| "total_chunks": 66, |
| "char_count": 1366, |
| "word_count": 204, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7fd67935-89e3-42a6-9ceb-a9c48770814f", |
| "text": "Input: dynamic state xt0 and actions applied xt0 to a system at the current timestep. Input: system parameters p\nBuild static graph Gs using p\nBuild input dynamic nodes Ndt0 using xt0\nBuild input dynamic edges Et0d using at0\nBuild input dynamic graph Gd using Ndt0 and Et0d\nBuild input graph Gi = concat(Gs, Gd)\nObtain normalized input graph Gni = Normin(Gi)\nObtain graph after the first GN: G ′ = GN1(Gni )\nObtain normalized predicted delta dynamic graph: G∗= GN2(concat(Gni , G′))\nObtain normalized predicted delta dynamic nodes: ∆Ndn = G∗.nodes\nObtain predicted delta dynamic nodes: ∆Nd = Norm−1out(∆Ndn )\nObtain next dynamic nodes Ndt0+1 by updating Ndt0 with ∆Nd\nExtract next dynamic state xt0+1 from Ndt0+1\nOutput: next system state xt0+1 Algorithm D.2 Forward prediction with System ID. Input: trained parameter inference recurrent GN GNp. Input: trained GNs and normalizers from Algorithm D.1.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 47, |
| "total_chunks": 66, |
| "char_count": 901, |
| "word_count": 145, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d6efdd8f-c679-4f1c-b416-7cbcba445fe9", |
| "text": "Input: dynamic state xt0 and actions applied xt0 to a parametrized system at the current timestep. Input: a 20-step sequence of observed dynamic states xseq and actions xseq for same instance of the system. Build dynamic graph sequence Gseqd using xseqi and aseqi\nObtain empty graph hidden state Gh.\nfor each graph Gtd in Gseqd do\nGo, Gh = GNp(Normin(Gtd), Gh),\nend for\nAssign GID = Go\nUse GID instead of Gs in Algorithm D.1 to obtain xt0+1 from xt0 and xt0\nOutput: next system state xt0+1 When training individual models for systems with translation invariance (Swimmer, Cheetah and Walker2d), we always\nre-centered the system around 0 before the prediction, and moved it back to its initial location after the prediction. This\nprocedure was not applied when multiple systems were trained together.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 48, |
| "total_chunks": 66, |
| "char_count": 799, |
| "word_count": 132, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4b07236c-c6c0-425b-a67d-abfb5a531523", |
| "text": "Prediction of dynamic state change Instead of using the one-step model to predict the absolute dynamic state, we used it to predict the change in dynamic state,\nwhich was then used to update the input dynamic state. For the position, linear velocity, and angular velocity, we updated\nthe input by simply adding their corresponding predicted changes. For orientation, where the output represents the rotation\nquaternion between the input orientation and the next orientation (forced to have unit norm), we computed the update using\nthe Hamilton product. Forward prediction algorithms Our forward model takes the system parameters, the system state and a set of actions, to produce the next system state as\nexplained in SM Algorithm D.1. ONE-STEP PREDICTION WITH SYSTEM ID Graph Networks as Learnable Physics Engines for Inference and Control", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 49, |
| "total_chunks": 66, |
| "char_count": 840, |
| "word_count": 131, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "01bea71a-782d-47f8-9476-a7c5a7471949", |
| "text": "Algorithm D.3 One step of the training algorithm\nBefore training: initialize weights of GNs GN1, GN2 and accumulators of normalizers Normin, Normout. Input: batch of dynamic states of the system {xt0} and actions applied {at0} at the current timestep\nInput: batch of dynamic states of the system at the next timestep {xt0+1}\nInput: batch of system parameters {pi}\nfor each example in batch do\nBuild static graph Gs using pi\nBuild input dynamic nodes Ndt0 using xt0\nBuild input dynamic edges Et0d using at0\nBuild output dynamic nodes Ndt0+1 using xt0+1\nAdd noise to input dynamic nodes Ndt0\nBuild input dynamic graph Gd using Ndt0 and Et0d\nBuild input graph Gi = concat(Gs, Gd)\nObtain target delta dynamic nodes ∆N ′d from Ndt0+1 and Ndt0\nUpdate Normin using Gi\nUpdate Normout using ∆Nd\nObtain normalized input graph Gni = Normin(Gi)\nObtain normalized target nodes: ∆Ndn′ = Normout(∆N ′d)\nObtain normalized predicted delta dynamic nodes: ∆Ndn = GN2(concat(Gni , GN1(Gni ))).nodes\nCalculate dynamics prediction loss between ∆Ndn and ∆Ndn′ .\nend for\nUpdate weights of GN1, GN2 using Adam optimizer on the total loss with gradient clipping. For the System ID foward predictions the model takes a system state and a set of actions for a specific instance of a\nparametrized system, together with a sequence of observed system states and actions for a for the same system instance. The\nobserved sequence is used to identify the system and then produce the next system state as described in Algorithm D.2. In the case of rollout predictions, the System ID is only performed once, on the provided observed sequence, using the same\ngraph for all of the one-step predictions required to generate the trajectory. We trained the one-step forward model in a supervised manner using algorithm D.3. 
Part of the training required finding\nmean and variance parameters for the input and output normalization, which we did online by accumulating information\n(count, sum and squared sum) about the distributions of the input edge/node/global features, and the distributions of the\nchange in the dynamic states of the nodes, and using that information to estimate the mean and standard deviation of each of\nthe features. Due to the fact that our representation of the instantaneous state of the bodies is compatible with configurations where the\njoint constraints are not satisfied, we need to train our model to always produced outputs within the manifold of configurations\nallowed by the joints. This was achieved by adding random normal noise (magnitude set as a hyper-parameter) to the nodes\nof the input dynamic graph during training. As a result, the model not only learns to make dynamic predictions, but to put\nback together systems that are slightly dislocated, which is key to achieve small rollout errors.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 50, |
| "total_chunks": 66, |
| "char_count": 2794, |
| "word_count": 457, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "91c2b40f-2178-46b0-a446-d727139c1301", |
| "text": "ABSTRACT PARAMETER INFERENCE The training of the parameter inference recurrent GN is performed as described in Algorithm D.4. The recurrent GN and\nthe dynamics GN are trained together end-to-end by sampling a random 20-step sequence for the former, and a random\nsupervised example for the latter from 100-step graph sequences, with a single loss based on the prediction error for the\nsupervised example. This separation between the sequence at the supervised sample, encourages the recurrent GN to truly\nextract abstract static properties that are independent from the specific 20-step trajectory, but useful for making dynamics\npredictions under any condition. Graph Networks as Learnable Physics Engines for Inference and Control Algorithm D.4 End-to-end training algorithm for System ID. Before training: initialize weights of parameter inference recurrent GN GNp, as well as weights from Algorithm D.3. Input: a batch of 100-step sequences with dynamic states {xseqi } and actions {xseqi }\nfor each sequence in batch do\nPick a random 20-step subsequence xsubseqi and asubseqi . Build dynamic graph sequence Gsubseqd using xsubseqi and asubseqi\nObtain empty graph hidden state Gh.\nfor each graph Gtd in Gsubseqd do\nGo, Gh = GNp(Normin(Gtd), Gh),\nend for\nAssign GID = Go\nPick a different random timestep t0 from {xseqi }, {xseqi }\nApply Algorithm D.3 to timestep t0 using final GID instead Gs to obtain the dynamics prediction loss.\nend for\nUpdate weights of GNp, GN1, GN2 using Adam optimizer on the total loss with gradient clipping. RECURRENT ONE-STEP PREDICTIONS The one-step prediction recurrent model, used for the Real JACO predictions, is trained from 21-step sequences using the\nteacher forcing method. The first 20 graphs in the sequence are used as input graphs, while the last 20 graphs in the sequence\nare used as target graphs. 
During training, the recurrent model is used to sequentially process the input graphs, producing at\neach step a predicted dynamic graph, which is stored, and a graph state, which is fed together with the next input graph in\nthe next iteration. After processing the entire sequence, the sequence of predicted dynamic graphs and the target graphs are\nused together to calculate the loss. We use a standard L2-norm between the normalized expected and predicted delta nodes, for the position, linear velocity, and\nangular velocity features.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 51, |
| "total_chunks": 66, |
| "char_count": 2380, |
| "word_count": 377, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6a007119-af43-4cff-9107-4c2d9bdfaf9e", |
| "text": "We do this for the normalized features to guarantee a balanced relative weighting between the\ndifferent features.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 52, |
| "total_chunks": 66, |
| "char_count": 113, |
| "word_count": 17, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "75d102ad-ddb4-4f06-a4d3-a788f574e6fe", |
| "text": "In the case of the orientation, we cannot directly calculate the L2-norm between the predicted rotation\nquaternion qp to the expected rotation quaternion qe, as a quaternion q and −q represent the same orientation. Instead, we\nminimize the angle distance between qp and qe by minimizing the loss 1 −cos2 (qe· qp) after. Models were trained with a batch size of 200 graphs/graph sequences, using an Adam optimizer on a single GPU. Starting\nlearning rates were tuned at 1−4. We used two different exponential decay with factor of 0.975 updated every 50000 (fast\ntraining) or 200000 (slow training) steps. We trained our models using early stopping or asymptotic convergence based the rollout error on 20-step sequences from the\nvalidation set. Simple environments (such as individual fixed environments) would typically train using the fast training\nconfiguration for a period between less than a day to a few days, depending on the size of the environment and the size of the\nnetwork. Using slow training in these cases only yields a marginal improvement. On the other hand, more complex models\nsuch as those learning multiple environments and multiple parametrized environments benefited from the slow training to\nachieve optimal behavior for periods of between 1-3 weeks. MLP baseline architectures For the MLP baselines, we used 5 different models (ReLU activation) spanning a large capacity range: • 3 hidden layers, 128 hidden cells per layer • 3 hidden layers, 512 hidden cells per layer • 9 hidden layers, 128 hidden cells per layer Graph Networks as Learnable Physics Engines for Inference and Control", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 53, |
| "total_chunks": 66, |
| "char_count": 1608, |
| "word_count": 259, |
| "chunking_strategy": "semantic" |
| }, |
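The chunk above describes an orientation loss of the form 1 − cos^2(qe · qp), chosen because a quaternion q and −q encode the same rotation. A minimal NumPy sketch (illustrative only, not the authors' code; it assumes quaternions are plain 4-vectors, normalized defensively):

```python
import numpy as np

def quaternion_orientation_loss(q_pred, q_true):
    """Orientation loss invariant to the q / -q ambiguity.

    A plain L2 distance between quaternions is unsuitable because q and -q
    represent the same rotation; minimizing 1 - cos^2(<q_true, q_pred>)
    instead drives the two unit quaternions to be parallel or anti-parallel.
    """
    q_pred = q_pred / np.linalg.norm(q_pred, axis=-1, keepdims=True)
    q_true = q_true / np.linalg.norm(q_true, axis=-1, keepdims=True)
    dot = np.sum(q_pred * q_true, axis=-1)
    return 1.0 - dot ** 2
```

The loss is 0 for both q and −q against the same target, and reaches its maximum of 1 for a 180-degree rotation difference.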
| { |
| "chunk_id": "3673ea83-66ce-40ba-9996-b30707d46ef0", |
| "text": "Algorithm F.1 MPC algorithm
Input: initial system state x0
Input: randomly initialized sequence of actions {at}
Input: pretrained dynamics model M such that xt+1 = M(xt, at)
Input: trajectory cost function C such that c = C({xt}, {at})
for a number of iterations do
  xr,0 = x0
  for t in range(0, horizon) do
    xr,t+1 = M(xr,t, at)
  end for
  Calculate trajectory cost c = C({xr,t}, {at})
  Calculate gradients {ga,t} = ∂c/∂{at}
  Apply gradient-based update to {at}
end for
Output: optimized action sequence {at}
• 9 hidden layers, 512 hidden cells per layer • 5 hidden layers, 256 hidden cells per layer The corresponding MLP replaces the 2-layer GN core, with additional layers to flatten the input graphs into feature batches, and to reconstruct the graphs at the output. Both normalization and graph update layers are still applied at graph level, in the same way as for the GN-based model. Each of the models was trained four times, using initial learning rates of 10^-3 and 10^-4 and learning rate decays every 50000 and 200000 steps. The model performing best on validation rollouts for each environment, out of the 20 hyperparameter combinations, was chosen as the MLP baseline.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 54, |
| "total_chunks": 66, |
| "char_count": 1160, |
| "word_count": 195, |
| "chunking_strategy": "semantic" |
| }, |
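Algorithm F.1 above can be sketched compactly in NumPy. The paper differentiates through the learned (differentiable) model; to keep this sketch self-contained and dependency-free, finite differences stand in for autodiff. `model` and `cost` are hypothetical stand-ins for the learned dynamics M and trajectory cost C:

```python
import numpy as np

def mpc_optimize(x0, actions, model, cost, iters=100, lr=0.1, eps=1e-4):
    """Gradient-based trajectory optimization in the spirit of Algorithm F.1:
    roll the model forward over the horizon, evaluate the trajectory cost,
    and update the action sequence along the (here finite-difference)
    gradient of the cost with respect to the actions."""
    actions = np.array(actions, dtype=float)

    def trajectory_cost(acts):
        x, xs = x0, []
        for a in acts:                      # unroll the dynamics model
            x = model(x, a)
            xs.append(x)
        return cost(np.array(xs), acts)

    for _ in range(iters):
        base = trajectory_cost(actions)
        grad = np.zeros_like(actions)
        for i in np.ndindex(*actions.shape):    # finite-difference gradient
            pert = actions.copy()
            pert[i] += eps
            grad[i] = (trajectory_cost(pert) - base) / eps
        actions = actions - lr * grad           # gradient-based update
    return actions
```

With a trivial integrator model x' = x + a and a quadratic state cost, the optimizer recovers the obvious solution: cancel the initial state with the first action, then do nothing.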
| { |
| "chunk_id": "1b6ff11b-7228-4ea4-9b6b-fbb861e4c5c1", |
| "text": "Model-based planning algorithms MPC PLANNER WITH LEARNED MODELS We implemented MPC using our learned models as explained in SM Algorithm F.1. We applied the algorithm in a receding horizon manner by iteratively planning for a fixed horizon (see SM Table F.2), applying the first action of the sequence, and increasing the horizon by one step, reusing the shifted optimal trajectory computed in the previous iteration. We typically performed between 3 and 51 optimization iterations N from each initial state, with additional N · horizon iterations at the very first initial state, to warm up the fully-random initial action sequence. BASELINE MUJOCO-BASED PLANNER As a baseline planning approach we used the iterative Linear-Quadratic-Gaussian (iLQG) trajectory optimization approach proposed in (Tassa et al., 2014). This method alternates between forward passes (rollouts), which integrate the dynamics forward for the current control sequence, and backward passes, which consist of perturbations to the control sequence to improve upon the recursively computed objective function. Note that in the backward pass, each local perturbation can be formulated as an optimization problem, and linear inequality constraints ensure that the resulting control trajectory does not require controls outside of the range that can be feasibly generated by the corresponding degrees of freedom in the MuJoCo model. The overall objective optimized corresponds to the total cost over a finite horizon T:
J(x0, U) = Σ_{t=0}^{T−1} ℓ(xt, ut) + ℓ(xT)   (1)", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 55, |
| "total_chunks": 66, |
| "char_count": 1533, |
| "word_count": 233, |
| "chunking_strategy": "semantic" |
| }, |
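The receding-horizon loop described above (plan over a fixed window, apply only the first action, then shift the optimized sequence to warm-start the next plan) can be sketched as follows. `env_step` and `plan` are hypothetical stand-ins for the real environment transition and the trajectory optimizer:

```python
import numpy as np

def receding_horizon_control(x0, env_step, plan, horizon, steps, action_dim):
    """Receding-horizon MPC: re-plan at every step, execute only the first
    action, and warm-start the next plan by shifting the previous optimal
    action sequence forward by one step."""
    actions = np.zeros((horizon, action_dim))
    x, executed = x0, []
    for _ in range(steps):
        actions = plan(x, actions)         # re-optimize the whole window
        x = env_step(x, actions[0])        # apply only the first action
        executed.append(actions[0].copy())
        # shift the sequence; repeat the last action to fill the new slot
        actions = np.vstack([actions[1:], actions[-1:]])
    return np.array(executed), x
```

With a trivial integrator environment and a toy "deadbeat" planner that sets the first action to cancel the current state, the controller drives the state to zero in one step and holds it there.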
| { |
| "chunk_id": "672be895-7dda-4bb1-8c62-d0969e3bbeb8", |
| "text": "where x0 is the initial state, ut is the control signal (i.e. action) taken at timestep t, U is the trajectory of controls, and ℓ(·) is the cost function. We assume the dynamics are deterministic transitions xt+1 = f(xt, ut). While this iLQG planner does not work optimally when the dynamics involve complex contacts, for relatively smooth dynamics as found in the swimmer, differential dynamic programming (DDP) style approaches work well (Tassa et al., 2008).", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 56, |
| "total_chunks": 66, |
| "char_count": 528, |
| "word_count": 84, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ec09d05a-6f63-48d7-b04f-cc9148ccaf96", |
| "text": "Relevant cost functions are presented in SM Section F.2. Planning configuration: for each environment we list the task, the planning horizon, and the reward to maximize (summed over all timesteps).", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 57, |
| "total_chunks": 66, |
| "char_count": 152, |
| "word_count": 22, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5a04b3ff-21c6-4511-9d1d-f819ffd41fc8", |
| "text": "Pendulum, Balance (horizon 50, 100): Negative angle between the quaternion of the pendulum and the target quaternion corresponding to the balanced position (0 when balanced at the top, < 0 otherwise).
Cartpole, Balance (horizon 50, 100): Same as Pendulum-Balance, calculated for the pole.
Acrobot, Swing up (horizon 100): Same as Pendulum-Balance, summed for both acrobot links.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 58, |
| "total_chunks": 66, |
| "char_count": 341, |
| "word_count": 53, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "483738eb-6a51-4e25-bee6-b8ea584e3ea5", |
| "text": "Swimmer, Move towards target (horizon 100): Projection of the displacement vector of the Swimmer head from the previous timestep onto the target direction. The target direction is calculated as the vector joining the head of the swimmer at the first planning timestep with the target location. The reward is shaped (0.01 contribution) with the negative squared projection onto the direction perpendicular to the target direction.
Cheetah, Move forward (horizon 20): Horizontal component of the absolute velocity of the torso. Vertical position (20): Vertical component of the absolute position of the torso. Squared vertical speed (20): Squared vertical component of the absolute velocity of the torso. Squared angular speed (20): Squared angular velocity of the torso.
Walker2d, Move forward (20): Horizontal component of the absolute velocity of the torso. Vertical position (20): Vertical component of the absolute position of the torso. Inverse verticality (20): Same as Pendulum-Balance, summed for torso, thighs and legs. Feet to head height (20): Summed squared vertical distance between the position of each of the feet and the height of Walker2d.
Jaco, Imitate Palm Pose (20): Negative dynamic-state loss (as described in Section D.7.4) between the position-and-orientation of the body representing the JACO palm and the target position-and-orientation. Imitate Full Pose (20): Same as Jaco-Imitate Palm Pose but summed across all the bodies forming JACO (see SM Section D.7.4).", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 59, |
| "total_chunks": 66, |
| "char_count": 1414, |
| "word_count": 218, |
| "chunking_strategy": "semantic" |
| }, |
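The Swimmer "move towards target" reward described above (projection of the head's per-step displacement onto the start-to-target direction, shaped by −0.01 times the squared perpendicular projection) can be sketched in 2-D. All names are illustrative stand-ins, not the authors' code:

```python
import numpy as np

def swimmer_move_reward(head_prev, head_now, head_start, target, shaping=0.01):
    """Per-step reward: displacement projected onto the target direction
    (unit vector from the head at the first planning timestep to the
    target), minus `shaping` times the squared perpendicular projection."""
    d = target - head_start
    d = d / np.linalg.norm(d)
    perp = np.array([-d[1], d[0]])       # 2-D perpendicular direction
    disp = head_now - head_prev          # displacement this timestep
    return float(disp @ d - shaping * (disp @ perp) ** 2)
```

Moving straight toward the target yields the full displacement as reward; purely sideways motion is mildly penalized.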
| { |
| "chunk_id": "23391cd7-0335-4bbf-b8e4-77288a4d8a5a", |
| "text": "Reinforcement learning agents Our RL experiments use three base algorithms for continuous control: DDPG (Lillicrap et al., 2016), SVG(0) and\nSVG(N) (Heess et al., 2015). All of these algorithms find a policy π that selects an action a in a given state x by maximizing\nthe expected discounted reward,", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 60, |
| "total_chunks": 66, |
| "char_count": 299, |
| "word_count": 49, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4bd64bd9-18e8-475e-95de-fa5b1e634753", |
| "text": "Q(x, a) = E[ Σ_t γ^t r(xt, at) ],   (2)
where r(x, a) is the per-step reward and γ denotes the discount factor. Learning in all algorithms we consider occurs off-policy. That is, we continuously generate experience via the current best policy π, storing all experience (sequences of states, actions and rewards) into a replay buffer B, and minimize a loss defined on samples from B via stochastic gradient descent. The DDPG algorithm (Lillicrap et al., 2016) learns a deterministic control policy π = µθ(s) with parameters θ and a corresponding action-value function Qµφ(s, a) with parameters φ. Both of these mappings are parameterized via neural networks in our experiments. Learning proceeds via gradient descent on two objectives. The objective for learning the Q-function is to minimize the one-step Bellman error using samples from the replay buffer; that is, we seek arg minφ L(φ) by following the gradient
∇φ L(φ) = E_{(xt, at, xt+1, rt) ∈ B} [ ∇φ (Qµφ(xt, at) − y)² ],   (3)
with y = rt + γ Qµφ′(xt+1, µθ′(xt+1)), where φ′ and θ′ denote the parameters of target Q-value and policy networks that are periodically copied from the current parameters; this is common practice in RL to stabilize training (we update the target networks every 1000 gradient steps). The policy is learned by searching for an action that obtains maximum value, as judged by the learned Q-function. That is, we find arg minθ L(θ) by following the deterministic policy gradient (Lillicrap et al., 2016),
∇θ L_DPG(θ) = E_{xt ∈ B} [ −∇θ Qµφ(xt, µθ(xt)) ].   (4)
For our experiments with the family of Stochastic Value Gradient (SVG) algorithms (Heess et al., 2015) we considered two variants: a model-free baseline, SVG(0), that optimizes a stochastic policy based on a learned Q-function, as well as a model-based version, SVG(N) (using our Graph Net model), that unrolls the system dynamics for N steps.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 61, |
| "total_chunks": 66, |
| "char_count": 1956, |
| "word_count": 319, |
| "chunking_strategy": "semantic" |
| }, |
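The Bellman target y = rt + γ Qµφ′(xt+1, µθ′(xt+1)) from Equation (3) is easy to state in code. A minimal NumPy sketch, with the frozen target networks `q_target` and `mu_target` as hypothetical callables and `batch` a dict of arrays sampled from the replay buffer:

```python
import numpy as np

def ddpg_critic_targets(batch, q_target, mu_target, gamma=0.99):
    """One-step Bellman targets y = r + gamma * Q'(x', mu'(x')) as in
    Eq. (3), using frozen target networks (periodically copied from the
    current parameters to stabilize training)."""
    x_next, r = batch["x_next"], batch["r"]
    a_next = mu_target(x_next)                 # target policy's action
    return r + gamma * q_target(x_next, a_next)
```

The critic loss then regresses Qµφ(xt, at) toward these targets, holding y fixed (no gradient flows into the target networks).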
| { |
| "chunk_id": "77ddd47c-0ea2-4c33-80cb-21082bdfbb7f", |
| "text": "SVG(0) In the model-free variant, learning proceeds similarly to the DDPG algorithm. We learn both a parametric Q-value estimator and a (now stochastic) policy πθ(a|x) from which actions can be sampled. In our implementation, learning of the Q-function is performed by following the gradient from Equation (3), with µ(x) replaced by samples a ∼ πθ(a|x). For the policy learning step we can learn via a stochastic analogue of the deterministic policy gradient from Equation (4), the so-called stochastic value gradient, which reads
∇θ L_SVG(θ) = −∇θ E_{xt ∈ B, a ∼ πθ(a|xt)} [ Qπθ(xt, a) ].   (5)
For a Gaussian policy (as used in this paper) the gradient of this expectation can be calculated via the reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014). SVG(N) For the model-based version we used a variation of SVG(N) that employs an action-value function instead of the value function estimator used in the original paper. This allowed us to directly compare the performance of an SVG(0) agent, which is model free, with SVG(1), which calculates policy gradients using a one-step model-based horizon. In particular, similar to Equation (5), we obtain the model-based policy gradient as
∇θ L_SVG(N)(θ) = −∇θ E_{xt ∈ B, at ∼ πθ(a|xt), at+1 ∼ πθ(a|xt+1)} [ rt(xt, at) + γ Qπθ(xt+1, at+1) | xt+1 = g(xt, at) ],   (6)
where g denotes the dynamics as predicted by the GN, and the gradient can, again, be computed via reparameterization (we refer to Heess et al. (2015) for a detailed discussion). We experimented with SVG(1) on the swimmer domain with six links (Swimmer 6). Since in this case the goal for the GN is to predict environment observations (as opposed to the full state for each body), we constructed a graph from the observations and actions obtained from the environment. SM Figure H.3 describes the observations and actions and shows how they were transformed into a graph.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 62, |
| "total_chunks": 66, |
| "char_count": 1883, |
| "word_count": 311, |
| "chunking_strategy": "semantic" |
| }, |
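The reparameterization trick mentioned for Equations (5) and (6) rewrites a Gaussian sample as a deterministic function of the parameters plus external noise, so gradients can flow through the policy parameters. A minimal sketch (illustrative only):

```python
import numpy as np

def reparameterized_action(mu, log_std, rng):
    """Reparameterized Gaussian sample a = mu + exp(log_std) * eps,
    eps ~ N(0, I). The noise eps is independent of the parameters, so
    d a / d mu and d a / d log_std are well-defined pathwise gradients."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_std) * eps, eps
```

In an autodiff framework, differentiating Q(x, a) through this `a` with respect to (mu, log_std) yields exactly the stochastic value gradient of Equation (5).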
| { |
| "chunk_id": "2acfa1b2-392e-49dd-b971-c72bf93f3a1a", |
| "text": "Mujoco variables included in the graph conversion We retrieved the absolute position, orientation, and linear and angular velocities for each body: • Nodes: (for each body)
Absolute body position (3 vars): mjData.xpos
Absolute body quaternion orientation (4 vars): mjData.xquat
Absolute linear and angular velocity (6 vars): mj_objectVelocity(mjOBJ_XBODY, flg_local=False) • Edges: (for each joint) Magnitude of action at joint: mjData.ctrl (0 if not applicable). We performed an exhaustive selection of global, body, and joint static properties from mjModel: • Global: mjModel.opt.{timestep, gravity, wind, magnetic, density, viscosity, impratio, o_margin, o_solref, o_solimp, collision type (one-hot), enableflags (bit array), disableflags (bit array)}. • Nodes: (for each body) mjModel.body_{mass, pos, quat, inertia, ipos, iquat}. • Edges: (for each joint)
Direction of edge (1: parent-to-child, -1: child-to-parent). Motorized flag (1 if motorized, 0 otherwise). Joint properties: mjModel.jnt_{type (one-hot), axis, pos, solimp, solref, stiffness, limited, range, margin}. Actuator properties: mjModel.actuator_{biastype (one-hot), biasprm, cranklength, ctrllimited, ctrlrange, dyntype (one-hot), dynprm, forcelimited, forcerange, gaintype (one-hot), gainprm, gear, invweight0, length0, lengthrange}. Most of these properties are constant across all the environments used; however, they are kept for completeness.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 63, |
| "total_chunks": 66, |
| "char_count": 1496, |
| "word_count": 197, |
| "chunking_strategy": "semantic" |
| }, |
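The state-to-graph conversion above (one node per body with 13 dynamic features, one directed edge per joint carrying the action magnitude) can be sketched as follows. The inputs mirror mjData.xpos, mjData.xquat, mj_objectVelocity and mjData.ctrl, but are plain arrays here; the dict layout is an illustrative choice, not the authors' exact format:

```python
import numpy as np

def mujoco_graph(xpos, xquat, xvel, joints, ctrl):
    """Build a graph dict from per-body MuJoCo-style state.

    xpos: (n_bodies, 3) absolute positions; xquat: (n_bodies, 4)
    orientation quaternions; xvel: (n_bodies, 6) linear + angular
    velocities; joints: list of (parent, child) body indices;
    ctrl: per-joint action magnitudes."""
    nodes = np.concatenate([xpos, xquat, xvel], axis=1)   # (n_bodies, 13)
    senders = np.array([parent for parent, child in joints])
    receivers = np.array([child for parent, child in joints])
    edges = np.asarray(ctrl, dtype=float).reshape(-1, 1)  # (n_joints, 1)
    return {"nodes": nodes, "edges": edges,
            "senders": senders, "receivers": receivers}
```

Static properties (masses, joint parameters, etc.) would be concatenated onto these node and edge features in the same way.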
| { |
| "chunk_id": "192682a5-6339-4e07-9cfe-d2f621566a41", |
| "text": "While we do not include geom properties such as size, density or shape, this information should be partially encoded in the inertia tensor together with the mass. Supplementary figures [Figure: Model trained on Swimmer6 trajectories under random control, evaluated on a trajectory generated by a DDPG agent. Trajectories are also available in video [link-P.F.S6(D)]. (Left) Key frames comparing the ground truth and predicted sequences within a 100-step trajectory (initial state; after 25, 50, 75 and 100 control steps). (Right) Full state-sequence prediction for the third link of the Swimmer, consisting of Cartesian position (3 vars), quaternion orientation (4 vars), Cartesian linear velocity (3 vars) and Cartesian angular velocity (3 vars); the full prediction contains 13 such variables for each of the links, that is, 78 variables.] [Figure: Relative 20-step rollout error for architecture ablations on Swimmer6: 2 GNs; IN (1 GN); 2 GNs without global, node, or edge updates; each with and without a graph skip connection.]", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 64, |
| "total_chunks": 66, |
| "char_count": 1275, |
| "word_count": 204, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "701d1c90-2d46-4a4a-bf51-93faab79d1ac", |
| "text": "Ablation study of the architecture, using the rollout error over 20-step test sequences in Swimmer6 to evaluate relative performance. The performance of the architecture used in this work (a sequence of two GNs, blue) is compared to: an Interaction Network (IN) (Battaglia et al., 2016), which is equivalent in this case to a single GN (grey), and a sequence of GNs where the first GN is not allowed to update either the global (red), the nodes (purple) or the edges (green) of the output graph. Results are shown both for a purely sequential connection of the GNs, and for a model with a graph skip connection, where the output graph of the first GN is concatenated to the input graph before feeding it into the second GN. The results show that the performance of the double GN is far superior to that of the equivalent IN. They also show that the global update performed by the GN is necessary for the model to perform well. We hypothesize this is due to the long-range dependencies that exist within the swimmer's graph, and the ability of the global update to quickly propagate such dependencies across the entire graph. Similar results may have been obtained without global updates by using a deeper stack of GNs to allow information to flow across the entire graph. Each model was trained from three different seeds. The figure depicts the mean and the standard deviation of the asymptotic performance of the three seeds. [Figure: Arrangement of the Swimmer 6 observation as a graph. Nodes: World (W), Head (L0), and Links L1..LN, each with a one-hot node-type feature; the head and each link also carry their velocities (vx, vy, ω). Edges: a Head-Target edge carrying the vector distance (x_L0-T, y_L0-T), and one edge per joint J1..JN carrying the joint angle θ^JN, the action f^JN, and a one-hot edge-type feature.]", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 65, |
| "total_chunks": 66, |
| "char_count": 1827, |
| "word_count": 360, |
| "chunking_strategy": "semantic" |
| }, |
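The "graph skip connection" from the ablation above concatenates the first GN's output graph with the input graph, feature-wise, before the second GN. A minimal sketch, under the simplifying assumption that graphs are dicts of feature arrays sharing the same structure (nodes, edges and connectivity unchanged):

```python
import numpy as np

def graph_skip_connection(g_in, g_mid):
    """Concatenate the input graph's features with the first GN's output
    features (globals, per-node, per-edge) along the feature axis, so the
    second GN sees both the raw input and the intermediate representation."""
    return {k: np.concatenate([g_in[k], g_mid[k]], axis=-1)
            for k in ("globals", "nodes", "edges")}
```

This is the graph analogue of a residual-style skip: the second GN can recover any input feature directly rather than relying on the first GN to pass it through.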
| { |
| "chunk_id": "1e302672-16b6-40e7-a48a-00203307737b", |
| "text": "Arrangement as a graph of the default 25-feature observation and 5 actions provided in the Swimmer 6 task from the DeepMind Control Suite (Tassa et al., 2018). The observation consists of: (to target) the distance between the head and the target projected onto the axes of the head (x_L0-T, y_L0-T); (joints) the angle of each joint JN between adjacent swimmer links LN-1 and LN (θ^JN); and (body velocities) the linear and angular velocity of each link LN projected onto its own axes (vx^LN, vy^LN, ω^LN). The actions consist of the forces applied to each of the joints (f^JN) connecting the links. Because our graphs are directed, all of the edges were duplicated, with an additional -1/1 feature indicating the direction.", |
| "paper_id": "1806.01242", |
| "title": "Graph networks as learnable physics engines for inference and control", |
| "authors": [ |
| "Alvaro Sanchez-Gonzalez", |
| "Nicolas Heess", |
| "Jost Tobias Springenberg", |
| "Josh Merel", |
| "Martin Riedmiller", |
| "Raia Hadsell", |
| "Peter Battaglia" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01242v1", |
| "chunk_index": 66, |
| "total_chunks": 66, |
| "char_count": 716, |
| "word_count": 124, |
| "chunking_strategy": "semantic" |
| } |
| ] |