[ { "chunk_id": "ba354a5f-8a5a-41b9-9c8f-bd1c6269ee3c", "text": "Relational inductive bias for physical construction in humans and machines Hamrick∗,1 (jhamrick@google.com), Kelsey R. Allen∗,2 (krallen@mit.edu),\nVictor Bapst1 (vbapst@google.com), Tina Zhu1 (tinazhu@google.com),\nKevin R. McKee1 (kevinrmckee@google.com), Joshua B. Tenenbaum2 (jbt@mit.edu),\nPeter W. Battaglia1 (peterbattaglia@google.com)\n1DeepMind; London, UK", "paper_id": "1806.01203", "title": "Relational inductive bias for physical construction in humans and machines", "authors": [ "Jessica B. Hamrick", "Kelsey R. Allen", "Victor Bapst", "Tina Zhu", "Kevin R. McKee", "Joshua B. Tenenbaum", "Peter W. Battaglia" ], "published_date": "2018-06-04", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.01203v1", "chunk_index": 0, "total_chunks": 29, "char_count": 361, "word_count": 37, "chunking_strategy": "semantic" }, { "chunk_id": "46866d2f-b04b-4cd6-99c4-59b26ae38318", "text": "translational invariance—one we might call a \"spatial inducAbstract tive bias\" because it builds in specific assumptions about the\nspatial structure of the world. Similarly, a relational inducWhile current deep learning systems excel at tasks such as tive bias builds in specific assumptions about the relational\nobject classification, language processing, and gameplay, few\ncan construct or modify a complex system such as a tower of structure of the world.2018 blocks. We hypothesize that what these systems lack is a \"relational inductive bias\": a capacity for reasoning about inter- While logical and probabilistic models naturally contain\nobject relations and making choices over a structured descrip- strong relational inductive biases as a result of propositional\ntion of a scene. To test this hypothesis, we focus on a task thatJun and/or causal representations, current state-of-the-art deep re- involves gluing pairs of blocks together to stabilize a tower,\n4 and quantify how well humans perform. 
We then introduce inforcement learning (deep RL) systems rarely use such exa deep reinforcement learning agent which uses object- and plicit notions and, as a result, often struggle when faced with\nrelation-centric scene and policy representations and apply it\nstructured, combinatorial problems.", "paper_id": "1806.01203", "title": "Relational inductive bias for physical construction in humans and machines", "authors": [ "Jessica B. Hamrick", "Kelsey R. Allen", "Victor Bapst", "Tina Zhu", "Kevin R. McKee", "Joshua B. Tenenbaum", "Peter W. Battaglia" ], "published_date": "2018-06-04", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.01203v1", "chunk_index": 1, "total_chunks": 29, "char_count": 1304, "word_count": 192, "chunking_strategy": "semantic" }, { "chunk_id": "805c4e5f-8ea4-4198-93c5-68738ce25c09", "text": "Consider the \"gluing to the task. Our results show that these structured representations allow the agent to outperform both humans and more task\" in Figure 1, which requires gluing pairs of blocks tona¨ıve approaches, suggesting that relational inductive bias is gether to cause an otherwise unstable tower to be stable unan important component in solving structured reasoning problems and for building more intelligent, flexible machines. der gravity. Though seemingly simple, this task is not trivial. It requires (1) reasoning about variable numbers and config-[cs.LG] Keywords: physical construction; reinforcement learning;\ndeep learning; relational reasoning; object-based reasoning urations of objects; (2) choosing from variably sized action\nspaces (depending on which blocks are in contact); and (3)\nselecting where to apply glue, from a combinatorial number Introduction\nof possibilities. 
Although this task is fundamentally about physical reasoning, we will show that the most important type of inductive bias for solving it is relational, not physical: the physical knowledge can be learned, but relational knowledge is much more difficult to come by.

Human physical reasoning—and cognition more broadly—is rooted in a rich system of knowledge about objects and relations (Spelke & Kinzler, 2007) which can be composed to support powerful forms of combinatorial generalization. Analogous to von Humboldt's characterization of the productivity of language as making "infinite use of finite means", objects and relations are the building blocks which help explain how our everyday scene understanding can operate over infinite scenarios. Similarly, people interact with everyday scenes by leveraging these same representations. It is this capacity for composing objects and parts under relational constraints which gives rise to our most remarkable achievements, from the pyramids to space stations.

One of the fundamental aims of artificial intelligence (AI) is to be able to interact with the world as robustly and flexibly as people do. We hypothesize that this flexibility is, in part, afforded by what we call relational inductive bias. An inductive bias more generally is the set of assumptions of a learning algorithm that leads it to choose one hypothesis over another independent of the observed data. Such assumptions may be encoded in the prior of a Bayesian model (Griffiths et al., 2010), or instantiated via architectural assumptions in a neural network. For example, the weight-sharing architecture of a convolutional neural network induces an inductive bias of translational invariance.

We instantiate a relational inductive bias in a deep RL agent via a "graph network", a neural network for relational reasoning whose relatives (Scarselli et al., 2009) have proven effective in theoretical computer science (Dai et al., 2017), quantum chemistry (Gilmer et al., 2017), and robotic control (Wang et al., 2016; Chang et al., 2017). Our approach contrasts with standard deep learning approaches to physical reasoning, which are often computed holistically over a fixed representation and do not explicitly have a notion of objects or relations (e.g. Lerer et al., 2016; W. Li et al., 2016). Further, our work focuses on interaction, while much of the work on physical reasoning has focused on the task of prediction (e.g. Fragkiadaki et al., 2016; Mottaghi, Bagherinezhad, et al., 2016; Mottaghi, Rastegari, et al., 2016; Stewart & Ermon, 2017; Bhattacharyya et al., 2018) or inference (e.g. Wu et al., 2016; Denil et al., 2017). Perhaps the most related works to ours are W. Li et al. (2017) and Yildirim et al. (2017), which both focus on building towers of blocks. While W. Li et al. (2017)'s approach is learning-based, it does not include a relational inductive bias; similarly, Yildirim et al. (2017)'s approach has a relational inductive bias, but no learning.

The goal of this paper is not to present a precise computational model of how humans solve the gluing task, nor is it to claim state-of-the-art performance on the gluing task. Rather, the goal is to characterize the type of inductive bias that is necessary in general for solving such physical construction tasks. Our work builds on both the broader cognitive literature on relational reasoning using graphs (e.g. Collins & Loftus, 1975; Shepard, 1980; Griffiths et al., 2007; Kemp & Tenenbaum, 2008) as well as classic approaches like relational reinforcement learning (Džeroski et al., 2001), and represents a step forward by showing how relational knowledge can be disentangled from physical knowledge through relational policies approximated by deep neural networks.

The contributions of this work are to: (1) introduce the gluing task, an interactive physical construction problem that requires making decisions about relations among objects; (2) measure human performance in the gluing task; (3) develop a deep RL agent with an object- and relation-centric scene representation and action policy; and (4) demonstrate the importance of relational inductive bias by comparing the performance of our agent with several alternatives, as well as humans, on both the gluing task and several control tasks that isolate different aspects of the full problem.

∗Denotes equal contribution.

Figure 1: The gluing task. Given an unstable tower of blocks, the task is to glue pairs of blocks together to keep the tower stable. Three examples of performing the task are shown here (columns: gluing phase, gravity phase; rows: no glue, partial glue, optimal glue). Green blocks in the gravity phase indicate stable blocks. Top: no glue is used, and only one block remains standing (+1 point). Middle row: one glue is used (-1 point), resulting in three blocks standing (+3 points). Bottom row: two glues are used (-2 points), resulting in a stable tower (+6 points); this is the minimal amount of glue to keep the tower stable (+10 points). See https://goo.gl/f7Ecw8 for a video demonstrating the task.

The Gluing Task

Participants. We recruited 27 volunteers from within DeepMind. Each participant was treated in accordance with protocols of the UCL Research Ethics Committee, and completed 144 trials over a one-hour session. Two participants did not complete the task within the allotted time and were excluded from analysis, leaving 25 participants total.

During each trial, a tower was displayed on the screen for an indefinite amount of time. Participants could click on one object (either a block or the floor) to select it, and then another object to "glue" the two together.
Glue was only applied if the two objects were in contact. If glue had already been applied between the two objects, then the glue was removed. Both these actions—applying glue to non-adjacent objects and ungluing an already-glued connection—still cost one point.1

Stimuli and Design. The stimuli were towers of blocks similar to those used by Battaglia et al. (2013) and Hamrick et al. (2016). Towers were created by randomly placing blocks on top of each other, with the following constraints: the tower was constructed in a 2D plane, and each block except the first was stacked on another block. The set of towers was filtered to include only those in which at least one block moved when gravity was applied. In an initial practice session, nine unique towers (1 each of 2-10 blocks) were presented in increasing order of size. In the experimental session, 135 unique towers (15 each of 2-10 blocks), which were disjoint from the practice set, were presented in a random order in 5 sets of 27. Physics was simulated using the MuJoCo physics engine (Todorov et al., 2012) with a timestep of 0.01. After the experiment was completed, participants completed a short survey.

To finish the gluing phase, participants pressed the "enter" key, which triggered the gravity phase, during which gravity was applied for 2s so participants could see which blocks moved from their starting positions. Finally, participants were told how many points they earned and could then press "space" to begin the next trial.

Participants earned points depending on how well they performed the gluing task. They lost one point for each pair of objects they tried to glue, and earned one point for each block that remained unmoved after gravity was applied.

Results. The gluing task was challenging for the human participants, but they still performed far above chance. We discovered several trends in people's behavior, such as working from top-to-bottom and spending more time before applying the first glue than before subsequent glue.
As a bonus, if participants used the minimum amount of glue necessary to keep the tower stable, they received 10 additional points. The maximum possible scores in the practice and experimental sessions were 131 points and 1977 points, respectively.

Procedure. Each trial consisted of two phases: the gluing phase, and the gravity phase. The trial began in the gluing phase, during which a static tower was displayed on the screen.

1 While this choice of reward structure is perhaps unfair to humans, it provided a fairer comparison to our agents, who would otherwise not be incentivized to complete the task quickly.

The results here represent a preliminary exploration of people's behavior in construction tasks, opening the door for future research and providing a baseline comparison for artificial agents.

Participants achieved an average score of 900 points, with the lowest score being 468 points and the highest score being 1154 points (out of 1977). There was a small (though not quite significant) effect of learning, with a Pearson correlation of r = 0.15, 95% CI [-0.01, 0.30] between trial number and average scaled reward (confidence intervals were computed around the median using 10,000 bootstrap samples with replacement; "scaled rewards" were computed by normalizing rewards such that 0 corresponded to the reward obtained if no actions were taken, and 1 corresponded to the maximum achievable reward).

Figure 2: Graph network agent. First, the positions and orientations of the blocks are encoded as nodes, and the presence of glue is encoded as edges. These representations are then used to compute a Q-value for each edge, as well as a Q-value for taking the "stop" action. See text for details.

Participants' response times revealed they were significantly slower to click on the first block in a pair than the second block, with a difference of t = 4.48s, 95% CI [4.34s, 4.62s]. This suggests they had decided on which pair to glue before clicking the first block. We found that people were significantly slower to choose the first gluing action (t = 4.43s, 95% CI [4.30s, 4.56s]; averages computed using the mean of log RTs) than any subsequent gluing action (t = 2.07s, 95% CI [2.00s, 2.15s]; F(1,12878) = 149.14, p < 0.001).
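The "scaled reward" normalization described above can be sketched as follows (a hypothetical helper name, not the paper's code; it simply maps the no-action reward to 0 and the maximum achievable reward to 1):

```python
def scaled_reward(reward: float, noop_reward: float, max_reward: float) -> float:
    """Normalize a trial reward so that 0 corresponds to the reward
    obtained by taking no actions, and 1 to the maximum achievable."""
    return (reward - noop_reward) / (max_reward - noop_reward)

# A reward halfway between the no-action reward and the optimum scales to 0.5.
print(scaled_reward(10.0, 5.0, 15.0))  # 0.5
```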
Also, we found an effect of the number of blocks on response time (F(1,12878) = 429.68, p < 0.001), as well as an interaction between the number of blocks and whether the action was the first glue or not (F(1,12878) = 14.57, p < 0.001), with the first action requiring more time per block than subsequent actions. These results suggest that people may either decide where to place glue before acting, or at least engage in an expensive encoding operation of a useful representation of the stimulus.

On an open-ended strategy question in the post-experiment survey, 10 of 25 participants reported making glue selections top-to-bottom, and another 3 reported sometimes working top-to-bottom and sometimes bottom-to-top. We corroborated this quantitatively by, for each trial, fitting a line between the action number and the height of the glue location, and found their slopes were generally negative (β = -0.07, 95% CI [-0.08, -0.06]).

We compared people's choice of glue configuration to optimal glue configurations, and found that people were significantly more likely to apply glue when it was not necessary (73% of errors) than to fail to apply glue when it was necessary (N = 3901, p < 0.001). Additionally, participants were very good at avoiding invalid actions: although they had the option of gluing together pairs of blocks that were not in contact, they only did so 1.3% (out of N = 6454) of the time. Similarly, participants did not frequently utilize the option to un-glue blocks (0.29% out of N = 6454), likely because it incurred a penalty. It is possible that performance would increase if participants were allowed to un-glue blocks without a penalty, enabling them to temporarily use glue as a working-memory aid; we leave this as a question for future research.

Leveraging Relational Representations

What type of knowledge is necessary for solving the gluing task? Physical knowledge is clearly important, but even that implicitly includes a more foundational type of knowledge: that of objects and relations. Inspired by evidence that objects and relations are a core part of human cognition (e.g. Spelke & Kinzler, 2007), we focus on decomposing the task into a relational reasoning problem which involves computations over pairs of elements and their relations.

Graph Networks

A key feature of our deep RL agent is that it expresses its decision-making policy as a function over an object- and relation-centric state representation, which reflects a strong relational inductive bias. Specifically, inside the agent is a graph network (GN), a neural network model which can be trained to approximate functions on graphs. A GN is a generalization of recent neural network approaches for learning physics engines (Battaglia et al., 2016; Chang et al., 2017), as well as message-passing neural networks (Gilmer et al., 2017; Scarselli et al., 2009). GNs have been shown to be effective at solving classic combinatorial optimization problems (Dai et al., 2017), inspiring our agent architecture for performing physical construction tasks.

Here, we define a graph as a set of N nodes, E edges, and a global feature G. In the gluing task's "tower graph", nodes correspond to blocks; edges correspond to pairs of blocks; and global properties could correspond to any global piece of information, such as the overall stability of the tower. A GN takes as input a tower graph, and returns a graph with the same size and shape. The representations of the nodes, edges, and globals encode semantic information: the node representation corresponds to position (x) and orientation (q), and the edges to the presence of glue (u). The global features correspond to (or are a function of) the whole graph; for example, this could be the stability of the tower.

Our model architectures first encode the block properties into a distributed node representation n_i using an encoder, i.e. n_i = enc_n(x_i, q_i; θ_encn). For an edge e_ij, we similarly encode the edge properties into a distributed representation using a different encoder, i.e. e_ij = enc_e(u_ij; θ_ence). Initially, the global properties are empty and set to zero, i.e. g = 0. With these node, edge, and global representations, the standard GN computes functions over pairs of nodes (e.g. to determine whether those nodes are in contact),2 edges (e.g. to determine the force acting on a block), and globals (e.g. to compute overall stability).

To test the GN's stability predictions, we used towers with a variable number of blocks, where the input edges were labeled to indicate whether or not glue was present (1 for glue, 0 for no glue). Glue was sampled randomly for each scene, and stability was defined as no blocks falling. We tested two settings: fully connected graphs (where the graph included all block-to-block edges) and sparse graphs (where edges were only present between blocks that were in contact). In both cases, GNs learned to accurately predict the stability of partially glued towers, but the sparse graph inputs yielded more efficient learning (Figure 3a). Results are shown for the case of 5 blocks, but these results are also consistent across towers with 6-9 blocks.

Figure 3: Supervised results for scenes with five blocks. (a) Stability prediction for input graphs with contact information (sparse) or without (full). (b) Optimal glue prediction for models with different numbers of recurrent steps.
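As an illustration of the tower graph described above, here is a minimal sketch (the field and function names are ours, not the paper's, and the paper's actual node/edge representations are learned distributed vectors rather than raw attributes):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TowerGraph:
    """Sparse 'tower graph': one node per block, one edge per contact."""
    positions: List[Tuple[float, float]]  # x: block positions in the 2D plane
    orientations: List[float]             # q: block orientations
    edges: List[Tuple[int, int]]          # pairs of blocks that are in contact
    glue: List[int]                       # u: 1 if the edge is glued, else 0
    global_features: List[float] = field(default_factory=lambda: [0.0])  # g, zero initially

def sparse_edges(contacts: List[Tuple[int, int, bool]]) -> List[Tuple[int, int]]:
    """Keep only block pairs that are actually touching (the sparse setting)."""
    return [(i, j) for i, j, touching in contacts if touching]

# A three-block stack: block 0 touches block 1, block 1 touches block 2,
# but blocks 0 and 2 are not in contact.
tower = TowerGraph(
    positions=[(0.0, 0.0), (0.1, 1.0), (0.0, 2.0)],
    orientations=[0.0, 0.0, 0.0],
    edges=sparse_edges([(0, 1, True), (0, 2, False), (1, 2, True)]),
    glue=[0, 0],
)
```

In the fully-connected setting, every block pair would appear in `edges`; the sparse setting shown here keeps only contacts, which the supervised results above found to yield more efficient learning.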
(b) Optimal glue prediction for\nwhether a contact between two blocks should be glued.", "paper_id": "1806.01203", "title": "Relational inductive bias for physical construction in humans and machines", "authors": [ "Jessica B. Hamrick", "Kelsey R. Allen", "Victor Bapst", "Tina Zhu", "Kevin R. McKee", "Joshua B. Tenenbaum", "Peter W. Battaglia" ], "published_date": "2018-06-04", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.01203v1", "chunk_index": 12, "total_chunks": 29, "char_count": 2733, "word_count": 448, "chunking_strategy": "semantic" }, { "chunk_id": "4b40d891-6641-42a7-b0bf-a392426b19df", "text": "As\nmodels with different numbers of recurrent steps.\ndiscussed previously, some glue locations require reasoning\nabout how forces propagate throughout the structure. Weto determine whether those nodes are in contact)2, edges\ntherefore hypothesized that multiple message passing steps(e.g. to determine the force acting on a block), and globwould be necessary to propagate this information, and indeed,als (e.g. to compute overall stability). Specifically, the edge\nwe found that one recurrence was enough to dramatically im-model is computed as: e′i j = fe(ni,nj,ei j,g;θ fe); the node\nprove glue prediction accuracy (Figure 3b).model as n′i = fn(ni,∑j e′i j,g;θfn); and the globals model as\ng′ = fg(g,∑i n′i,∑i,j e′i j;θfg). The GN can be applied multiple Sequential Decision Making Experiments\ntimes, recurrently, where e′i j, n′i, and g′ are fed in as the new\nei j, ni, and g on the next step. From the supervised learning experiments, we concluded that\nApplying the GN to compute interaction terms and update GNs can accurately predict stability and select individual glue\nthe nodes recurrently can be described as message passing points. Next we integrated these components into a full RL\n(Gilmer et al., 2017), which propagates information across agent that performs the same gluing task that people faced,\nthe graph. 
In the gluing task, such learned information prop- involving multiple actions and delayed rewards.\nagation may parallel the propagation of forces and other con- Design We considered three agents: the multilayer percepstraints over the structure. For intuition, consider the tower tron (or MLP) agent, the fully-connected graph network (or\nin Figure 1. After one application of the edge model, the GN-FC) agent, the graph network (or GN) agent, and the simGN should be able to determine which block pairs are locally ulation agent.3 As most deep RL agents are implemented eiunstable, such as the top-most block in the figure, and thus ther as MLPs or CNNs with no relational structure, our first\nrequire glue. However, it does not have enough information agent chose actions according to a Q-function approximated\nto be able to determine that the bottom-most block in Fig- by a MLP; as MLPs have a fixed input and output size, we\nure 1 also needs to be glued, because it is fully supporting the trained a separate MLP for each tower size. The GN and\nblock above it.", "paper_id": "1806.01203", "title": "Relational inductive bias for physical construction in humans and machines", "authors": [ "Jessica B. Hamrick", "Kelsey R. Allen", "Victor Bapst", "Tina Zhu", "Kevin R. McKee", "Joshua B. Tenenbaum", "Peter W. Battaglia" ], "published_date": "2018-06-04", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.01203v1", "chunk_index": 13, "total_chunks": 29, "char_count": 2388, "word_count": 386, "chunking_strategy": "semantic" }, { "chunk_id": "24c598f1-61b3-48cd-b476-016064bd9b30", "text": "Recurrent message-passing allows informa- GN-FC agents (which had relational knowledge, but no extion about other blocks to be propagated to the bottom-most plicit physical knowledge) also chose actions according to a\none, allowing for non-local relations to be reasoned about. Q-function and used 3 recurrent steps. 
Given the updated edge, node, and global representations, we can decode them into edge-specific predictions, such as Q-values or unnormalized log probabilities (Figure 2). For the supervised setting, edges are glued with probability p_ij ∝ dec_e(e′_ij; θ_dec_e). For the sequential decision-making setting, we decode one action for each edge in the graph (π_ij = dec_e(e′_ij; θ_dec_e)) plus a "stop" action to end the gluing phase (π_σ = dec_g(g′; θ_dec_g)).

² These functions are learned, and thus these examples are not literally what the agent is computing; we provide them here to give an intuition for how GNs behave.

Supervised Learning Experiments

Before investigating the full gluing task, we first explored how components of the graph network agent could perform key sub-tasks in a supervised setting, such as predicting stability or inferring which edges should be glued. To test the GN's stability predictions, we used towers with …

The GN agent used a sparse graph structure with edges corresponding to the contact points between the blocks, while the GN-FC agent used a fully connected graph structure and thus had to learn which edges corresponded to valid actions. Finally, the simulation agent (which had both relational and physical knowledge) chose actions using simulation. Specifically, for each unglued contact point, the agent ran a simulation to compute how many blocks would fall if that point were glued, and then chose the point which resulted in the fewest blocks falling. This procedure was repeated until no blocks fell. Note that the simulation agent is non-optimal, as it chooses glue points greedily.

³ Additional details about the agent architectures and training regimes are available in the appendix.

The effect of relational structure
Both the MLP and the GN-FC agents take actions on the fully-connected graph (i.e., they both can choose pairs of blocks which are not adjacent); the main difference between them is that the GN-FC agent has a relational inductive bias while the MLP does not. This relational inductive bias makes a large difference, with the GN-FC agent earning M = 883.60, 95% CI [719.40, 1041.00] more points on average (Figure 4a) and also achieving more points across different tower sizes (Figure 4b). Giving the correct relational structure to the GN agent further improves performance, with the GN agent achieving M = 183.20, 95% CI [73.20, 302.40] more points on average than the GN-FC agent. Thus, although the GN-FC agent does make use of relations, it does not always utilize the correct structure, which ends up hurting its performance. Indeed, we can observe that the GN-FC agent attempts invalid glue actions—for example, choosing edges between objects that are not adjacent, or self-edges—a whopping 31% (out of N = 1345) of the time. The MLP agent similarly picks "invalid" edges 46% (out of N = 417) of the time.

The GN agents also exhibit much stronger generalization than the MLP agent. To test generalization, we trained a second set of agents which did not observe towers of 7 or 10 blocks during training, and compared their test performance to our original set of agents. The GN agent exhibited no detectable degradation in performance for either tower size, with a difference in scaled reward of M = 0.01, 95% CI [−0.03, 0.05] on 7-block towers and M = 0.05, 95% CI [−0.01, 0.10] on 10-block towers. The GN-FC agent interpolated successfully to 7-block towers (M = −0.04, 95% CI [−0.08, 0.00]), but struggled when extrapolating to 10-block towers (M = 0.44, 95% CI [0.27, 0.61]). By definition, the MLP agent cannot generalize to new tower sizes because it is trained on each size independently. We attempted to test for generalization anyway by training a single MLP on all towers and using zero-padding in the inputs for smaller towers. However, this version of the MLP agent was unable to solve the task at all, achieving an average of only M = 78.00, 95% CI [−140.00, 296.00] points total.

The effect of physical knowledge
The simulation agent was the only agent which incorporated explicit physical knowledge through its simulations, and we found that it also performed the best out of all the agents. Specifically, the simulation agent earned on average M = 156.20, 95% CI [70.80, 249.60] points more than the GN agent, perhaps suggesting that there is a benefit to using a model-based policy rather than a model-free policy (note, however, that the simulation agent has access to a perfect simulator; a more realistic implementation would likely fare somewhat worse). However, we emphasize that the gain in performance between the GN agent and the simulation agent was much less than that between the MLP and GN-FC agents, suggesting that relational knowledge may be more important than explicit physical knowledge in solving complex physical reasoning problems like the gluing task.

Figure 4: (a) Comparison of overall reward for humans and agents. H: human; MLP: MLP agent; GN-FC: GN agent operating over a fully-connected graph; GN: GN agent operating over a sparse graph; Sim: simulation agent. (b) Comparison of scaled reward across towers of different sizes. Rewards are scaled such that 0 corresponds to the reward obtained when no actions are taken, and 1 to the optimal reward.

Comparison to humans
Although our goal was not to build a model of human cognition on the gluing task, we still compared people's behavior to that of the GN agent to elucidate any obvious differences. Participants' average reward fell between the MLP and GN-FC agents' (Figure 4a). As shown in Figure 4b, both agents and humans had increasing difficulty solving the task as a function of tower size, though this was expected: as the number of blocks in the tower increases, there is an exponential increase in the number of possible glue combinations. Specifically, for a tower with k contact points, there are 2^k possible ways glue can be applied (around 1000 possibilities for a 10-block tower), and optimally solving the task would require enumerating each of these possibilities. Our agents do not do this, and it is unlikely that humans do either; therefore, the drop in performance as a function of tower size is not surprising. Looking more closely, we found that the GN agent made different patterns of errors than humans within scenes. For example, while we found that people were more likely to make false positives (applying glue when none was needed), we did not find this to be true of the GN agent (41% of errors, N = 155, p < 0.05). This difference might be a result of perceptual uncertainty in humans, which leads to a tendency to over-estimate the instability of towers (Battaglia et al., 2013).

Discussion

In this paper, we explored the importance of relational inductive bias in performing interactive physical reasoning tasks. We introduced a novel construction problem—the "gluing task"—which involved gluing pairs of blocks together to stabilize a tower of blocks. Our analysis showed that humans could perform far above chance, and discovered that they used systematic strategies, such as working top-to-bottom and reasoning about the whole glue configuration before taking their first action. Drawing on the view from cognitive psychology that humans understand the world in terms of objects and relations (Shepard, 1980; Spelke & Kinzler, 2007; Kemp & Tenenbaum, 2008), we developed a new deep RL agent that uses a decision-making policy based on object- and relation-centric representations, and measured its ability to learn to perform the gluing task.
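The scaled reward reported in Figure 4b can be computed with a simple linear rescaling; the function name here is illustrative, not from the paper:

```python
def scale_reward(reward, reward_no_action, reward_optimal):
    """Map raw reward onto a scale where 0 = taking no actions
    and 1 = the optimal glue configuration for that tower."""
    return (reward - reward_no_action) / (reward_optimal - reward_no_action)

assert scale_reward(50.0, 0.0, 100.0) == 0.5
assert scale_reward(-20.0, -20.0, 80.0) == 0.0
```

This normalization makes rewards comparable across tower sizes, since larger towers have both higher optimal rewards and worse no-action baselines.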
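The combinatorial growth described above is just the number of subsets of contact points: each of the k contacts is independently glued or not, giving 2^k configurations, consistent with "around 1000 possibilities" for a roughly 10-contact tower:

```python
def num_glue_configurations(k):
    # Each of the k contact points is independently glued or not glued.
    return 2 ** k

# A 10-block tower has on the order of 10 contact points:
assert num_glue_configurations(10) == 1024
```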
These structured representations were instantiated using graph networks (GNs), a family of neural network models that can be trained to approximate functions on graphs. Our experiments showed that an agent with an object- and relation-centric policy could solve the task even better than humans, while an agent without such a relational inductive bias performed far worse. This suggests that a bias for acquiring relational knowledge is a key component of physical interaction, and can be effective even without an explicit model of physical dynamics.

Of course, model-based decision-making systems are powerful tools (Silver et al., 2016), and cognitive psychology work has found evidence that humans use internal physics models for physical prediction (Battaglia et al., 2013), inference (Hamrick et al., 2016), causal perception (Gerstenberg et al., 2012), and motor control (Kawato, 1999). Indeed, we found that the best performing agent in our task was the "simulation" agent, which used both relational and physical knowledge. Provisioning deep RL agents with joint model-free and model-based strategies inspired by cognitive psychology has proven fruitful in imagination-based decision-making (Hamrick et al., 2017), and implementing relational inductive biases in similar systems should afford greater combinatorial generalization over state and action spaces.

More generally, the relational inductive bias possessed by our GN agent is not specific to physical scenes. Indeed, certain aspects of human cognition have previously been studied and modeled in ways that are explicitly relational, such as analogical reasoning (e.g., Gentner, 1983; Holyoak, 2012). In other cognitive domains, GNs might help capture how people build cognitive maps of their environments and use them to navigate; how they schedule their day to avoid missing important meetings; or how they decide whom to interact with at a cocktail party. Each of these examples involves a set of entities, locations, or events which participate in interactive relationships and require arbitrarily complex relational reasoning to perform successfully.

In sum, this work demonstrates how deep RL can be improved by adopting relational inductive biases like those in human cognition, and opens new doors for developing formal cognitive models of more complex, interactive human behaviors like physical scene construction and interaction.

Acknowledgements

We would like to thank Tobias Pfaff, Sam Ritter, and anonymous reviewers for helpful comments.

References

Battaglia, P. W., Hamrick, J. B., & Tenenbaum, J. B. (2013). Simulation as an engine of physical scene understanding. PNAS, 110(45), 18327–18332.
Battaglia, P. W., Pascanu, R., Lai, M., Rezende, D., & Kavukcuoglu, K. (2016). Interaction networks for learning about objects, relations, and physics. NIPS 2016.
Bhattacharyya, A., Malinowski, M., Schiele, B., & Fritz, M. (2018). Long-term image boundary extrapolation.
Chang, M. B., Ullman, T., Torralba, A., & Tenenbaum, J. B. (2017). A compositional object-based approach to learning physical dynamics.
Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82(6).
Dai, H., Khalil, E. B., Zhang, Y., Dilkina, B., & Song, L. (2017). Learning combinatorial optimization algorithms over graphs. NIPS 2017, 30.
Denil, M., Agrawal, P., Kulkarni, T. D., Erez, T., Battaglia, P. W., & de Freitas, N. (2017). Learning to perform physics experiments via deep reinforcement learning.
Džeroski, S., De Raedt, L., & Driessens, K. (2001). Relational reinforcement learning. Machine Learning, 43(1-2), 7–52.
Fragkiadaki, K., Agrawal, P., Levine, S., & Malik, J. (2016). Learning visual predictive models of physics for playing billiards.
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2), 155–170.
Gerstenberg, T., Goodman, N. D., Lagnado, D. A., & Tenenbaum, J. B. (2012). Noisy Newtons: Unifying process and dependency accounts of causal attribution.
Gilmer, J., Schoenholz, S. F., Vinyals, O., & Dahl, G. (2017). Neural message passing for quantum chemistry. arXiv preprint.
Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic models of cognition: Exploring representations and inductive biases.
Griffiths, T. L., Steyvers, M., & Firl, A. (2007). Google and the mind: Predicting fluency with PageRank. Psychological Science, 18(12), 1069–1076.
Hamrick, J. B., Ballard, A. J., Pascanu, R., Vinyals, O., Heess, N., & Battaglia, P. W. (2017). Metacontrol for adaptive imagination-based optimization.
Hamrick, J. B., Battaglia, P. W., Griffiths, T. L., & Tenenbaum, J. B. (2016). Inferring mass in complex physical scenes via probabilistic simulation. Cognition, 157, 61–76.
Holyoak, K. J. (2012). Analogy and relational reasoning. In K. J. Holyoak & R. G. Morrison (Eds.), The Oxford handbook of thinking and reasoning (pp. 234–259).
Kawato, M. (1999). Internal models for motor control and trajectory planning. Current Opinion in Neurobiology, 9(6), 718–727.
Kemp, C., & Tenenbaum, J. B. (2008). The discovery of structural form. PNAS, 105(31), 10687–10692.
Kingma, D. P., & Ba, J. (2015). Adam: A method for stochastic optimization.
Li, W., Leonardis, A., & Fritz, M. (2016). To fall or not to fall: A visual approach for physical stability prediction. arXiv preprint.
Li, W., Leonardis, A., & Fritz, M. (2017). Visual stability prediction for robotic manipulation.
Li, Y., Tarlow, D., Brockschmidt, M., & Zemel, R. (2016). Gated graph sequence neural networks.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533.
Mottaghi, R., Bagherinezhad, H., Rastegari, M., & Farhadi, A.
Mottaghi, R., Rastegari, M., Gupta, A., & Farhadi, A. (2016). "What happens if...": Learning to predict the effect of forces in images.
Scarselli, F., Gori, M., Tsoi, A., Hagenbuchner, M., & Monfardini, G. (2009). The graph neural network model. IEEE Transactions on Neural Networks, 20, 61–80.
Shepard, R. N. (1980). Multidimensional scaling, tree-fitting, and clustering. Science, 210(4468), 390–398.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., … Kavukcuoglu, K. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7585), 484–489.
Spelke, E. S., & Kinzler, K. D. (2007). Core knowledge. Developmental Science, 10, 89–96.
Stewart, R., & Ermon, S. (2017). Label-free supervision of neural networks with physics and domain knowledge.
Todorov, E., Erez, T., & Tassa, Y. (2012). MuJoCo: A physics engine for model-based control.
Wang, T., Liao, R., Ba, J., & Fidler, S. (2018). NerveNet: Learning structured policy with graph neural networks.
Wu, J., Lim, J. J., Zhang, H., & Tenenbaum, J. B. (2016). Physics 101: Learning physical object properties from unlabeled videos.
Yildirim, I., Gerstenberg, T., Saeed, B., Toussaint, M., & Tenenbaum, J. (2017). Physical problem solving: Joint planning with symbolic, geometric, and dynamic constraints. In CogSci 2017.

Supplementary Material

Architectural Details

MLP Agent
The MLP agent had three hidden layers with … units. The inputs to the agent consisted of the concatenated positions and orientations of all objects in the scene, as well as a one-hot vector of size E_fc = N(N−1)/2 indicating which objects had glue between them. There were E_fc + 1 outputs in the final layer: one for each pair of blocks plus the floor (including non-adjacent objects) …

GN Agents
The GN-FC agent had the same inputs and outputs as the MLP agent. The inputs to the GN agent also included the positions and orientations of all objects in the scene, but the "glue" vector instead had size E_sparse ≈ N (where E_sparse is the number of pairs of blocks in contact); the GN agent was also told which blocks, specifically, were in contact. There were E_sparse + 1 outputs in the final layer.

Both GN agents used node, edge, and global encoders, each of which was a linear layer with an output dimensionality of 64. The edge, node, and global models were each an MLP with two hidden layers of 64 units (with ReLUs) and an output dimensionality of 64. In these models we also used "skip" connections as in Dai et al. (2017), which means that we fed both encoded and non-encoded inputs to the model. We additionally used a gated recurrent unit (GRU) as the core for our recurrent loop, similar to Y. Li et al. (2016). We passed the outputs of the recurrent GN to a second GN decoder (with the same architecture for the edge, node, and global models). This second GN helps the agent decompose the problem, such as first detecting which block pairs are in contact, and then determining which of those pairs should be glued. Finally, the edge and global values were further decoded by two hidden layers of 64 units (with ReLUs) and a final linear layer with a single output.

Training Procedure

Both the GN and MLP agents were trained for 300k episodes on 100k scenes for each tower size (900k total scenes), which were distinct from those in the behavioral experiment. We used Q-learning with experience replay (Mnih et al., 2015) with a replay ratio of 16, a learning rate of 1e-4, a batch size of 16, a discount factor of 0.9, and the Adam optimizer (Kingma & Ba, 2015). Epsilon was annealed over 100k environment steps from 1.0 to 0.01.

Because the MLP agent had fixed input and output sizes that depend on the number of blocks in the scene, we trained nine separate MLP agents (one for each tower size). Both GN agents were trained simultaneously on all tower sizes, using a curriculum in which we began training on the next tower size (as well as all previous sizes) after every 10k episodes.
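The difference between the E_fc = N(N−1)/2 fully-connected action space and the E_sparse ≈ N contact-based action space can be made concrete with a small sketch; the contact set below is a hypothetical vertical stack, not data from the paper:

```python
def fully_connected_edges(n_blocks):
    """All unordered block pairs: E_fc = N(N-1)/2 candidate glue actions."""
    return [(i, j) for i in range(n_blocks) for j in range(i + 1, n_blocks)]

def sparse_edges(contacts):
    """Only physically adjacent pairs: E_sparse ~ N valid glue actions."""
    return sorted(contacts)

fc = fully_connected_edges(10)
contacts = {(i, i + 1) for i in range(9)}  # hypothetical 10-block vertical stack
assert len(fc) == 10 * 9 // 2              # 45 candidate pairs
assert len(sparse_edges(contacts)) == 9    # only ~N of them are real contacts
```

This gap is why the GN-FC and MLP agents waste a large fraction of their actions on pairs that are not in contact, while the sparse GN agent cannot select an invalid pair by construction.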
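The encoder and skip-connection dimensionalities described in the appendix can be illustrated with a minimal NumPy sketch. This omits the message passing, GRU core, and decoder GN entirely, and the helper names and the raw edge dimensionality are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(n_in, n_out):
    W, b = rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)
    return lambda x: x @ W + b

def mlp(n_in, hidden=64, n_out=64):
    # Two hidden layers of 64 units with ReLUs, then a linear output.
    l1, l2, l3 = linear(n_in, hidden), linear(hidden, hidden), linear(hidden, n_out)
    return lambda x: l3(np.maximum(l2(np.maximum(l1(x), 0)), 0))

raw_edge_dim, latent = 8, 64          # raw_edge_dim is a hypothetical value
encode_edge = linear(raw_edge_dim, latent)   # linear encoder to 64 dims

# "Skip" connection: the edge model sees both the encoded edge and the
# raw (non-encoded) edge features, concatenated.
edge_model = mlp(latent + raw_edge_dim, 64, latent)

raw_edge = rng.normal(size=raw_edge_dim)
updated = edge_model(np.concatenate([encode_edge(raw_edge), raw_edge]))
assert updated.shape == (latent,)
```

Feeding the raw features alongside the encoded ones gives later layers direct access to the original inputs, which is the point of the skip connections borrowed from Dai et al. (2017).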
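The exploration and curriculum schedules described above can be sketched as follows, assuming linear epsilon annealing (the schedule's shape is not stated in the text) and a curriculum starting from 2-block towers:

```python
def epsilon(step, anneal_steps=100_000, start=1.0, end=0.01):
    """Exploration rate annealed over environment steps (assumed linear)."""
    frac = min(step / anneal_steps, 1.0)
    return start + frac * (end - start)

def curriculum_sizes(episode, min_blocks=2, max_blocks=10, per_stage=10_000):
    """Tower sizes available at a given episode: a new size is unlocked
    every 10k episodes, while all previous sizes remain in the mix."""
    unlocked = min(min_blocks + episode // per_stage, max_blocks)
    return list(range(min_blocks, unlocked + 1))

assert curriculum_sizes(0) == [2]
assert curriculum_sizes(25_000) == [2, 3, 4]
```

Keeping earlier tower sizes in the training mix is what lets a single GN agent stay competent across all sizes rather than forgetting the small towers.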
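The MLP agent's input and output sizes grow quadratically with the number of blocks, which is why one network per tower size was needed. A sketch, where `state_dim_per_block` is a hypothetical per-object feature size (the exact position/orientation encoding is not specified here):

```python
def mlp_io_sizes(n_blocks, state_dim_per_block=7):
    """Input/output sizes for the MLP agent on an N-block tower.
    state_dim_per_block is an assumed per-object feature size."""
    e_fc = n_blocks * (n_blocks - 1) // 2       # one-hot glue vector size
    n_inputs = n_blocks * state_dim_per_block + e_fc
    n_outputs = e_fc + 1                        # one per candidate pair, plus one
    return n_inputs, n_outputs

# The output layer grows quadratically with tower size:
assert mlp_io_sizes(2)[1] == 2
assert mlp_io_sizes(10)[1] == 46
```

By contrast, the GN agents' per-edge outputs scale with the number of contacts (E_sparse ≈ N), so one set of weights serves every tower size.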