| [ |
| { |
| "chunk_id": "d51acb10-a3fa-4614-8263-da25c447a45d", |
| "text": "https://arxiv.org/abs/1806.01186\n2019-3-11 Penalizing side effects using stepwise relative\nreachability Victoria Krakovna1, Laurent Orseau1, Ramana Kumar1, Miljan Martic1 and Shane Legg1\n1DeepMind How can we design safe reinforcement learning agents that avoid unnecessary disruptions to their\nenvironment? We show that current approaches to penalizing side effects can introduce bad incentives,\ne.g. to prevent any irreversible changes in the environment, including the actions of other agents. To isolate the source of such undesirable incentives, we break down side effects penalties into two\ncomponents: a baseline state and a measure of deviation from this baseline state. We argue that some\nof these incentives arise from the choice of baseline, and others arise from the choice of deviation\nmeasure. We introduce a new variant of the stepwise inaction baseline and a new deviation measure2019 based on relative reachability of states. The combination of these design choices avoids the given\nundesirable incentives, while simpler baselines and the unreachability measure fail. We demonstrate\nthis empirically by comparing different combinations of baseline and deviation measure choices on aMar\nset of gridworld experiments designed to illustrate possible bad incentives. An important component of safe behavior for reinforcement learning agents is avoiding unnecessary[cs.LG] side effects while performing a task [Amodei et al., 2016, Taylor et al., 2016]. For example, if an\nagent's task is to carry a box across the room, we want it to do so without breaking vases, while\nan agent tasked with eliminating a computer virus should avoid unnecessarily deleting files. The\nside effects problem is related to the frame problem in classical AI [McCarthy and Hayes, 1969]. 
For machine learning systems, it has mostly been studied in the context of safe exploration during\nthe agent's learning process [Pecka and Svoboda, 2014, García and Fernández, 2015], but can also\noccur after training if the reward function is misspecified and fails to penalize disruptions to the\nenvironment [Ortega et al., 2018]. We would like to incentivize the agent to avoid side effects without explicitly penalizing every\npossible disruption, defining disruptions in terms of predefined state features, or going through\na process of trial and error when designing the reward function.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 0, |
| "total_chunks": 34, |
| "char_count": 2367, |
| "word_count": 353, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "92f8d1eb-2a93-4772-bebe-18c45f80dfd0", |
| "text": "While such approaches can be\nsufficient for agents deployed in a narrow set of environments, they often require a lot of human\nimportant to develop more general and systematic approaches for avoiding side effects. Most of the general approaches to this problem are reachability-based methods: safe exploration\nmethods that preserve reachability of a starting state [Moldovan and Abbeel, 2012, Eysenbach et al.,\n2017], and reachability analysis methods that require reachability of a safe region [Mitchell et al.,\n2005, Gillula and Tomlin, 2012, Fisac et al., 2017]. The reachability criterion has a notable limitation:\nit is insensitive to the magnitude of the irreversible disruption, e.g. it equally penalizes the agent for\nbreaking one vase or a hundred vases, which results in bad incentives for the agent. Comparison to a\nstarting state also introduces undesirable incentives in dynamic environments, where irreversible\ntransitions can happen spontaneously (due to the forces of nature, the actions of other agents, etc). Since such transitions make the starting state unreachable, the agent has an incentive to interfere to\nprevent them. This is often undesirable, e.g. if the transition involves a human eating food.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 1, |
| "total_chunks": 34, |
| "char_count": 1223, |
| "word_count": 186, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5eb33a1f-19f0-4532-9d9e-63d6c3992eef", |
| "text": "Penalizing side effects using stepwise relative reachability agent policy inaction (a) Choices of baseline state s′t: starting state s0, inaction s(0)t , and stepwise inaction s(t−1)t . Actions drawn\nfrom the agent policy are shown by solid blue arrows, while actions drawn from the inaction policy are shown\nby dashed gray arrows.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 2, |
| "total_chunks": 34, |
| "char_count": 331, |
| "word_count": 52, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "71a5a1e6-dd61-44c8-ab0e-8615c94b6761", |
| "text": "R(st; s1)\nR(st; s2) R(s′t; s2)\ns1 R(s′t; s1) s2 . . . (b) Choices of deviation measure d: given a state reachability function R, dUR(st; s′t) := 1 −R(st; s′t) is the\nunreachability measure of the baseline state s′t from the current state st (dotted line), while relative reachability\ndRR(st; s′t) := |S|1 Ps∈S max(R(s′t; s) −R(st; s), 0) is defined as the average reduction in reachability of states\ns = s1, s2, . . . from current state st (solid lines) compared to the baseline state s′t (dashed lines). Design choices for a side effects penalty: baseline states and deviation measures. while these methods address the side effects problem in environments where the agent is the only\nsource of change and the objective does not require irreversible actions, a more general criterion is\nneeded when these assumptions do not hold. The contributions of this paper are as follows. In Section 2, we introduce a breakdown of side\neffects penalties into two design choices, a baseline state and a measure of deviation of the current\nstate from the baseline state, as shown in Figure 1. We outline several possible bad incentives\n(interference, offsetting, and magnitude insensitivity) and introduce toy environments that test for\nthem. We argue that interference and offsetting arise from the choice of baseline, while magnitude\ninsensitivity arises from the choice of deviation measure. In Section 2.1, we propose a variant of the\nstepwise inaction baseline, shown in Figure 1a, which avoids interference and offsetting incentives. In\nSection 2.2, we propose a relative reachability measure that is sensitive to the magnitude of the agent's\neffects, which is defined by comparing the reachability of states between the current state and the\nbaseline state, as shown in Figure 1b. (The relative reachability measure was originally introduced in\nthe first version of this paper.) 
We also compare to the attainable utility measure [Turner et al., 2019],\nwhich generalizes the relative reachability measure. In Section 3, we compare all combinations\nof the baseline and deviation measure choices from Section 2. We show that the unreachability\nmeasure produces the magnitude insensitivity incentive for all choices of baseline, while the relative\nreachability and attainable utility measures with the stepwise inaction baseline avoid the three\nundesirable incentives. We do not claim this approach to be a complete solution to the side effects problem, since there\nmay be other cases of bad incentives that we have not considered. However, we believe that avoiding\nthe bad behaviors we described is a bare minimum for an agent to be both safe and useful, so our\napproach provides some necessary ingredients for a solution to the problem. Penalizing side effects using stepwise relative reachability", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 3, |
| "total_chunks": 34, |
| "char_count": 2789, |
| "word_count": 446, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7b390182-5ec3-48e0-9316-76f4758f8b33", |
| "text": "We assume that the environment is a discounted Markov Decision Process (MDP), defined by a tuple\n(S, A, r, p, γ). S is the set of states, A is the set of actions, r : S × A →R is the reward function,\np(st+1|st, at) is the transition function, and γ ∈(0, 1) is the discount factor. At time step t, the agent receives the state st, outputs the action at drawn from its policy π(at|st),\nand receives reward r(st, at). We define a transition as a tuple (st, at, st+1) consisting of state st,\naction at, and next state st+1. We assume that there is a special noop action anoop that has the same\neffect as the agent being turned off during the given time step. Intended effects and side effects We begin with some motivating examples for distinguishing intended and unintended disruptions to\nthe environment: The agent's objective is to get from point A to point B as quickly as possible, and\nthere is a vase in the shortest path that would break if the agent walks into it.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 4, |
| "total_chunks": 34, |
| "char_count": 968, |
| "word_count": 181, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0e61c2be-227a-4642-b958-90e161db2685", |
| "text": "Example 2 (Omelette). The agent's objective is to make an omelette, which requires breaking some\neggs. In both of these cases, the agent would take an irreversible action by default (breaking a vase vs\nbreaking eggs). However, the agent can still get to point B without breaking the vase (at the cost\nof a bit of extra time), but it cannot make an omelette without breaking eggs. We would like to\nincentivize the agent to avoid breaking the vase while allowing it to break the eggs.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 5, |
| "total_chunks": 34, |
| "char_count": 482, |
| "word_count": 86, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8906fb13-29da-42f8-a07b-a36bbb967f45", |
| "text": "Safety criteria are often implemented as constraints [García and Fernández, 2015, Moldovan\nand Abbeel, 2012, Eysenbach et al., 2017]. This approach works well if we know exactly what the\nagent must avoid, but is too inflexible for a general criterion for avoiding side effects. For example,\na constraint that the agent must never make the starting state unreachable would prevent it from\nmaking the omelette in Example 2, no matter how high the reward for doing so.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 6, |
| "total_chunks": 34, |
| "char_count": 465, |
| "word_count": 77, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d14e418d-6f76-494b-92e5-7e811e4b2ce8", |
| "text": "A more flexible way to implement a side effects criterion is by adding a penalty for impacting the\nenvironment to the reward function, which acts as an intrinsic pseudo-reward. An impact penalty\nat time t can be defined as a measure of deviation of the current state st from a baseline state s′t,\ndenoted as d(st; s′t). Then at every time step t, the agent receives the following total reward:\nr(st, at) −β · d(st+1; s′t+1). Since the task reward r indicates whether the agent has achieved the objective, we can distinguish\nintended and unintended effects by balancing the task reward and the penalty using the scaling\nparameter β. Here, the penalty would outweigh the small reward gain from walking into the vase\nover going around the vase, but it would not outweigh the large reward gain from breaking the eggs. Design choices for an impact penalty When defining the impact penalty, the baseline s′t and deviation measure d can be chosen separately. We will discuss several possible choices for each of these components. Starting state baseline. One natural choice of baseline state is the starting state s′t = s0 when the\nagent was deployed (or a starting state distribution), which we call the starting state baseline.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 7, |
| "total_chunks": 34, |
| "char_count": 1222, |
| "word_count": 207, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fef9c312-d37d-4f87-a35f-27d0aed13417", |
| "text": "Penalizing side effects using stepwise relative reachability the baseline used in reversibility-preserving safe exploration approaches, where the agent learns a\nreset policy that is rewarded for reaching states that are likely under the initial state distribution. While penalties with the starting state baseline work well in environments where the agent\nis the only source of change, in dynamic environments they also penalize irreversible transitions\nthat are not caused by the agent. This incentivizes the agent to interfere with other agents and\nenvironment processes to prevent these irreversible transitions. To illustrate this interference behavior,\nwe introduce the Sushi environment, shown in Figure 2. This environment is a Conveyor Belt Sushi restaurant. It contains a conveyor belt that moves\nto the right by one square after every agent action. There is a sushi dish on the conveyor belt that\nis eaten by a hungry human if it reaches the end of the belt. The interference behavior is to move\nthe sushi dish off the belt (by stepping into the square containing the sushi). The agent is rewarded\nfor reaching the goal square, and it can reach the goal with or without interfering with the sushi in\nthe same number of steps. The desired behavior is to reach the goal without interference, by going\nleft and then down. An agent with no penalty performs well in this environment, but as shown in\nSection 3, impact penalties with the starting state baseline produce the interference behavior. Another choice is the inaction baseline s′t = s(0)t : a counterfactual state of the\nenvironment if the agent had done nothing for the duration of the episode. Inaction can be defined\nin several ways. Armstrong and Levinstein [2017] define it as the agent never being deployed:\nconditioning on the event X where the AI system is never turned on.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 8, |
| "total_chunks": 34, |
| "char_count": 1845, |
| "word_count": 302, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f2552ca9-3ac3-42d2-a6ca-aa5fcfce630a", |
| "text": "It can also be defined as\nfollowing some baseline policy, e.g. a policy that always takes the noop action anoop. We use this\nnoop policy as the inaction baseline. Penalties with this baseline do not produce the interference behavior in dynamic environments,\nsince transitions that are not caused by the agent would also occur in the counterfactual where the\nagent does nothing, and thus are not penalized. However, the inaction baseline incentivizes another\ntype of undesirable behavior, called offsetting. We introduce a Vase environment to illustrate this\nbehavior, shown in Figure 3. This environment also contains a conveyor belt, with a vase that will break if it reaches the end\nof the belt. The agent receives a reward for taking the vase off the belt. The desired behavior is to\nmove the vase off and then stay put. The offsetting behavior is to move the vase off (thus collecting\nthe reward) and then put it back on, as shown in Figure 4. Offsetting happens because the vase breaks in the inaction counterfactual. Once the agent takes\nthe vase off the belt, it continues to receive penalties for the deviation between the current state and\nthe baseline. Thus, it has an incentive to return to the baseline by breaking the vase after collecting Penalizing side effects using stepwise relative reachability", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 9, |
| "total_chunks": 34, |
| "char_count": 1313, |
| "word_count": 219, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "9215cfd2-1413-43b6-b278-08c5683c0a46", |
| "text": "(a) Agent takes the vase off the belt. (b) Agent goes around the vase. (c) Agent puts the vase back on the belt. Offsetting behavior in the Vase environment. Experiments in Section 3 show that impact penalties with the inaction baseline produce\nthe offsetting behavior if they have a nonzero penalty for taking the vase off the belt. Stepwise inaction baseline. The inaction baseline can be modified to branch off from the\nprevious state st−1 rather than the starting state s0. This is the stepwise inaction baseline s′t = s(t−1)t : a\ncounterfactual state of the environment if the agent had done nothing instead of its last action [Turner\net al., 2019]. This baseline state is generated by a baseline policy that follows the agent policy for\nthe first t −1 steps, and takes an action drawn from the inaction policy (e.g. the noop action anoop)\non step t.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 10, |
| "total_chunks": 34, |
| "char_count": 855, |
| "word_count": 149, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2968c70f-3b8e-416c-9308-0b878864a1c3", |
| "text": "Each transition is penalized only once, at the same time as it is rewarded, so there is no\noffsetting incentive. However, there is a problem with directly comparing current state st with s(t−1)t : this does not\ncapture delayed effects of action at−1. For example, if this action is putting a vase on a conveyor belt,\nthen the current state st contains the intact vase, and by the time the vase breaks, the broken vase\nwill be part of the baseline state. Thus, the penalty for action at−1 needs to be modified to take into\naccount future effects of this action, e.g. by using inaction rollouts from the current state and the\nbaseline (Figure 5). Inaction rollouts from the current state st and baseline state s′t used for penalizing delayed\neffects of the agent's actions. If action at−1 puts a vase on a conveyor belt, then the vase breaks in\nthe inaction rollout from st but not in the inaction rollout from s′t. An inaction rollout from state ˜st ∈{st, s′t} is a sequence of states obtained by following the\ninaction policy starting from that state: ˜st, ˜s(t)t+1, ˜s(t)t+2, . . . . Future effects of action at−1 can be modeled\nby comparing an inaction rollout from st to an inaction rollout from s(t−1)t . For example, if action\nat−1 puts the vase on the belt, and the vase breaks 2 steps later, then s(t)t+2 will contain a broken vase,\nwhile s′(t)t+2 will not. Turner et al. [2019] compare the inaction rollouts s(t)t+k and s′(t)t+k at a single time Penalizing side effects using stepwise relative reachability step t + k, which is simple to compute, but does not account for delayed effects that occur after that\ntime step. We will introduce a recursive formula for comparing the inaction rollouts s(t)t+k and s′(t)t+k\nfor all k ≥0 in Section 2.2. One natural choice of deviation measure is the difficulty of reaching the baseline\nstate s′t from the current state st. 
Reachability of the starting state s0 is commonly used as a constraint\nin safe exploration methods [Moldovan and Abbeel, 2012, Eysenbach et al., 2017], where the agent\ndoes not take an action if it makes the reachability value function too low.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 11, |
| "total_chunks": 34, |
| "char_count": 2117, |
| "word_count": 370, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b99a2ab5-3609-4296-8a4c-ad04f158ba26", |
| "text": "We define reachability of state y from state x as the value function of the optimal policy given a\nreward of 1 for reaching y and 0 otherwise: R(x; y) := max E γNπ(x;y)r π where Nπ(x; y) is the number of steps it takes to reach y from x when following policy π, and\nγr ∈(0, 1] is the reachability discount factor. This can be computed recursively as follows: R(x; y) = γr max X p(z|x, a)R(z; y) for x ̸= y\nz∈S\nR(y; y) = 1 A special case is undiscounted reachability (γr = 1), which computes whether y is reachable in\nany number of steps. We show that undiscounted reachability reduces to R(x; y) = max P(Nπ(x; y) < ∞). The unreachability (UR) deviation measure is then defined as dUR(st; s′t) := 1 −R(st; s′t).", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 12, |
| "total_chunks": 34, |
| "char_count": 710, |
| "word_count": 138, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "bb5161ea-9d61-4df3-a592-3d388a827989", |
| "text": "The undiscounted unreachability measure only penalizes irreversible transitions, while the\ndiscounted measure also penalizes reversible transitions. A problem with the unreachability measure is that it takes the maximum value of 1 if the agent\ntakes any irreversible action (since the reachability of the baseline becomes 0). Thus, the agent\nreceives the maximum penalty independently of the magnitude of the irreversible action, e.g. whether\nthe agent breaks one vase or a hundred vases.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 13, |
| "total_chunks": 34, |
| "char_count": 488, |
| "word_count": 72, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "973a36f8-c9ea-4dee-8452-e3b3135c7ec8", |
| "text": "This can lead to unsafe behavior, as demonstrated on\nthe Box environment from the AI Safety Gridworlds suite [Leike et al., 2017], shown in Figure 6. The environment contains a box that needs to be pushed out of the way for the agent to reach\nthe goal. The unsafe behavior is taking the shortest path to the goal, which involves pushing the box Penalizing side effects using stepwise relative reachability down into a corner (an irrecoverable position). The desired behavior is to take a slightly longer path\nin order to push the box to the right. The action of moving the box is irreversible in both cases: if the box is moved to the right, the\nagent can move it back, but then the agent ends up on the other side of the box. Thus, the agent\nreceives the maximum penalty of 1 for moving the box in any direction, so the penalty does not\nincentivize the agent to choose the safe path. Section 3 confirms that the unreachability penalty fails\non the Box environment for all choices of baseline.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 14, |
| "total_chunks": 34, |
| "char_count": 993, |
| "word_count": 180, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "184aa8a6-0511-4ba9-87dc-a1add906b4e4", |
| "text": "Relative reachability. To address the magnitude-sensitivity problem, we now introduce a\nreachability-based measure that is sensitive to the magnitude of the irreversible action. We define\nthe relative reachability (RR) measure as the average reduction in reachability of all states s from the\ncurrent state st compared to the baseline s′t: dRR(st; s′t) := X max(R(s′t; s) −R(st; s), 0) |S|\ns∈S The RR measure is nonnegative everywhere, and zero for states st that reach or exceed baseline\nreachability of all states. See Figure 1b for an illustration. In the Box environment, moving the box down makes more states unreachable than moving the\nbox to the right (in particular, all states where the box is not in a corner become unreachable). Thus,\nthe agent receives a higher penalty for moving the box down, and has an incentive to move the box\nto the right. Attainable utility Another magnitude-sensitive deviation measure, which builds on the presentation of the RR measure in the first version of this paper, is the attainable utility (AU) measure [Turner\net al., 2019]. Observing that the informal notion of value may be richer than mere reachability of\nstates, AU considers a set R of arbitrary reward functions. We can define this measure as follows:", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 15, |
| "total_chunks": 34, |
| "char_count": 1255, |
| "word_count": 206, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ff5363d1-aa8f-49ca-bfa1-ecc2f1bf7989", |
| "text": "dAU(st; s′t) := X |Vr(s′t) −Vr(st)| |R|\nr∈R\nwhere Vr(˜s) := max X γkr x(˜sπt ) π\nt=0 is the value of state ˜s according to reward function r (here ˜sπt denotes the state obtained from ˜s by\nfollowing π for t steps). In the Box environment, the AU measure gives a higher penalty for moving the box into a corner,\nsince this affects the attainability of reward functions that reward states where the box is not in the\ncorner. Thus, similarly to the RR measure, it incentivizes the agent to move the box to the right. Value-difference measures. The RR and AU deviation measures are examples of what we call\nvalue-difference measures, whose general form is:", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 16, |
| "total_chunks": 34, |
| "char_count": 653, |
| "word_count": 117, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b95c74ff-5750-477d-ac4f-58b729568973", |
| "text": "dV D(st; s′t) := X wxf(Vx(s′t) −Vx(st)) where x ranges over some sources of value, Vx(˜s) is the value of state ˜s according to x, wx is a\nweighting or normalizing factor, and f is the function for summarizing the value difference. Thus\nvalue-difference measures calculate a weighted summary of the differences in measures of value\nbetween the current and baseline states.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 17, |
| "total_chunks": 34, |
| "char_count": 372, |
| "word_count": 62, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c50246d9-e656-4576-b96f-88e96d912227", |
| "text": "Penalizing side effects using stepwise relative reachability\n\nFor RR, we take x to range over states in S and V_x(˜s) = R(˜s; x), so the sources of value are, for each state, the reachability of that state. We take w_x = 1/|S| and f(d) = max(d, 0) ('truncated difference'), which penalizes decreases (but not increases) in value. For AU, we take x to range over reward functions in R and V_x(˜s) as above, so the sources of value are, for each reward function, the maximum attainable reward according to that function. We take w_x = 1/|R| and f(d) = |d| ('absolute difference'), which penalizes all changes in value. The choice of summary function f is orthogonal to the other choices: we can also consider the absolute difference for RR and the truncated difference for AU. One can view AU as a generalization of RR under certain conditions: namely, if we have one reward function per state that assigns value 1 to that state and 0 otherwise, assuming the state cannot be reached again later in the same episode.\n\nModifications required with the stepwise inaction baseline. To capture the delayed effects of actions, we modify each of the deviation measures to incorporate the inaction rollouts from s_t and s′_t = s^(t−1)_t (shown in Figure 5). We denote the modified measure with an S prefix (for 'stepwise inaction baseline'):\n\nd_SUR(s_t; s′_t) := 1 − (1 − γ) Σ_{k=0}^∞ γ^k R(s^(t)_{t+k}; s′^(t)_{t+k})\nd_SVD(s_t; s′_t) := Σ_x w_x f(RV_x(s′_t) − RV_x(s_t)), where RV_x(˜s_t) := (1 − γ) Σ_{k=0}^∞ γ^k V_x(˜s^(t)_{t+k}).\n\nWe call RV_x(˜s_t) the rollout value of ˜s_t ∈ {s_t, s′_t} with respect to x. In a deterministic environment, the UR measure d_SUR(s_t; s′_t) and the rollout value RV_x(˜s_t) used in the value-difference measures d_SRR(s_t; s′_t) and d_SAU(s_t; s′_t) can be computed recursively as follows:\n\nd_SUR(s_1; s_2) = (1 − γ)(1 − R(s_1; s_2)) + γ d_SUR(I(s_1); I(s_2))\nRV_x(s) = (1 − γ) V_x(s) + γ RV_x(I(s)),\n\nwhere I(s) is the inaction function that gives the state reached by following the inaction policy from state s (this is the identity function in static environments). We run a tabular Q-learning agent with different penalties on the gridworld environments introduced in Section 2.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 18, |
| "total_chunks": 34, |
| "char_count": 2110, |
| "word_count": 353, |
| "chunking_strategy": "semantic" |
| }, |
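The rollout value and stepwise-inaction UR measure defined in the chunk above can be sketched directly from the formulas. This is a minimal illustration, assuming a deterministic environment with a finite state set, the inaction function I given as a dict, tabular reachability R and value V, and a finite horizon truncating the infinite discounted sums; names and data structures are illustrative, not the paper's implementation.

```python
# Minimal sketch of the rollout value RV_x and the stepwise-inaction UR
# measure d_SUR in a deterministic environment. Assumptions (not from the
# paper's code): states are hashable keys, I maps each state to its
# successor under the inaction (noop) policy, V and R are lookup tables,
# and the infinite discounted sums are truncated at a finite horizon.

def rollout_value(s, V, I, gamma, horizon=1000):
    """RV(s) = (1 - gamma) * sum_{k>=0} gamma^k * V(I^k(s)), truncated."""
    total, state = 0.0, s
    for k in range(horizon):
        total += gamma ** k * V[state]
        state = I[state]
    return (1.0 - gamma) * total

def d_sur(s1, s2, R, I, gamma, horizon=1000):
    """Unrolls d_SUR(s1; s2) = (1 - gamma)(1 - R(s1; s2)) + gamma * d_SUR(I(s1); I(s2))."""
    total, disc = 0.0, 1.0
    for _ in range(horizon):
        total += disc * (1.0 - gamma) * (1.0 - R.get((s1, s2), 0.0))
        disc *= gamma
        s1, s2 = I[s1], I[s2]
    return total
```

In a static environment I is the identity, so `rollout_value(s, ...)` reduces to approximately `V[s]`, matching the (1 − γ) normalization of the rollout value.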
| { |
| "chunk_id": "978f9007-5348-4eee-a8ab-af8aa8c456f3", |
| "text": "While these environments are simplistic, they provide a proof of concept by clearly\nillustrating the desirable and undesirable behaviors, which would be more difficult to isolate in more\ncomplex environments.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 19, |
| "total_chunks": 34, |
| "char_count": 208, |
| "word_count": 30, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4dc1b58f-4372-4284-bf2a-2d470fa5250e", |
| "text": "We compare all combinations of the following design choices for an impact penalty:\n• Baselines: starting state s_0, inaction s^(0)_t, stepwise inaction s^(t−1)_t.\n• Deviation measures: unreachability (UR) (d_SUR(s_t; s′_t) for the stepwise inaction baseline, d_UR(s_t; s′_t) for the other baselines), and the value-difference measures, relative reachability (RR) and attainable utility (AU) (d_SVD(s_t; s′_t) for the stepwise inaction baseline, d_VD(s_t; s′_t) for the other baselines, for VD ∈ {RR, AU}).\n• Discounting: γ_r = 0.99 (discounted), γ_r = 1.0 (undiscounted). (We omit the undiscounted case for AU due to convergence issues.)\n• Summary functions: truncation f(d) = max(d, 0), absolute f(d) = |d|.\n\nThe reachability function R is approximated based on states and transitions that the agent has encountered. It is initialized as R(x; y) = 1 if x = y and 0 otherwise (initially, distinct states are assumed unreachable from each other). When the agent makes a transition (s_t, a_t, s_{t+1}), we make a shortest-path update to the reachability function: for any two states x and y such that s_t is reachable from x and y is reachable from s_{t+1}, we update R(x; y). This approximation assumes a deterministic environment.\n\nSimilarly, the value functions V_r used for attainable utility are approximated based on the states and transitions encountered. For each state y, we track the set of states x for which a transition to y has been observed. When the agent makes a transition, we make a Bellman update to the value function of each reward function r, setting V_r(x) ← max(V_r(x), u(x) + γ_r V_r(y)) for all pairs of states such that y is known to be reachable from x in one step.\n\nWe use a perfect environment model to obtain the outcomes of noop actions a_noop for the inaction and stepwise inaction baselines. We leave model-free computation of the baseline to future work.\n\nIn addition to the reward function, each environment has a performance function, originally introduced by Leike et al. [2017], which is not observed by the agent.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 20, |
| "total_chunks": 34, |
| "char_count": 2049, |
| "word_count": 337, |
| "chunking_strategy": "semantic" |
| }, |
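The shortest-path update to the tabular reachability function described in the chunk above can be sketched as follows. This is an illustrative reconstruction under the paper's stated deterministic-environment assumption; the class name and data layout are hypothetical, not taken from the paper's code.

```python
# Sketch of the tabular reachability approximation: R[(x, y)] approximates
# the discounted reachability of y from x. It starts as the identity
# (each state reachable only from itself) and is improved by a
# shortest-path-style update on each observed transition. Assumes a
# deterministic environment, as stated in the text.

class ReachabilityTable:
    def __init__(self, states, gamma=0.99):
        self.gamma = gamma
        self.states = states
        # Initially, distinct states are assumed unreachable from each other.
        self.R = {(x, y): 1.0 if x == y else 0.0 for x in states for y in states}

    def update(self, s, s_next):
        """Shortest-path update for an observed transition s -> s_next."""
        for x in self.states:
            if self.R[(x, s)] == 0.0:
                continue  # s is not known to be reachable from x
            for y in self.states:
                if self.R[(s_next, y)] == 0.0:
                    continue  # y is not known to be reachable from s_next
                # Path x ~> s -> s_next ~> y, discounted by one extra step.
                candidate = self.R[(x, s)] * self.gamma * self.R[(s_next, y)]
                self.R[(x, y)] = max(self.R[(x, y)], candidate)
```

The Bellman update for the attainable-utility value functions described in the same passage has the same flavor: on each observed transition, take the max of the stored value and the one-step backup through the newly observed edge.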
| { |
| "chunk_id": "07099cda-0205-431a-8165-63fb425c2e0a", |
| "text": "This represents the agent's\nperformance according to the designer's true preferences: it reflects how well the agent achieves the\nobjective and whether it does so safely.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 21, |
| "total_chunks": 34, |
| "char_count": 170, |
| "word_count": 26, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7f55f32d-af32-4a76-b064-96237be1f8a9", |
| "text": "We anneal the exploration rate linearly from 1 to 0 over 9000 episodes, and keep it at 0 for\nthe next 1000 episodes. For each penalty on each environment, we use a grid search to tune\nthe scaling parameter β, choosing the value of β that gives the highest average performance on\nthe last 100 episodes. (The grid search is over β = 0.1, 0.3, 1, 3, 10, 30, 100, 300.) Figure 7 shows\nscaled performance results for all penalties, where a value of 1 corresponds to optimal performance\n(achieved by the desired behavior), and a value of 0 corresponds to undesired behavior (such as\ninterference or offsetting). The environment is shown in Figure 2.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 22, |
| "total_chunks": 34, |
| "char_count": 643, |
| "word_count": 115, |
| "chunking_strategy": "semantic" |
| }, |
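The tuning procedure in the chunk above (linear annealing of the exploration rate, then a grid search over the penalty scale β) can be sketched as follows. `run_episode` is a hypothetical stand-in for training one episode at a given β and exploration rate and returning its performance score; the schedule constants match the numbers in the text.

```python
# Sketch of the hyperparameter sweep described above: anneal exploration
# linearly from 1 to 0 over 9000 episodes, hold it at 0 for 1000 more,
# and pick the penalty scale beta with the best mean performance over the
# last 100 episodes. `run_episode` is an illustrative stand-in, not part
# of the paper's code.

def epsilon_schedule(episode, anneal_episodes=9000):
    """Linear anneal from 1 to 0, then hold at 0."""
    if episode >= anneal_episodes:
        return 0.0
    return 1.0 - episode / anneal_episodes

def tune_beta(run_episode, betas=(0.1, 0.3, 1, 3, 10, 30, 100, 300),
              total_episodes=10000, eval_window=100):
    best_beta, best_perf = None, float("-inf")
    for beta in betas:
        perfs = [run_episode(beta, epsilon_schedule(ep))
                 for ep in range(total_episodes)]
        mean_perf = sum(perfs[-eval_window:]) / eval_window
        if mean_perf > best_perf:
            best_beta, best_perf = beta, mean_perf
    return best_beta, best_perf
```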
| { |
| "chunk_id": "e2d3ee22-bf68-4a8f-ad7c-9b3d069e41ca", |
| "text": "The agent receives a reward of 50 for reaching the goal (which terminates the episode), and no movement penalty. An agent with no penalty achieves scaled performance 0.8 (avoiding interference most of the time). Here, all penalties with the inaction and stepwise inaction baselines reach near-optimal performance. The RR and AU penalties with the starting state baseline produce the interference behavior (removing the sushi from the belt), resulting in scaled performance 0. However, since the starting state is unreachable no matter what the agent does, the UR penalty is always at its maximum value of 1, so, similarly to no penalty, it does not produce the interference behavior. The discounting and summary function choices do not make much difference in this environment.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 23, |
| "total_chunks": 34, |
| "char_count": 771, |
| "word_count": 120, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4a9fd109-729c-4be9-9506-82fb4017bf07", |
| "text": "Sushi environment summary:\n    Starting state  Inaction  Stepwise inaction\nUR  ✓               ✓         ✓\nRR  ✗               ✓         ✓\nAU  ✗               ✓         ✓\n\nThe environment is shown in Figure 3. The agent receives a reward of 50 for taking the vase off the belt.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 24, |
| "total_chunks": 34, |
| "char_count": 201, |
| "word_count": 41, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f3419d86-1aa4-4fad-98d3-d9e2ccbf04a3", |
| "text": "The episode lasts 20 steps, and there is no movement penalty. An agent\nwith no penalty achieves scaled performance 0.98. Unsurprisingly, all penalties with the starting state\nbaseline perform well here. With the inaction baseline, the discounted UR and RR penalties receive\nscaled performance 0, which corresponds to the offsetting behavior of moving the vase off the belt\nand then putting it back on, shown in Figure 4. Surprisingly, discounted AU with truncation avoids", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 25, |
| "total_chunks": 34, |
| "char_count": 471, |
| "word_count": 74, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d5bfc4e5-1e7d-47bb-b8dd-3644bee6101e", |
| "text": "[Figure 7 caption (panels include (a) Sushi environment and (d) Survival environment): Scaled performance results for different combinations of design choices (averaged over 20 seeds). The columns are different baseline choices: starting state, inaction, and stepwise inaction. The bars in each plot are results for different deviation measures (UR, RR, and AU), with discounted and undiscounted versions indicated by (d) and (u) respectively, and truncation and absolute summary functions indicated by (t) and (a) respectively. 1 is optimal performance and 0 is the performance achieved by unsafe behavior (when the box is pushed into a corner, the vase is broken, the sushi is taken off the belt, or the off switch is disabled).]\n\noffsetting some of the time, which is probably due to convergence issues. The undiscounted versions with the truncation function avoid this behavior: since the action of taking the vase off the belt is reversible, it is not penalized at all, so there is nothing to offset. All penalties with the absolute function produce the offsetting behavior, since removing the vase from the belt is always penalized. All penalties with the stepwise inaction baseline perform well on this environment, showing that this baseline does not produce the offsetting incentive.\n\nVase environment summary:\n                              Starting state  Inaction  Stepwise inaction\nUR (discounted)               ✓               ✗         ✓\nUR (undiscounted)             ✓               ✓         ✓\nRR (discounted, truncation)   ✓               ✗         ✓\nRR (discounted, absolute)     ✓               ✗         ✓\nRR (undiscounted, truncation) ✓               ✓         ✓\nRR (undiscounted, absolute)   ✓               ✗         ✓\nAU (discounted, truncation)   ✓               ?         ✓\nAU (discounted, absolute)     ✓               ✗         ✓\n\nThe environment is shown in Figure 6. The agent receives a reward of 50 for reaching the goal (which terminates the episode), and a movement penalty of -1.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 26, |
| "total_chunks": 34, |
| "char_count": 1840, |
| "word_count": 291, |
| "chunking_strategy": "semantic" |
| }, |
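The role of the summary function in the offsetting results above can be checked with a toy computation: under undiscounted reachability, a reversible action causes no decrease in any source of value, so the truncated difference yields zero penalty, while the absolute difference penalizes any change. The values below are illustrative 0/1-style reachabilities, not the paper's data.

```python
# Toy check of the summary-function argument: truncation f(d) = max(d, 0)
# penalizes only decreases in value, while absolute f(d) = |d| penalizes
# all changes. A reversible action that only *increases* reachability is
# therefore free under truncation but penalized under absolute difference.
# Values are illustrative, not from the paper.

def deviation(values_baseline, values_current, f):
    """Sum of f(V_baseline(x) - V_current(x)) over sources of value x."""
    return sum(f(b - c) for b, c in zip(values_baseline, values_current))

truncated = lambda d: max(d, 0.0)
absolute = abs

# Reversible action: no state becomes less reachable, one becomes more so.
baseline = [1.0, 1.0, 0.5]
current = [1.0, 1.0, 1.0]
```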
| { |
| "chunk_id": "1bd611ec-7e26-4839-aed4-4ba4ccdd18c8", |
| "text": "The starting state and inaction baselines are the same in this environment, while the stepwise inaction baseline is different. The safe longer path to the goal receives scaled performance 1, while the unsafe shorter path that puts the box in the corner receives scaled performance 0. An agent with no penalty achieves scaled performance 0. For all baselines, RR and AU achieve near-optimal scaled performance, while UR achieves scaled performance 0. This happens because the UR measure is not magnitude-sensitive, and thus does not distinguish between irreversible actions that result in recoverable and irrecoverable box positions, as described in Section 2.2.\n\nBox environment summary:\n    Starting state  Inaction  Stepwise inaction\nUR  ✗               ✗         ✗\nRR  ✓               ✓         ✓\nAU  ✓               ✓         ✓\n\nOverall, the combinations of design choices that perform best across all environments are RR and AU with the stepwise inaction baseline, and undiscounted RR with the inaction baseline. Since the undiscounted RR measure only penalizes irreversible transitions, a penalty that aims to penalize reversible transitions as well has to be combined with the stepwise inaction baseline.\n\nEffects on interruptibility. We also examine the effects of impact-measure design choices on whether the agent is interruptible [Orseau and Armstrong, 2016], using the Survival Incentive environment introduced in Turner et al. [2019]. In this environment, the agent has the option to disable an off switch, which prevents the episode from ending before the agent reaches the goal.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 27, |
| "total_chunks": 34, |
| "char_count": 1513, |
| "word_count": 236, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b3efcf87-4c13-4d8d-8727-2b3dabce307a", |
| "text": "We only have results for the inaction baseline, since the deterministic assumption for our implementation of recursive inaction rollouts does not hold in this environment. Results for the stepwise inaction baseline with non-recursive rollouts are given in Turner et al. [2019]. All penalties with the starting state baseline avoid disabling the off switch, probably because this is an irreversible action. With the inaction baseline, the decrease-penalizing variants of RR and AU disable the off switch, while the difference-penalizing variants do not, as shown in Figure 7d. (Note that this does not hold in the Safe Interruptibility environment in the AI Safety Gridworlds suite, where interruption is implemented as the agent getting stuck rather than terminating the episode.) However, penalizing differences in reachability or attainable utility also has downsides, since this can impede the agent's ability to create desirable change in the environment more than penalizing decreases.\n\nSurvival environment summary:\n                              Starting state  Inaction\nUR (discounted)               ✓               ✓\nUR (undiscounted)             ✓               ✓\nRR (discounted, truncation)   ✓               ✗\nRR (discounted, absolute)     ✓               ✓\nRR (undiscounted, truncation) ✓               ✗\nRR (undiscounted, absolute)   ✓               ✓\nAU (discounted, truncation)   ✓               ✗\nAU (discounted, absolute)     ✓               ✓\n\nAdditional related work. Safe exploration methods prevent the agent from taking harmful actions by enforcing safety constraints [Turchetta et al., 2016, Dalal et al., 2018], penalizing risk [Chow et al., 2015, Mihatsch and Neuneier, 2002], using intrinsic motivation [Lipton et al., 2016], preserving reversibility [Moldovan and Abbeel, 2012, Eysenbach et al., 2017], etc. Explicitly defined constraints or safe regions tend to be task-specific and require significant human input, so they do not provide a general solution to the side effects problem. Penalizing risk and intrinsic motivation can help the agent avoid low-reward states (such as getting trapped or damaged), but do not discourage the agent from damaging the environment if this is not accounted for in the reward function. Reversibility-preserving methods produce interference and magnitude-insensitivity incentives, as discussed in Section 2.\n\nSide effects criteria using state features. Zhang et al. [2018] assume a factored MDP where the agent is allowed to change some of the features, and propose a criterion for querying the supervisor about changing other features in order to allow for intended effects on the environment.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 28, |
| "total_chunks": 34, |
| "char_count": 2527, |
| "word_count": 375, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6760dcb1-c26c-489b-bfaa-fa83b243828f", |
| "text": "Shah et al. [2019] define an auxiliary reward for avoiding side effects in terms of state features\nby assuming that the starting state of the environment is already organized according to human\npreferences. Since the latter method uses the starting state as a baseline, we would expect it to\nproduce interference behavior in dynamic environments. While these approaches are promising,\nthey are not general in their present form due to reliance on state features.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 29, |
| "total_chunks": 34, |
| "char_count": 462, |
| "word_count": 74, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3f8ee507-30db-42c3-b1bb-673288dbf821", |
| "text": "Our RR measure is related to empowerment [Klyubin et al., 2005, Salge et al., 2014, Mohamed and Rezende, 2015, Gregor et al., 2017], a measure of the agent's control over its environment, defined as the highest possible mutual information between the agent's actions and the future state. Empowerment measures the agent's ability to reliably reach many states, while RR penalizes the reduction in reachability of states relative to the baseline. Maximizing empowerment would encourage the agent to avoid irreversible side effects, but would also incentivize interference behavior, and it is unclear to us how to define an empowerment-based measure that would avoid this. One possibility would be to penalize the reduction in empowerment between the current state s_t and the baseline s′_t. However, empowerment is indifferent between these two situations: A) the same states are reachable from s_t and s′_t; and B) a state s_1 is reachable from s′_t but unreachable from s_t, while another state s_2 is reachable from s_t but unreachable from s′_t. Thus, penalizing reduction in empowerment would miss some side effects: e.g. if the agent replaced the sushi on the conveyor belt with a vase, empowerment could remain the same, and so the agent would not be penalized for destroying the vase.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 30, |
| "total_chunks": 34, |
| "char_count": 1339, |
| "word_count": 212, |
| "chunking_strategy": "semantic" |
| }, |
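The difference between situations A and B in the chunk above can be made concrete with a toy computation, sketched here with 0/1 reachabilities given directly (an illustration of the argument, not the paper's implementation): an empowerment-like count of reachable states sees no change in situation B, while RR penalizes the lost state.

```python
# Toy illustration of why relative reachability (RR) penalizes a "swap"
# of reachable states while a count-based, empowerment-like proxy does
# not. Reachable sets are given directly for illustration.

def rr_penalty(reachable_baseline, reachable_current, all_states):
    """RR deviation: average truncated decrease in 0/1 reachability."""
    total = 0.0
    for x in all_states:
        v_base = 1.0 if x in reachable_baseline else 0.0
        v_cur = 1.0 if x in reachable_current else 0.0
        total += max(v_base - v_cur, 0.0)  # truncated difference
    return total / len(all_states)

# Situation B: s1 lost, s2 gained. The *count* of reachable states is
# unchanged (an empowerment-like proxy sees nothing), but RR penalizes
# the loss of s1.
states = {"s1", "s2", "s3"}
baseline, current = {"s1", "s3"}, {"s2", "s3"}
```

In situation A (identical reachable sets) the penalty is zero; in situation B it is positive, which is exactly the distinction the text argues empowerment misses.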
| { |
| "chunk_id": "778b1fde-6110-46b6-ae1d-1bb1aabb892e", |
| "text": "Uncertainty about the objective. Inverse Reward Design [Hadfield-Menell et al., 2017] incorporates uncertainty about the objective by considering alternative reward functions that are consistent\nwith the given reward function in the training environment. This helps the agent avoid some side\neffects that stem from distributional shift, where the agent encounters a new state that was not\npresent in training. However, this method assumes that the given reward function is correct for\nthe training environment, and so does not prevent side effects caused by a reward function that is\nmisspecified in the training environment. Quantilization [Taylor, 2016] incorporates uncertainty by\ntaking actions from the top quantile of actions, rather than the optimal action. These methods help\nto prevent side effects, but do not provide a way to quantify side effects. An alternative to specifying a side effects penalty is to teach the agent to\navoid side effects through human oversight, such as inverse reinforcement learning [Ng and Russell,\n2000, Ziebart et al., 2008, Hadfield-Menell et al., 2016], demonstrations [Abbeel and Ng, 2004,\nHester et al., 2018], or human feedback [Christiano et al., 2017, Saunders et al., 2017, Warnell et al.,\n2018]. It is unclear how well an agent can learn a general heuristic for avoiding side effects from\nhuman oversight. We expect this to depend on the diversity of settings in which it receives oversight\nand its ability to generalize from those settings, which are difficult to quantify. We expect that an\nintrinsic penalty for side effects would be more robust and more reliably result in avoiding them. Such a penalty could also be combined with human oversight to decrease the amount of human\ninput required for an agent to learn human preferences. 
We have outlined a set of bad incentives (interference, offsetting, and magnitude insensitivity) that can arise from a poor choice of baseline or deviation measure, and proposed design choices that avoid these incentives in preliminary experiments. There are many possible directions where we would like to see follow-up work.\n\nScalable implementation. The RR measure in its exact form is not tractable for environments more complex than gridworlds.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 31, |
| "total_chunks": 34, |
| "char_count": 2236, |
| "word_count": 349, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "90d48660-f2e0-4ec7-86bb-56e9e67a8c1e", |
| "text": "In particular, we compute reachability between all pairs of states, and use an environment simulator to compute the baseline. A more practical implementation could compute reachability over some set of representative states instead of all states. For example, the agent could learn a set of auxiliary policies for reaching distinct states, similarly to the method for approximating empowerment in Gregor et al. [2017].\n\nBetter choices of baseline. Using noop actions to define inaction for the stepwise inaction baseline can be problematic, since the agent is not penalized for causing side effects that would occur in the noop baseline. For example, if the agent is driving a car on a winding road, then at any point the default outcome of a noop is a crash, so the agent would not be penalized for spilling coffee in the car. This could be avoided using a better inaction baseline, such as following the road, but this can be challenging to define in a task-independent way. There is a need for theoretical work on characterizing and formalizing the undesirable incentives that arise from different design choices in penalizing side effects.\n\nTaking into account reward costs. While the discounted relative reachability measure takes into account the time costs of reaching various states, it does not take into account reward costs. For example, suppose the agent can reach state s from the current state in one step, but this step would incur a large negative reward. Discounted reachability could be modified to reflect this by adding a term for reward costs.\n\nWeights over the state space. In practice, we often value the reachability of some states much more than others.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 32, |
| "total_chunks": 34, |
| "char_count": 1719, |
| "word_count": 279, |
| "chunking_strategy": "semantic" |
| }, |
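One way the state-weighting idea above could look, as a sketch: a per-state weight w_s in the truncated-difference relative reachability sum, so that losing reachability of highly valued states costs more. The weights here are hand-set for illustration; as the paper suggests, they could instead be learned through human feedback.

```python
# Sketch of relative reachability with per-state weights w_s, an
# illustrative extension of the RR sum (not the paper's implementation).
# Reachabilities are given as dicts mapping state -> value in [0, 1].

def weighted_rr(reach_baseline, reach_current, weights):
    """sum_s w_s * max(R_baseline(s) - R_current(s), 0), normalized by sum of w_s."""
    total_w = sum(weights.values())
    penalty = 0.0
    for s, w in weights.items():
        drop = max(reach_baseline.get(s, 0.0) - reach_current.get(s, 0.0), 0.0)
        penalty += w * drop
    return penalty / total_w
```

With a large weight on a valued state (say, a vase) and a small weight on an unimportant one (dust), making the vase unreachable dominates the penalty.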
| { |
| "chunk_id": "2955d307-d7a3-4e3c-84fa-e150447f5dfe", |
| "text": "This could be incorporated into the relative reachability measure by adding a weight w_s for each state s in the sum. Such weights could be learned through human feedback methods, e.g. Christiano et al. [2017].\n\nWe hope this work lays the foundations for a practical methodology for avoiding side effects that would scale well to more complex environments.\n\nAcknowledgments. We are grateful to Jan Leike, Pedro Ortega, Tom Everitt, Alexander Turner, David Krueger, Murray Shanahan, Janos Kramar, Jonathan Uesato, Tam Masterson and Owain Evans for helpful feedback on drafts. We would like to thank them and Toby Ord, Stuart Armstrong, Geoffrey Irving, Anthony Aguirre, Max Wainwright, Jaime Fisac, Rohin Shah, Jessica Taylor, Ivo Danihelka, and Shakir Mohamed for illuminating conversations.", |
| "paper_id": "1806.01186", |
| "title": "Penalizing side effects using stepwise relative reachability", |
| "authors": [ |
| "Victoria Krakovna", |
| "Laurent Orseau", |
| "Ramana Kumar", |
| "Miljan Martic", |
| "Shane Legg" |
| ], |
| "published_date": "2018-06-04", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1806.01186v2", |
| "chunk_index": 33, |
| "total_chunks": 34, |
| "char_count": 770, |
| "word_count": 119, |
| "chunking_strategy": "semantic" |
| } |
| ] |