Motivation
Robots face two challenges in natural environments:
● Underspecified goals: no human to specify an exact goal configuration
● Uncertain dynamics: the effects of the robot's actions on novel objects are uncertain
Approach:
● For underspecified goals:
  • Pose the task as a constrained optimization problem over a set of reward or cost terms.
  • Terms can be defined manually or modeled from human input
● For uncertain dynamics:
  • Quickly approximate the dynamics for a set of actions
  • Plan efficiently using sampling-based techniques
Our algorithm:
● Searches in object configuration space using Rapidly-exploring Random Trees (RRT)
● Adds leaves to the search tree by forward-simulating the learned dynamics for each object–action pair
● Uses the directGD heuristic to quickly search the optimization landscape
● Returns a plan from the starting state to the best reachable state under the given cost function (see the sketch after this list)
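A minimal sketch of the planner loop described above. All names here (sample_config, forward_model, cost, ACTIONS) are hypothetical stand-ins for the learned dynamics and cost terms, not the actual implementation; the linear push effects are toy assumptions.

```python
import random
import math

ACTIONS = ["push_left", "push_right", "push_up", "push_down"]  # action primitives

def sample_config(bounds):
    """Sample a random object configuration (x, y, theta) in the workspace."""
    (xlo, xhi), (ylo, yhi) = bounds
    return (random.uniform(xlo, xhi), random.uniform(ylo, yhi),
            random.uniform(-math.pi, math.pi))

def forward_model(state, action):
    """Stand-in for the learned dynamics: predict the next configuration."""
    x, y, th = state
    dx = {"push_left": -0.05, "push_right": 0.05}.get(action, 0.0)
    dy = {"push_up": 0.05, "push_down": -0.05}.get(action, 0.0)
    return (x + dx, y + dy, th)

def cost(state):
    """Stand-in cost: distance from the workspace center."""
    x, y, _ = state
    return math.hypot(x - 0.5, y - 0.5)

def rrt_plan(start, bounds, iters=500):
    """Grow a tree by forward-simulating each action from the node nearest a
    random sample; return the action sequence to the lowest-cost node."""
    tree = {start: (None, None)}  # state -> (parent, action)
    for _ in range(iters):
        target = sample_config(bounds)
        nearest = min(tree, key=lambda s: math.hypot(s[0] - target[0], s[1] - target[1]))
        for a in ACTIONS:  # add a leaf per object-action pair
            child = forward_model(nearest, a)
            tree.setdefault(child, (nearest, a))
    best = min(tree, key=cost)  # best *reachable* state under the cost
    plan = []
    while tree[best][0] is not None:
        parent, a = tree[best]
        plan.append(a)
        best = parent
    return list(reversed(plan))

print(rrt_plan((0.1, 0.1, 0.0), bounds=((0, 1), (0, 1))))
```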
Manipulation under uncertainty
Initial state
● The robot begins with a workspace containing three unfamiliar objects
● The robot is provided a cost function expressing the following desiderata (a sketch follows this list):
  • Orthogonality
  • Circumscribed area
  • Distance from edge of workspace
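A minimal sketch of a composite cost over object poses, assuming each object is an (x, y, theta) pose in a unit workspace. The three terms and their weights are illustrative; the poster does not give the exact formulas.

```python
import math

def orthogonality(objs):
    """Penalize orientations that deviate from multiples of 90 degrees."""
    return sum(min(th % (math.pi / 2), math.pi / 2 - th % (math.pi / 2))
               for _, _, th in objs)

def circumscribed_area(objs):
    """Penalize the area of the axis-aligned box enclosing all objects."""
    xs = [x for x, _, _ in objs]
    ys = [y for _, y, _ in objs]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def edge_distance(objs):
    """Penalize objects near the boundary of the unit-square workspace."""
    return sum(1.0 - min(x, 1 - x, y, 1 - y) for x, y, _ in objs)

def total_cost(objs, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three desiderata (weights are assumptions)."""
    return (w[0] * orthogonality(objs)
            + w[1] * circumscribed_area(objs)
            + w[2] * edge_distance(objs))

print(total_cost([(0.4, 0.5, 0.1), (0.6, 0.5, 0.0), (0.5, 0.6, 1.6)]))
```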
Solution
● All objects pushed to orthogonal orientations in the center of the workspace
● All paths free of collisions and redundant actions
● The robot monitored error and replanned as necessary (a replanning loop is sketched below)
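A minimal sketch of the execute-monitor-replan loop, assuming hypothetical helpers: observe() returns the measured state, execute(a) runs one action primitive, predict(s, a) is the learned forward model, rrt_plan(s) replans from a state, and error(s1, s2) measures state discrepancy. The tolerance value is illustrative.

```python
ERROR_TOL = 0.05  # replan when prediction error exceeds this (assumed value)

def run(start, rrt_plan, predict, execute, observe, error):
    """Execute a plan, monitoring prediction error and replanning as needed."""
    state, plan = start, rrt_plan(start)
    while plan:
        action = plan.pop(0)
        expected = predict(state, action)  # what the learned dynamics predict
        execute(action)
        state = observe()                  # what actually happened
        if error(state, expected) > ERROR_TOL:
            plan = rrt_plan(state)         # replan from the observed state
    return state
```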
Generalization to other manipulation tasks
Appropriate for tasks naturally expressed as the optimization of a cost function:
● Arranging clutter on a surface
● Multiple-object placement
● Table setting
Model Learning
Goal: discover the dynamics of each object class over a set of action primitives (a learning sketch follows)
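A minimal sketch of learning per-class dynamics by linear regression, assuming training tuples (state, action, next_state) collected for one object class and a one-hot action encoding. The linear model and data layout are assumptions, not the poster's actual method.

```python
import numpy as np

def fit_dynamics(transitions):
    """Fit next_state ~ W.T @ [state, one_hot(action), 1] by least squares."""
    X = np.array([np.concatenate([s, a, [1.0]]) for s, a, _ in transitions])
    Y = np.array([s_next for _, _, s_next in transitions])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def predict(W, state, action):
    """Forward-simulate one action using the fitted linear model."""
    return np.concatenate([state, action, [1.0]]) @ W

# Toy data: pushing right moves the object +0.05 in x.
data = [(np.array([x, 0.5]), np.array([1.0, 0.0]), np.array([x + 0.05, 0.5]))
        for x in np.linspace(0.1, 0.9, 9)]
W = fit_dynamics(data)
print(predict(W, np.array([0.3, 0.5]), np.array([1.0, 0.0])))  # ~[0.35, 0.5]
```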
Advantages
• Unlike conventional single-shot methods, does not require user-specified goals
• Always guaranteed to return a reachable solution
• Favorable anytime characteristics
• Feasible for real-time planning in high-DOF problems
Similar to the Reinforcement Learning formalism, but trades path optimality for real-time feasibility:
• RL can require many full passes through the configuration space to converge to an optimal policy
• Handling continuous features requires discretization, tiling, or other approaches
• RL is better suited to problems with a sparse reward landscape, but the optimization objective offers a gradient (like a shaping reward) that allows fast heuristic search with RRT