Motivation:

Robots face two challenges in natural environments:
● Underspecified goals: no human to specify the exact goal configuration
● Uncertain dynamics: the effects of the robot's actions on novel objects are uncertain

Approach:

● For underspecified goals:
  • Pose the task as a constrained optimization problem over a set of reward or cost terms.
  • These terms can be defined manually or modeled from humans.
● For uncertain dynamics:
  • Quickly approximate the dynamics for a set of actions
  • Plan efficiently using sampling-based techniques

Our algorithm:

● Searches in object configuration space using Rapidly-exploring Random Trees (RRT)
● Adds leaves to the search tree by forward-simulating the learned dynamics for each object-action pair
● Uses a direct gradient-descent heuristic to quickly search the optimization landscape
● Returns a plan from the starting state to the lowest-cost reachable state, given the cost function

Manipulation under uncertainty

Initial state
● The robot begins with a workspace containing three unfamiliar objects
● The robot is provided a cost function expressing the following desiderata:
  • Orthogonality
  • Circumscribed area
  • Distance from the edge of the workspace

Solution
● All objects pushed to orthogonal orientations in the center of the workspace
● All paths free of collisions and redundant actions
● The robot monitored error and replanned as necessary

Generalization to other manipulation tasks

Appropriate for tasks naturally expressed as optimization of a cost function:
● Arranging clutter on a surface
● Multiple object placement
● Table setting

Model Learning

Goal: discover the dynamics of each object class over a set of action primitives

Advantages

● Unlike conventional single-shot methods, does not require user-specified goals
● Always guaranteed to return a reachable solution
● Favorable anytime characteristics
● Feasible for real-time planning in high-DOF problems

Similar to the Reinforcement Learning formalism, but trades path optimality for real-time feasibility:
● RL can require many full passes through the configuration space to converge to an optimal policy
● Handling continuous features requires discretization, tiling, or other approaches
● RL is better suited to problems with a sparse reward landscape, but the optimization terms offer a gradient (like a shaping reward), which allows fast heuristic search with RRT
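The planning loop described above can be sketched as follows. This is a minimal illustrative sketch, not the actual implementation: `simulate` stands in for the learned per-object dynamics model, `cost` for the user-supplied cost terms, and the push primitives, workspace bounds, and parameter values are all assumptions.

```python
import math
import random

# Illustrative push primitives: small displacements applied to one object.
ACTIONS = [(0.2, 0.0), (-0.2, 0.0), (0.0, 0.2), (0.0, -0.2)]

def simulate(state, obj, action):
    """Stand-in for the learned forward model: pushing displaces one object."""
    new = list(state)
    x, y = new[obj]
    new[obj] = (x + action[0], y + action[1])
    return tuple(new)

def cost(state):
    """Stand-in cost term: summed squared distance from the workspace center."""
    return sum(x * x + y * y for x, y in state)

def dist(a, b):
    """Euclidean distance between two object configurations."""
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                         for (ax, ay), (bx, by) in zip(a, b)))

def plan(start, iters=300, seed=0):
    """Grow an RRT in object configuration space and return the plan
    (sequence of (object, action) pairs) to the lowest-cost node found."""
    rng = random.Random(seed)
    tree = {start: None}  # node -> (parent, obj, action), None for the root
    for _ in range(iters):
        # Sample a random configuration to pick which node to expand.
        sample = tuple((rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in start)
        nearest = min(tree, key=lambda s: dist(s, sample))
        # Forward-simulate every object-action pair from that node and
        # greedily keep the lowest-cost successor (the descent heuristic).
        succs = [(simulate(nearest, o, a), o, a)
                 for o in range(len(start)) for a in ACTIONS]
        best, obj, act = min(succs, key=lambda t: cost(t[0]))
        if best not in tree:
            tree[best] = (nearest, obj, act)
    # Walk parent pointers back from the lowest-cost reachable node.
    goal = min(tree, key=cost)
    path, node = [], goal
    while tree[node] is not None:
        parent, obj, act = tree[node]
        path.append((obj, act))
        node = parent
    return list(reversed(path)), goal
```

Because the tree always retains its best node so far, the planner has the anytime character noted above: interrupting it early still yields a plan to the lowest-cost state reached by that point.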