Papers
arxiv:2604.15483

π_{0.7}: a Steerable Generalist Robotic Foundation Model with Emergent Capabilities

Published on Apr 24

Abstract

A new robotic foundation model, π₀.₇, achieves strong out-of-the-box performance across a wide range of scenarios by conditioning on diverse context during training, enabling zero-shot cross-embodiment generalization and the execution of challenging tasks.

AI-generated summary

We present π_{0.7}, a new robotic foundation model that enables strong out-of-the-box performance in a wide range of scenarios. π_{0.7} follows diverse language instructions in unseen environments, including multi-stage tasks with various kitchen appliances; generalizes zero-shot across embodiments, for example enabling a robot to fold laundry without having seen the task before; and performs challenging tasks, such as operating an espresso machine out of the box, at a level of performance that matches much more specialized RL-finetuned models. The main idea behind π_{0.7} is diverse context conditioning during training. This conditioning information, contained in the prompt, makes it possible to steer the model precisely to perform many tasks with different strategies. The model is conditioned not only on a language command that describes what it should do, but also on additional multimodal information that describes the manner or strategy in which it should do it, including metadata about task performance and subgoal images. This enables π_{0.7} to learn from very diverse data, including demonstrations, potentially suboptimal (autonomous) data including failures, and data from non-robot sources. Our experiments evaluate π_{0.7} on numerous tasks across multiple robot platforms, covering tasks that require speed and dexterity, language following, and compositional task generalization.
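The abstract describes conditioning the policy not only on a language command but also on metadata about task performance and subgoal images. As a rough illustration (the paper's actual prompt format is not given here; all names below are hypothetical), such a multimodal conditioning context might be structured like this:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of a multimodal conditioning context of the kind the
# abstract describes: a language instruction plus optional metadata about
# data quality/task performance and subgoal images. Field and token names
# are illustrative assumptions, not taken from the paper.

@dataclass
class ContextPrompt:
    instruction: str                        # what the robot should do
    quality_label: Optional[str] = None     # e.g. "expert" demo vs. suboptimal autonomous data
    subgoal_images: List[bytes] = field(default_factory=list)  # intermediate goal frames

    def to_prefix(self) -> List[str]:
        """Flatten the conditioning into a token-like prompt prefix (toy example)."""
        parts = [f"<task>{self.instruction}</task>"]
        if self.quality_label is not None:
            parts.append(f"<quality>{self.quality_label}</quality>")
        # Each subgoal image would be embedded separately; here we just mark slots.
        parts.extend(f"<subgoal:{i}>" for i in range(len(self.subgoal_images)))
        return parts

# Example: steer the model toward an expert-style strategy with one subgoal.
prompt = ContextPrompt("fold the laundry",
                       quality_label="expert",
                       subgoal_images=[b"<jpeg bytes>"])
print(prompt.to_prefix())
```

Tagging data with a quality label in this way is one plausible mechanism for the claim that the model can learn from suboptimal and failed trajectories while still being steered toward high-performance behavior at inference time.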


Get this paper in your agent:

hf papers read 2604.15483
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
