MolmoAct Data Mixture
A collection of all datasets for the MolmoAct (Multimodal Open Language Model for Action) release (4 items).
The preview table has four columns: `image` (256×256 px), `wrist` (256×256 px wrist-camera view), `conversations` (dict), and `annotation` (string, 8–51 characters, nullable ⌀).

Every `conversations` cell has the form `{"from": ["human", "gpt"], "value": [...]}`, and its `value` is truncated in the preview to one of two prompt templates for the task "pick up the book and place it in the back compartment of the caddy":

- **Action prompt** (`annotation` is ⌀): "The task is pick up the book and place it in the back compartment of the caddy. What is the action that the robot should take. To figure out the action that the robot should take to pick up the book and place it in the back compartment of the caddy, let's thin…"
- **Trajectory prompt** (`annotation` holds waypoints): "The task is pick up the book and place it in the back compartment of the caddy. Notice that the trajectory of the end effector is annotated on the first image. Based on the trajectory annotated on the image, along with other images from different camera views …"

Rows alternate between the two templates: each trajectory row is preceded by an action-prompt row whose annotation is ⌀. The trajectory annotations, in preview order:

| # | annotation (end-effector waypoints) |
|---|---|
| 1 | [[145,42],[65,77],[79,153],[171,54],[174,91]] |
| 2 | [[150,43],[62,12],[97,155],[171,54],[174,91]] |
| 3 | [[143,42],[65,15],[97,155],[171,54],[174,91]] |
| 4 | [[137,42],[63,15],[84,150],[170,57],[174,91]] |
| 5 | [[137,43],[63,15],[84,150],[170,57],[174,91]] |
| 6 | [[135,43],[66,15],[97,157],[170,57],[174,91]] |
| 7 | [[131,44],[93,87],[97,157],[170,57],[174,91]] |
| 8 | [[132,42],[100,84],[97,97],[174,59],[174,91]] |
| 9 | [[135,43],[100,84],[97,97],[174,59],[174,91]] |
| 10 | [[137,40],[78,137],[104,142],[174,59],[174,91]] |
| 11 | [[140,42],[89,125],[104,142],[174,59],[174,91]] |
| 12 | [[132,40],[84,125],[115,74],[170,57],[174,91]] |
| 13 | [[131,40],[84,125],[115,74],[170,57],[174,91]] |
| 14 | [[124,44],[78,135],[97,87],[170,57],[174,91]] |
| 15 | [[127,43],[78,146],[97,87],[170,57],[174,91]] |
| 16 | [[127,77],[71,145],[104,84],[174,56],[174,91]] |
| 17 | [[123,81],[71,145],[104,84],[174,56],[174,91]] |
| 18 | [[124,55],[73,148],[119,84],[174,56],[174,91]] |
| 19 | [[123,77],[76,155],[119,84],[174,56],[174,91]] |
| 20 | [[120,54],[78,152],[93,88],[170,56],[174,91]] |
| 21 | [[115,52],[78,152],[93,88],[170,56],[174,91]] |
| 22 | [[79,84],[79,153],[120,84],[170,56],[174,91]] |
| 23 | [[108,52],[78,161],[120,84],[170,56],[174,91]] |
| 24 | [[120,87],[79,157],[128,100],[170,59],[174,91]] |
| 25 | [[119,55],[79,157],[128,100],[170,59],[174,91]] |
| 26 | [[105,51],[79,161],[119,84],[170,59],[174,91]] |
| 27 | [[101,56],[79,157],[119,84],[170,59],[174,91]] |
| 28 | [[101,54],[79,66],[124,87],[174,54],[174,91]] |
| 29 | [[102,57],[79,66],[124,87],[174,54],[174,91]] |
| 30 | [[102,59],[78,164],[135,88],[174,54],[174,91]] |
| 31 | [[102,59],[78,164],[135,88],[174,54],[174,91]] |
| 32 | [[102,59],[78,161],[127,77],[174,56],[174,91]] |
| 33 | [[102,60],[78,161],[127,77],[174,56],[174,91]] |
| 34 | [[77,78],[78,161],[135,86],[174,56],[174,91]] |
| 35 | [[77,78],[73,164],[135,86],[174,56],[174,91]] |
| 36 | [[77,77],[78,71],[119,76],[170,59],[174,91]] |
| 37 | [[77,77],[78,71],[119,76],[170,59],[174,91]] |
| 38 | [[101,65],[78,71],[140,68],[170,59],[174,91]] |
| 39 | [[99,66],[73,71],[140,68],[170,59],[174,91]] |
| 40 | [[99,71],[73,71],[135,65],[170,57],[174,91]] |
| 41 | [[71,77],[73,71],[135,65],[170,57],[174,91]] |
| 42 | [[65,7],[78,71],[145,84],[170,57],[174,91]] |
| 43 | [[68,7],[78,71],[145,84],[170,57],[174,91]] |
| 44 | [[97,68],[77,71],[145,81],[170,66],[174,91]] |
| 45 | [[78,110],[77,71],[145,81],[170,66],[174,91]] |
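Since each annotation is a plain JSON array of `[x, y]` pairs, it can be decoded with the standard `json` module. A minimal sketch (the helper name `parse_trajectory` is hypothetical, not part of the dataset's tooling):

```python
import json


def parse_trajectory(annotation):
    """Decode an annotation string like "[[145,42],[65,77],...]" into (x, y) waypoint tuples.

    Action-prompt rows carry a null annotation, which maps to an empty trajectory.
    """
    if annotation is None:
        return []
    return [tuple(point) for point in json.loads(annotation)]


# Example using the first trajectory annotation from the preview:
waypoints = parse_trajectory("[[145,42],[65,77],[79,153],[171,54],[174,91]]")
print(len(waypoints))  # 5
print(waypoints[0])    # (145, 42)
```

Decoding to tuples makes the waypoints hashable and easy to draw or compare; null-annotation rows come back as an empty list rather than raising.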