# PushT Dataset
Video + action data from the [gym-pusht](https://github.com/huggingface/gym-pusht) environment. Three splits:

| Split | Episodes | Avg steps/ep | Hours | Description |
|-------|----------|--------------|-------|-------------|
| `smooth/` | ~38,900 | 300 | ~324 hrs | Random smooth movement (Ornstein-Uhlenbeck process) |
| `goal/` | ~8,700 | 298 | ~73 hrs | Heuristic goal-directed policy (keypoint matching) |
| `expert/` | ~21,800 | 228 | ~138 hrs | Pretrained diffusion policy, ~74% success rate |

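The Hours column follows from episodes × average steps at 10 Hz (0.1 s per step). A minimal sanity check using the approximate counts above:

```python
# Sanity-check the Hours column: hours = episodes * avg_steps * 0.1 s / 3600.
# Episode counts are approximate, so the results are approximate too.
splits = {"smooth": (38_900, 300), "goal": (8_700, 298), "expert": (21_800, 228)}
for name, (episodes, avg_steps) in splits.items():
    hours = episodes * avg_steps * 0.1 / 3600
    print(f"{name}: ~{hours:.0f} hrs")
```

The `goal/` split lands near 72-73 hrs depending on how the approximate episode count rounds.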
## File format

Each `.npz` file contains multiple episodes. Load with:

```python
import numpy as np

data = np.load("smooth/smooth_0000_00.npz", allow_pickle=True)
n = int(data["num_trajectories"])  # number of episodes in this file
for i in range(n):
    frames = data[f"frames_{i}"]   # (T+1, 96, 96, 3) uint8 — RGB pixel observations
    actions = data[f"actions_{i}"]  # (T, 2) float32 — agent target position [x, y] in [0, 512]
    rewards = data[f"rewards_{i}"]  # (T,) float32 — coverage ratio (solved at >= 0.95)
    policy = str(data[f"policy_{i}"])  # "smooth", "goal", or "expert"
```
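To iterate over every shard in a split, a glob over the split directory works; note that file names other than the single example above are an assumption, so adjust the pattern to your local copy:

```python
import glob
import numpy as np

# Assumed layout: one folder per split ("smooth/", "goal/", "expert/")
# containing .npz shards, each holding several episodes.
for path in sorted(glob.glob("smooth/*.npz")):
    data = np.load(path, allow_pickle=True)
    n = int(data["num_trajectories"])
    print(f"{path}: {n} episodes")
```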

- `frames` has one more entry than `actions` (the initial frame comes before the first action)
- `frames[t]` is the observation *before* `actions[t]` is taken
- `frames[t+1]` is the observation *after* `actions[t]` is taken
- The environment runs at 10 Hz (0.1 s per step)
- An episode is "solved" when `rewards[t] >= 0.95` (the T-block covers >95% of the goal)
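Given the alignment above, (observation, action, next observation) transitions and the solved check can be assembled as follows. This is a minimal sketch; the synthetic zero arrays stand in for one loaded episode:

```python
import numpy as np

# Stand-in for one loaded episode: frames has one more entry than actions.
T = 5
frames = np.zeros((T + 1, 96, 96, 3), dtype=np.uint8)
actions = np.zeros((T, 2), dtype=np.float32)
rewards = np.linspace(0.0, 0.96, T).astype(np.float32)

# frames[t] is the observation before actions[t]; frames[t+1] is after it.
transitions = [(frames[t], actions[t], frames[t + 1]) for t in range(T)]

# Solved once coverage reaches 0.95; find the first such step.
solved = bool((rewards >= 0.95).any())
first_solved = int(np.argmax(rewards >= 0.95)) if solved else -1
print(solved, first_solved)  # → True 4
```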