AGORA Planner v1 – ANIMA Multi-Robot Task Planner

Part of the ANIMA Robotics Suite by Robot Flow Labs.

Overview

AGORA is a Wave-5 coordination module for multi-robot collaboration, built around a unified STEM memory framework. This checkpoint is a LoRA fine-tune of Qwen2.5-1.5B-Instruct, trained to assign tasks to heterogeneous robot teams based on capabilities, battery level, location, and failure history.
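For a concrete sense of the problem, here is a purely illustrative sketch of the kind of assignment the planner handles. Field names and structure are hypothetical; the real schema is defined by scripts/generate_planning_data.py.

robots = [
    {"id": "bot_a", "skills": ["manipulation"], "battery": 0.90, "failures": 0},
    {"id": "bot_b", "skills": ["navigation"],   "battery": 0.60, "failures": 1},
]
tasks = [
    {"id": "pick_object",      "requires": "manipulation"},
    {"id": "navigate_to_door", "requires": "navigation"},
]
# Expected behavior: match each task's required skill to a capable robot,
# breaking ties on battery level, location, and past failures.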

Paper

"RoboOS-NeXT: Toward Lifelong, Scalable, and Robust Multi-Robot Collaboration" (arXiv:2510.26536), Huajie Tan et al., October 2025.

Training

Parameter           Value
Base model          Qwen/Qwen2.5-1.5B-Instruct
Method              LoRA (r=16, alpha=32)
Training data       5,000 synthetic planning examples
Eval data           200 examples
Epochs              3 (834 steps)
Batch size          6 per device × 3 gradient accumulation
Learning rate       2e-4 (cosine schedule, 5% warmup)
Precision           bf16
Hardware            3× NVIDIA L4 (23 GB each)
Train loss          0.237
Train runtime       7,338 s (~2 h)
Token accuracy      92%
Format-valid rate   100%
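
For reference, the hyperparameters above correspond to a standard peft LoRA setup, roughly as sketched below. This is a minimal illustration, not the actual scripts/train_planner.py; the dataset pipeline and LoRA target modules are omitted and would need to match the training config.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
model = get_peft_model(base, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Mirrors the table: batch 6 × grad accum 3, 3 epochs, cosine LR with 5% warmup, bf16.
args = TrainingArguments(
    per_device_train_batch_size=6,
    gradient_accumulation_steps=3,
    num_train_epochs=3,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    bf16=True,
)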

Exported Formats

Format          Path                                        Size     Use case
SafeTensors     pytorch/agora_planner_v1.safetensors        2.9 GB   Fast, safe loading
PyTorch (.pth)  pytorch/agora_planner_v1.pth                2.9 GB   Training and fine-tuning
ONNX            onnx/agora_planner_v1.onnx + .onnx.data     5.8 GB   Cross-platform inference
TensorRT FP16   tensorrt/agora_planner_v1_trt_fp16.engine   3.4 GB   Edge deployment (Jetson/L4)
TensorRT FP32   tensorrt/agora_planner_v1_trt_fp32.engine   6.7 GB   Full-precision inference
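
The ONNX and TensorRT artifacts load with their standard runtimes. The sketch below makes two assumptions worth checking: input/output tensor names depend on scripts/export_all.py, and TensorRT engines only deserialize on the GPU architecture and TensorRT version they were built with.

import onnxruntime as ort

# The .onnx.data external-weights file must sit next to the .onnx file;
# onnxruntime resolves it automatically.
session = ort.InferenceSession(
    "onnx/agora_planner_v1.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print([i.name for i in session.get_inputs()])  # inspect before wiring up generation

For the TensorRT engines:

import tensorrt as trt

# Deserialize the prebuilt FP16 engine (requires a compatible CUDA GPU).
logger = trt.Logger(trt.Logger.WARNING)
with open("tensorrt/agora_planner_v1_trt_fp16.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())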

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the planner weights and tokenizer from the pytorch/ subfolder.
model = AutoModelForCausalLM.from_pretrained(
    "ilessio-aiflowlab/project_agora",
    subfolder="pytorch",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "ilessio-aiflowlab/project_agora",
    subfolder="pytorch",
    trust_remote_code=True,
)

# Describe the robot team and tasks, then request an assignment.
prompt = """You are AGORA, a multi-robot task planner.
Given robots: bot_a (manipulator, 90% battery), bot_b (mobile base, 60% battery)
Tasks: pick_object (requires manipulation), navigate_to_door (requires navigation)
Assign each task to the best robot."""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
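
Since the base model is instruction-tuned, wrapping the request in the tokenizer's chat template (a standard transformers call, shown here as an optional variant) typically produces better-formed assignments than raw text completion:

messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))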

Files

├── README.md                              # This file
├── pytorch/
│   ├── agora_planner_v1.safetensors       # SafeTensors (2.9 GB)
│   ├── agora_planner_v1.pth               # PyTorch weights (2.9 GB)
│   ├── config.json                        # Model config
│   ├── tokenizer.json                     # Tokenizer
│   └── tokenizer_config.json
├── onnx/
│   ├── agora_planner_v1.onnx              # ONNX model (4 MB + external data)
│   └── agora_planner_v1.onnx.data         # ONNX external weights (5.8 GB)
├── tensorrt/
│   ├── agora_planner_v1_trt_fp16.engine   # TRT FP16 (3.4 GB)
│   └── agora_planner_v1_trt_fp32.engine   # TRT FP32 (6.7 GB)
├── configs/
│   ├── paper.toml                         # Paper-aligned config
│   └── training.toml                      # Training config
├── logs/
│   ├── training_metrics.json              # Final metrics
│   ├── planning_train.jsonl               # Training data (5,000 examples)
│   └── planning_eval.jsonl                # Eval data (200 examples)
└── scripts/
    ├── train_planner.py                   # LoRA training script
    ├── eval_planner.py                    # Evaluation script
    ├── generate_planning_data.py          # Synthetic data generator
    └── export_all.py                      # Export pipeline

License

Apache 2.0 – Robot Flow Labs / AIFLOW LABS LIMITED
