OpenVLA Fine-tuned Models - Phase Repository
Overview
This repository contains references to two fine-tuned OpenVLA models:
- LAMP Search Model: kavinrajkrupsurge/lampe-sim-data-openvla
- Keyboard Controlled Model: kavinrajkrupsurge/openvla-keyboard-controlled-finetuned
Models
LAMP Search Model
- Repository: kavinrajkrupsurge/lampe-sim-data-openvla
- Dataset: lampe_search_dataset/all
- Action Space: 4-DoF (Base, Joint2, Joint3, Joint4)
- Use Case: Search tasks in room environments
Keyboard Controlled Model
- Repository: kavinrajkrupsurge/openvla-keyboard-controlled-finetuned
- Datasets: keyboard_controlled_dataset/samples_000_024 + samples_025_049
- Action Space: 4-DoF (Base, Joint2, Joint3, Joint4)
- Use Case: General keyboard-controlled robot manipulation
Base Model
Both models are fine-tuned from: openvla/openvla-7b
Fine-tuning Configuration
- Method: LoRA (Low-Rank Adaptation)
- Batch Size: 4 (with gradient accumulation: 4)
- Effective Batch Size: 16
- Learning Rate: 5e-4
- LoRA Rank: 32
- LoRA Dropout: 0.0
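The hyperparameters above can be summarized in a small Python sketch; the numbers come directly from the list, while the variable names are illustrative:

```python
# Hyperparameters from the fine-tuning configuration above (names are illustrative).
batch_size = 4            # per-device batch size
grad_accum_steps = 4      # gradient accumulation steps
learning_rate = 5e-4
lora_rank = 32            # LoRA rank r
lora_dropout = 0.0

# The effective batch size is the per-step batch multiplied by accumulation steps.
effective_batch_size = batch_size * grad_accum_steps  # 4 * 4 = 16
```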
Usage
Please refer to the individual model repositories for usage examples and loading instructions.
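As a rough sketch only (the authoritative loading code lives in each model repository), OpenVLA checkpoints are commonly loaded through the `transformers` auto classes with `trust_remote_code` enabled. The helper below is a hypothetical wrapper, not code from either repo:

```python
# Repository IDs from this index.
LAMP_REPO = "kavinrajkrupsurge/lampe-sim-data-openvla"
KEYBOARD_REPO = "kavinrajkrupsurge/openvla-keyboard-controlled-finetuned"

def load_vla(repo_id):
    """Hypothetical helper: load an OpenVLA processor and model from the Hub.

    Downloads weights on first call; a 7B model needs substantial GPU memory.
    """
    # Imported lazily so the module can be inspected without transformers installed.
    from transformers import AutoModelForVision2Seq, AutoProcessor

    processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
    model = AutoModelForVision2Seq.from_pretrained(repo_id, trust_remote_code=True)
    return processor, model
```

See each model card for the exact prompt format and any `unnorm_key` expected at inference time.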
Training Scripts
Both model repositories include complete training scripts:
- LAMP Search: See LAMP Search Model Repo for training scripts
- Keyboard Controlled: See Keyboard Model Repo for training scripts
Each repository contains:
- Dataset conversion scripts
- Fine-tuning scripts
- Dataset configuration files
- Complete instructions for reproducing the training
Notes
- Both models use absolute joint positions (not deltas)
- Action normalization is handled automatically using dataset_statistics.json
- Each model has its own dataset statistics file
- All training was done with LoRA (Low-Rank Adaptation) for efficient fine-tuning
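Because predicted actions are normalized against per-dataset statistics, de-normalization maps them back to absolute joint positions. A minimal sketch, assuming actions are scaled to [-1, 1] against per-dimension low/high bounds (the actual keys and logic live in each repo's dataset_statistics.json):

```python
def unnormalize(norm_action, low, high):
    """Map a normalized action in [-1, 1] back to its original [low, high] range.

    `low` and `high` stand in for the per-dimension bounds stored in
    dataset_statistics.json (hypothetical names; check the actual file).
    """
    return 0.5 * (norm_action + 1.0) * (high - low) + low

# Example: a normalized value of 0.0 maps to the midpoint of the range.
print(unnormalize(0.0, -1.0, 3.0))  # 1.0
```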