OpenVLA Fine-tuned Models - Phase Repository

Overview

This repository contains references to two fine-tuned OpenVLA models:

  1. LAMP Search Model: kavinrajkrupsurge/lampe-sim-data-openvla
  2. Keyboard Controlled Model: kavinrajkrupsurge/openvla-keyboard-controlled-finetuned

Models

LAMP Search Model

Repository: kavinrajkrupsurge/lampe-sim-data-openvla

Keyboard Controlled Model

Repository: kavinrajkrupsurge/openvla-keyboard-controlled-finetuned

Base Model

Both models are fine-tuned from: openvla/openvla-7b

Fine-tuning Configuration

  • Method: LoRA (Low-Rank Adaptation)
  • Batch Size: 4 (with gradient accumulation: 4)
  • Effective Batch Size: 16
  • Learning Rate: 5e-4
  • LoRA Rank: 32
  • LoRA Dropout: 0.0
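The effective batch size in the configuration above follows from the per-device batch size and gradient accumulation. A minimal sketch of the arithmetic (variable names are illustrative, not from the training scripts):

```python
# Values taken from the fine-tuning configuration above.
per_device_batch_size = 4
grad_accum_steps = 4

# Gradients are accumulated over grad_accum_steps micro-batches before
# each optimizer step, so one parameter update effectively sees the
# combined samples of all accumulated micro-batches.
effective_batch_size = per_device_batch_size * grad_accum_steps
print(effective_batch_size)  # 16
```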

Usage

Please refer to the individual model repositories for usage examples and loading instructions.
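As a loading sketch in the meantime: the calls below follow the openvla/openvla-7b base model card (AutoProcessor/AutoModelForVision2Seq with trust_remote_code, and predict_action with an unnorm_key argument from the base model's custom code); the build_prompt helper is a hypothetical convenience, not part of either repository.

```python
def load_model(repo_id="kavinrajkrupsurge/lampe-sim-data-openvla"):
    # Heavy imports kept inside the function so the prompt helper below
    # can be used without torch/transformers installed.
    import torch
    from transformers import AutoModelForVision2Seq, AutoProcessor

    processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
    vla = AutoModelForVision2Seq.from_pretrained(
        repo_id, torch_dtype=torch.bfloat16, trust_remote_code=True
    ).to("cuda")
    return processor, vla

def build_prompt(instruction):
    # Prompt format documented on the openvla/openvla-7b base model card.
    return f"In: What action should the robot take to {instruction.lower()}?\nOut:"

# Typical inference (per the base card): inputs = processor(build_prompt(...), image),
# then vla.predict_action(**inputs, unnorm_key=..., do_sample=False) returns an
# absolute joint-position action, de-normalized via dataset_statistics.json.
```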

Training Scripts

Both model repositories include complete training scripts. Each repository contains:

  • Dataset conversion scripts
  • Fine-tuning scripts
  • Dataset configuration files
  • Complete instructions for reproducing the training

Notes

  • Both models use absolute joint positions (not deltas)
  • Action normalization is handled automatically using dataset_statistics.json
  • Each model has its own dataset statistics file
  • All training was done with LoRA (Low-Rank Adaptation) for efficient fine-tuning
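To make the normalization note concrete, here is a minimal sketch of mapping a model output in [-1, 1] back to absolute joint positions. The bounds are illustrative placeholders, not the real per-dimension statistics stored in dataset_statistics.json (which may use quantile-based bounds rather than simple min/max):

```python
def unnormalize(norm_action, low, high):
    # Map each value from [-1, 1] back into its [low, high] joint range.
    return [0.5 * (a + 1.0) * (hi - lo) + lo
            for a, lo, hi in zip(norm_action, low, high)]

low = [0.0, 0.0]   # hypothetical per-joint minima
high = [1.0, 2.0]  # hypothetical per-joint maxima
print(unnormalize([0.0, 1.0], low, high))  # [0.5, 2.0]
```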