
STD-FFT (Standard to Diverse Full Fine-Tuning) Model

This repository contains a model trained using STD-FFT methodology on the Tesslate/UIGEN-X-FULL dataset.

Repository Structure

sanjitkverma/UIGEN-X-4B-STD-FFT/
├── final_model/         # Final trained model (also at root for easy access)
├── checkpoints/         # Training checkpoints
│   ├── step-200/        # Checkpoint at step 200
│   ├── step-400/        # Checkpoint at step 400
│   ├── epoch-0.5/       # Checkpoint at epoch 0.5
│   └── epoch-1.0/       # Checkpoint at epoch 1.0
└── inference_samples/   # Generated samples during training
    ├── step_0/          # Initial samples (before training)
    ├── step_200/        # Samples at step 200
    └── step_400/        # Samples at step 400

Training Configuration

  • Base Model: unsloth/Qwen3-4B-Thinking-2507
  • Dataset: Tesslate/UIGEN-X-FULL
  • Training Method: STD-FFT (Standard to Diverse with Full Fine-Tuning)
  • Max Sequence Length: 11000
  • Batch Size: 1 (with 8x gradient accumulation)
  • Checkpoints: Every 200 steps and 0.5 epochs

Usage

To use the final model:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("sanjitkverma/UIGEN-X-4B-STD-FFT")
tokenizer = AutoTokenizer.from_pretrained("sanjitkverma/UIGEN-X-4B-STD-FFT")

To use a specific checkpoint:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load checkpoint at step 400
model = AutoModelForCausalLM.from_pretrained(
    "sanjitkverma/UIGEN-X-4B-STD-FFT",
    subfolder="checkpoints/step-400"
)
tokenizer = AutoTokenizer.from_pretrained(
    "sanjitkverma/UIGEN-X-4B-STD-FFT",
    subfolder="checkpoints/step-400"
)
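Once loaded, the model can be used for generation in the standard transformers way. The snippet below is a minimal sketch, not a tested recipe: the prompt text and generation parameters (`max_new_tokens`, `temperature`) are illustrative, and it assumes the chat template inherited from the base model is available via the tokenizer.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "sanjitkverma/UIGEN-X-4B-STD-FFT",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("sanjitkverma/UIGEN-X-4B-STD-FFT")

# Illustrative prompt; the chat template comes from the base model.
messages = [{"role": "user", "content": "Generate a simple login form in HTML."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```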

Training Details

Training started: 2025-08-18T01:44:18.384004

This model was trained using Modal cloud infrastructure with STD (Standard to Diverse) loss scaling for improved generalization on UI generation tasks.
