STD-FFT (Standard to Diverse Full Fine-Tuning) Model
This repository contains a model trained with the STD-FFT methodology on the Tesslate/UIGEN-X-FULL dataset.
Repository Structure
sanjitkverma/UIGEN-X-4B-STD-FFT/
├── final_model/           # Final trained model (also at root for easy access)
├── checkpoints/           # Training checkpoints
│   ├── step-200/          # Checkpoint at step 200
│   ├── step-400/          # Checkpoint at step 400
│   ├── epoch-0.5/         # Checkpoint at epoch 0.5
│   └── epoch-1.0/         # Checkpoint at epoch 1.0
└── inference_samples/     # Generated samples during training
    ├── step_0/            # Initial samples (before training)
    ├── step_200/          # Samples at step 200
    └── step_400/          # Samples at step 400
Training Configuration
- Base Model: unsloth/Qwen3-4B-Thinking-2507
- Dataset: Tesslate/UIGEN-X-FULL
- Training Method: STD-FFT (Standard to Diverse with Full Fine-Tuning)
- Max Sequence Length: 11000
- Batch Size: 1 (with 8x gradient accumulation)
- Checkpoints: Every 200 steps and 0.5 epochs
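The checkpoint cadence implied by the configuration above can be sketched in plain Python. The dataset size used here is a hypothetical placeholder, since the card does not state it; the batch size, accumulation factor, and save intervals mirror the settings listed above.

```python
def checkpoint_steps(num_examples, batch_size=1, grad_accum=8,
                     step_interval=200, epoch_interval=0.5, epochs=1):
    """Return the optimizer steps at which a checkpoint would be saved.

    With batch_size=1 and grad_accum=8, the effective batch size is 8,
    so one optimizer step consumes 8 examples.
    """
    steps_per_epoch = num_examples // (batch_size * grad_accum)
    total_steps = steps_per_epoch * epochs
    saves = set()
    # Step-interval checkpoints (step-200, step-400, ...)
    for s in range(step_interval, total_steps + 1, step_interval):
        saves.add(s)
    # Epoch-fraction checkpoints (epoch-0.5, epoch-1.0, ...)
    frac = epoch_interval
    while frac <= epochs:
        saves.add(round(frac * steps_per_epoch))
        frac += epoch_interval
    return sorted(saves)

# Hypothetical example: 8,000 training examples, one epoch
print(checkpoint_steps(8000))  # [200, 400, 500, 600, 800, 1000]
```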
Usage
To use the final model:
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("sanjitkverma/UIGEN-X-4B-STD-FFT")
tokenizer = AutoTokenizer.from_pretrained("sanjitkverma/UIGEN-X-4B-STD-FFT")
To use a specific checkpoint:
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load checkpoint at step 400
model = AutoModelForCausalLM.from_pretrained(
"sanjitkverma/UIGEN-X-4B-STD-FFT",
subfolder="checkpoints/step-400"
)
tokenizer = AutoTokenizer.from_pretrained(
"sanjitkverma/UIGEN-X-4B-STD-FFT",
subfolder="checkpoints/step-400"
)
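If you want to load whichever step checkpoint is most recent, the folder names above can be parsed directly. This is a small hypothetical helper, not part of the repository; the folder list mirrors the repository structure shown earlier.

```python
def latest_step_checkpoint(folders):
    """Pick the step-* folder with the highest step number, if any."""
    steps = [(int(f.split("-")[1]), f) for f in folders if f.startswith("step-")]
    return max(steps)[1] if steps else None

# Folder names as listed in the repository structure above
folders = ["step-200", "step-400", "epoch-0.5", "epoch-1.0"]
print(latest_step_checkpoint(folders))  # step-400
```

The returned name can then be passed to from_pretrained as subfolder=f"checkpoints/{name}".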
Training Details
Training started: 2025-08-18T01:44:18
This model was trained using Modal cloud infrastructure with STD (Standard to Diverse) loss scaling for improved generalization on UI generation tasks.