---
title: ML Model Trainer
emoji: 🤖
colorFrom: blue
colorTo: green
sdk: streamlit
sdk_version: 1.28.0
app_file: app.py
pinned: false
---
# 🤖 ML Model Trainer
A free tool that generates training scripts for fine-tuning open-source LLMs.
## Features
- **SFT** (Supervised Fine-Tuning) - Full model fine-tuning
- **DPO** (Direct Preference Optimization) - Preference alignment
- **LoRA** - Parameter-efficient fine-tuning
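To give a feel for why LoRA is parameter-efficient, here is an illustrative back-of-the-envelope calculation (this helper is not part of the app; the hidden size and layer count below are the published Llama-3.2-1B dimensions, and attaching adapters to only the query/value projections is an assumption):

```python
# Rough estimate of LoRA's parameter efficiency (illustrative only).
# Assumes rank-r adapters are attached to the query and value projection
# matrices (hidden x hidden) of every transformer layer.
def lora_trainable_params(hidden_size: int, n_layers: int, rank: int,
                          n_target_matrices: int = 2) -> int:
    """Parameters added by LoRA: each adapted weight gets two low-rank
    factors, A (hidden x r) and B (r x hidden)."""
    per_matrix = 2 * hidden_size * rank
    return per_matrix * n_target_matrices * n_layers

# Example: a Llama-3.2-1B-sized model (hidden=2048, 16 layers), rank 8
print(lora_trainable_params(2048, 16, 8))  # 1048576
```

About one million trainable parameters against roughly a billion frozen ones, which is why LoRA fits on much smaller GPUs than full SFT.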
## Supported Models
| Size | Models |
|------|--------|
| Small (0.5-1.5B) | Qwen2.5-0.5B, Qwen2.5-1.5B, Llama-3.2-1B, Phi-3-mini, Gemma-2B |
| Medium (3-7B) | Qwen2.5-7B, Llama-3.2-3B, Mistral-7B |
## Public Datasets
- HuggingFaceH4/ultrachat_200k
- openai/gsm8k
- meta-math/MATH
- Anthropic/hh-rlhf
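Datasets usually need light reformatting before training. A minimal sketch for a GSM8K-style record (the `question`/`answer` field names match `openai/gsm8k`; the prompt template itself is just an example):

```python
# Turn a GSM8K-style record into an SFT prompt/completion pair.
# Field names "question" and "answer" match the openai/gsm8k schema;
# the "Question:/Answer:" template is an arbitrary illustrative choice.
def format_gsm8k(example: dict) -> dict:
    prompt = f"Question: {example['question']}\nAnswer:"
    return {"prompt": prompt, "completion": " " + example["answer"]}

record = {"question": "What is 2 + 2?", "answer": "2 + 2 = 4. #### 4"}
pair = format_gsm8k(record)
print(pair["prompt"])
```

In practice you would apply such a function with `dataset.map(format_gsm8k)` after loading the dataset.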
## How to Use
1. Select model, training method, and dataset
2. Configure hyperparameters (epochs, learning rate, batch size)
3. Generate training script
4. Copy and run locally or on Hugging Face Jobs
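When configuring hyperparameters in step 2, it helps to sanity-check how they translate into optimizer steps. A hypothetical helper (not part of the app) under the usual assumption that effective batch size = per-device batch size × gradient-accumulation steps:

```python
import math

# Back-of-the-envelope: how epochs, batch size, and gradient accumulation
# determine the number of optimizer steps. Illustrative helper only.
def total_training_steps(dataset_size: int, epochs: int,
                         batch_size: int, grad_accum: int = 1) -> int:
    steps_per_epoch = math.ceil(dataset_size / (batch_size * grad_accum))
    return steps_per_epoch * epochs

# A 200k-example dataset (the size of ultrachat_200k), 3 epochs,
# per-device batch 4 with 8 gradient-accumulation steps:
print(total_training_steps(200_000, 3, 4, 8))  # 18750
```

Fewer steps per epoch (larger effective batch) generally pairs with a slightly higher learning rate, so it is worth rechecking this number whenever you change the batch settings.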
## Requirements
```bash
pip install transformers trl torch datasets accelerate peft
```