SFTQwen3-8B-OpenRubrics-v1v2

Qwen3-8B, full fine-tuned on the combined OpenRubrics v1 + v2 datasets (~100k examples), for evaluation-rubric generation.

Training

  • Base model: Qwen/Qwen3-8B
  • Dataset: OpenRubrics v1 + v2 (~100k examples)
  • Epochs: 1
  • Learning rate: 8e-6 (cosine schedule)
  • Effective batch size: 128 (per-device=2, gradient accumulation=8, 8 GPUs)
  • Max sequence length: 3072
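The hyperparameters above can be sketched as a training config. This is a hedged illustration in an Axolotl-style YAML layout; the actual training stack is not stated in this card, so the key names are an assumption, while the values come from the list above:

```yaml
# Sketch only: Axolotl-style keys are assumed; values match the card above.
base_model: Qwen/Qwen3-8B
sequence_len: 3072              # max sequence length
num_epochs: 1
learning_rate: 8.0e-6
lr_scheduler: cosine
micro_batch_size: 2             # per-device batch size
gradient_accumulation_steps: 8  # x 8 GPUs -> effective batch size 128
bf16: true                      # matches the BF16 tensor type of the release
```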

Task

Given a user prompt, the model generates a structured evaluation rubric in [Hard Rule] / [Principle] format for judging the quality of LLM responses to that prompt.
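The output format can be illustrated with a small parser. The sample rubric text and the `parse_rubric` helper below are hypothetical (not taken from the dataset or the model's actual outputs), but the [Hard Rule] / [Principle] tags follow the format described above:

```python
import re

def parse_rubric(text: str) -> dict:
    """Group rubric lines by their [Hard Rule] / [Principle] tag.

    Hypothetical helper: assumes each criterion sits on its own line,
    prefixed by its tag, as in the format the model is trained to emit.
    """
    rubric = {"Hard Rule": [], "Principle": []}
    for line in text.splitlines():
        m = re.match(r"\[(Hard Rule|Principle)\]\s*(.+)", line.strip())
        if m:
            rubric[m.group(1)].append(m.group(2))
    return rubric

# Illustrative rubric text (hypothetical example).
sample = """\
[Hard Rule] The response must answer in the same language as the prompt.
[Principle] Prefer concise answers that directly address the question.
[Principle] Reward citing concrete evidence over vague assertions.
"""

parsed = parse_rubric(sample)
print(len(parsed["Hard Rule"]), len(parsed["Principle"]))  # 1 2
```

Hard rules act as pass/fail constraints, while principles are graded qualities; splitting them this way makes the rubric easy to feed into a downstream judge.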

