# FF_3.1
FF_3.1 is a 2.02B-parameter GPT-2-style decoder-only language model trained from scratch through a multi-stage pipeline that combines pretraining, supervised fine-tuning, preference optimization, knowledge distillation, and instruction tuning.
## Model Details
| Attribute | Value |
|---|---|
| Architecture | GPT-2 decoder-only |
| Parameters | 2.02B |
| Hidden size (d) | 2048 |
| Attention heads (h) | 16 |
| FFN size (ff) | 8192 |
| Layers (L) | 38 |
| Context length | 2048 |
| Tokenizer | GPT-2 BPE (vocab size: 50,257) |
| Precision | bfloat16 |
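The 2.02B total is consistent with the architecture table; a rough back-of-the-envelope check (assuming GPT-2 conventions of tied input/output embeddings and learned position embeddings, and ignoring biases and LayerNorm parameters, which contribute well under 1%):

```python
# Rough parameter count from the architecture table above.
# Assumptions: tied input/output embeddings, learned position
# embeddings; biases and LayerNorms ignored (negligible).
d, ff, n_layers = 2048, 8192, 38
vocab, ctx = 50257, 2048

embeddings = vocab * d + ctx * d     # token + position embeddings
attn_per_layer = 4 * d * d           # Q, K, V, and output projections
ffn_per_layer = 2 * d * ff           # up- and down-projections
total = embeddings + n_layers * (attn_per_layer + ffn_per_layer)

print(f"{total / 1e9:.2f}B")         # -> 2.02B
```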
## Training Pipeline
FF_3.1 was trained through a 5-stage pipeline:
- Pretraining: 90B tokens on a large English corpus
- SFT: 760K + 100K examples (OpenHermes-2.5 / NuminaMath / Eurus)
- DPO: 38,863 preference pairs
- Distillation v3: 47K examples targeting the MMLU, GSM8K, and ARC benchmarks
- LoRA v4b: 10K examples for instruction-following refinement
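The DPO stage optimizes the standard Direct Preference Optimization objective over the preference pairs. A minimal per-pair sketch in pure Python (scalar sequence log-probabilities; variable names are illustrative, not taken from the training code):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * margin), where the margin
    compares the policy-vs-reference log-prob gap on the chosen response
    against the same gap on the rejected response."""
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the policy favors the chosen response more than the reference
# does, the margin is positive and the loss falls below log(2).
print(dpo_loss(-10.0, -14.0, -12.0, -13.0))
```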
## Evaluation
| Benchmark | Score |
|---|---|
| MMLU (5-shot) | 27.94% (+3.94 pp vs FF_3 baseline of 24%) |
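For reference, 5-shot MMLU evaluation prepends five solved multiple-choice examples to each test question. A sketch of the prompt construction (one shot shown for brevity; the exact template used by an evaluation harness may differ):

```python
def build_fewshot_prompt(shots, question, choices):
    """Concatenate solved examples, then the test question, MMLU-style.
    `shots` is a list of (question, options, answer_letter) tuples."""
    letters = "ABCD"
    blocks = []
    for q, opts, ans in shots:
        lines = [q] + [f"{letters[i]}. {o}" for i, o in enumerate(opts)]
        blocks.append("\n".join(lines) + f"\nAnswer: {ans}")
    lines = [question] + [f"{letters[i]}. {o}" for i, o in enumerate(choices)]
    blocks.append("\n".join(lines) + "\nAnswer:")
    return "\n\n".join(blocks)

shots = [("What is 2 + 2?", ["3", "4", "5", "6"], "B")]
print(build_fewshot_prompt(shots, "What is 3 + 3?", ["5", "6", "7", "8"]))
```

The model's predicted answer is then read off as the next token after the final `Answer:`.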
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("francescofiamingo1/FF_3.1", torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained("francescofiamingo1/FF_3.1")

input_text = "Explain photosynthesis in simple terms."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Known Limitations
- Math reasoning is still weak: the model struggles with multi-step arithmetic and word problems.
- Instruction-count following is imprecise: the model may not reliably honor constraints like "list exactly 5 items".
## What's Next
FF_3.2 will focus on:
- DPO with UltraFeedback dataset for improved preference alignment
- Improved math dataset for stronger quantitative reasoning
## License
Apache 2.0