# Qwen3.5-4B Math Fine-Tuned (Nemotron-SFT-Math-v3)
This model is a fine-tuned version of Qwen3.5-4B, optimized for complex mathematical reasoning and Chain-of-Thought (CoT) problem solving. It was fine-tuned on the nvidia/Nemotron-SFT-Math-v3 dataset using Parameter-Efficient Fine-Tuning (PEFT/LoRA).
## Model Details
- Base Model: Qwen/Qwen3.5-4B
- Fine-Tuning Dataset: nvidia/Nemotron-SFT-Math-v3
- Methodology: LoRA (Rank = 64, Alpha = 32 or 16). The `lora_alpha` scaling is tuned to prevent catastrophic forgetting, so the model retains its conversational abilities while gaining substantially stronger mathematical reasoning.
- Formats: Safetensors (F16) and GGUF (Q8_0)
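To make the Rank/Alpha numbers concrete, here is a minimal sketch of the LoRA update rule (illustrative only; the actual fine-tune was done with PEFT, and the dimensions below are arbitrary). With rank r = 64 and alpha = 32, the adapter contributes a low-rank correction scaled by alpha / r on top of the frozen base weight:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 128, 128, 64, 32

W = rng.standard_normal((d_out, d_in))   # frozen base weight (not trained)
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection (zero-initialized)

scaling = alpha / r                      # 32 / 64 = 0.5
W_effective = W + scaling * (B @ A)      # weight actually used at inference

print(scaling)                           # 0.5
print(np.allclose(W_effective, W))       # True at init, since B starts at zero
```

A smaller alpha at fixed rank shrinks the adapter's contribution, which is why the `lora_alpha` choice controls how far the fine-tune can drift from the base model's behavior.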
## Recommended Generation Parameters
Because this model leverages extensive Chain-of-Thought reasoning to solve math problems, the following generation parameters are highly recommended for the best performance:
```json
{
  "temperature": 1.0,
  "top_p": 0.95,
  "repetition_penalty": 1.1
}
```
Note: a `repetition_penalty` of 1.1 is important to keep the model from occasionally falling into infinite generation loops over very long contexts.
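For intuition about what these three parameters do during decoding, here is an illustrative, self-contained sketch of one sampling step (this is not the model's actual inference code; the toy logits and token ids are made up). Temperature rescales the logits, `repetition_penalty` down-weights already-generated tokens in the HF style, and `top_p` keeps the smallest set of tokens whose cumulative probability reaches the threshold:

```python
import math

def sample_filter(logits, generated, temperature=1.0, top_p=0.95,
                  repetition_penalty=1.1):
    # Penalize previously generated token ids (HF-style:
    # divide positive logits, multiply negative ones).
    logits = list(logits)
    for tok in set(generated):
        if logits[tok] > 0:
            logits[tok] /= repetition_penalty
        else:
            logits[tok] *= repetition_penalty

    # Temperature scaling followed by a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Top-p (nucleus) filtering: keep the highest-probability tokens
    # until their cumulative probability reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return kept  # candidate token ids to renormalize and sample from

print(sample_filter([2.0, 1.0, 0.1, -1.0], generated=[0]))  # → [0, 1, 2]
```

With temperature 1.0 the logits are left untouched, so the CoT traces stay diverse; the nucleus cutoff and the mild repetition penalty are what keep long reasoning chains from degenerating.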
## Use Cases
- Solving complex math word problems (GSM8K).
- Higher-level mathematical reasoning (MATH, AIME).
- Step-by-step logic tracking and proofs.