# Qwen3-4B-Thinking-2507-math
A fine-tuned version of Qwen3-4B-Thinking-2507, optimized for mathematical problem solving.
## Model Details
- Base Model: Qwen/Qwen3-4B-Thinking-2507
- Fine-tuning: GRPO (Group Relative Policy Optimization)
- Task: Mathematical reasoning with chain-of-thought
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adeelahmad/Qwen3-4B-Thinking-2507-math"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)
```
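Qwen3 "Thinking" models interleave chain-of-thought reasoning with the final answer, with the reasoning terminated by a closing `</think>` tag. A minimal sketch of post-processing the generated text (the `split_thinking` helper is hypothetical, not part of this model or the `transformers` API):

```python
# Split a Qwen3-Thinking completion into its reasoning trace and final answer.
# Assumption: the model emits its chain-of-thought followed by a closing
# </think> tag; if the tag is absent, treat the whole text as the answer.
def split_thinking(text: str) -> tuple[str, str]:
    marker = "</think>"
    if marker in text:
        thinking, answer = text.split(marker, 1)
        return thinking.strip(), answer.strip()
    return "", text.strip()


demo = "First, 2 + 2 = 4.</think>The answer is 4."
thinking, answer = split_thinking(demo)
print(thinking)  # First, 2 + 2 = 4.
print(answer)    # The answer is 4.
```

Keeping the reasoning trace separate is useful for evaluation, where typically only the final answer after `</think>` is scored.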