## Overview
Llama-3.2-3B-Calculus-v2 is a specialized large language model fine-tuned for mathematical reasoning, specifically targeting Differential and Integral Calculus. Developed by Khurram Pervez, this model utilizes Chain-of-Thought (CoT) prompting to break down complex mathematical problems into logical, pedagogical steps.
The model was fine-tuned on an NVIDIA GeForce RTX 4060 Ti 16GB using 4-bit quantization (QLoRA) to maximize efficiency while maintaining high reasoning accuracy over 500 training steps.
## Key Features
- Step-by-Step Derivations: Optimized to explain the "why" behind each calculus rule.
- Rule-Based Reasoning: Trained to identify and apply the Product Rule, Chain Rule, and Integration by Parts.
- Calculus Specialist: Targeted performance on Taylor Series expansions, limits, and transcendental functions.
- Efficient Local AI: Designed to run on consumer-grade hardware with minimal VRAM.
## Training Technicalities
- Base Model: unsloth/Llama-3.2-3B-Instruct
- Fine-tuning Method: QLoRA (Rank: 32, Alpha: 32)
- Training Steps: 500
- Final Train Loss: 0.4789
- Optimizer: 8-bit AdamW
- Scheduler: Cosine Decay
- Hardware: Local Ubuntu Workstation (NVIDIA RTX 4060 Ti 16GB)
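
The cosine decay schedule listed above anneals the learning rate from its peak down to (near) zero across the 500 steps. A minimal sketch of that curve, ignoring any warmup phase and assuming an illustrative peak learning rate of 2e-4 (the actual value is not published with this card):

```python
import math

def cosine_decay_lr(step: int, total_steps: int, peak_lr: float) -> float:
    """Cosine-annealed learning rate: peak_lr at step 0, ~0 at total_steps."""
    progress = min(step, total_steps) / total_steps
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Sample the schedule over the 500-step run (peak LR of 2e-4 is assumed)
for step in (0, 250, 500):
    print(f"step {step}: lr = {cosine_decay_lr(step, 500, 2e-4):.2e}")
```

At step 0 this returns the full peak rate, at the halfway point exactly half of it, and at step 500 it reaches zero.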
## Usage Instructions
### Simple Inference
You can run this model using the unsloth library for 2x faster inference. We recommend a low temperature (0.1) for mathematical stability.
```python
from unsloth import FastLanguageModel
import torch

# 1. Load model and tokenizer in 4-bit
model, tokenizer = FastLanguageModel.from_pretrained(
    "Khurram123/Llama-3.2-3B-Calculus-v2",
    max_seq_length = 2048,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)

# 2. Define a calculus problem
problem = "Find the derivative of f(x) = x^2 * ln(x) step by step."

# 3. Apply the Llama 3.2 Instruct chat template
messages = [{"role": "user", "content": problem}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    return_tensors = "pt",
).to("cuda")

# 4. Generate the solution (low temperature for stable math;
#    do_sample must be enabled for temperature to take effect)
outputs = model.generate(
    input_ids = inputs,
    max_new_tokens = 1024,
    temperature = 0.1,
    do_sample = True,
)

# 5. Decode and print only the assistant's reply
response = tokenizer.decode(outputs[0], skip_special_tokens = True)
print(response.split("assistant")[-1].strip())
```
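
For the sample problem above, a correct response should apply the Product Rule; the expected derivation is:

```latex
f(x) = x^2 \ln x
f'(x) = \frac{d}{dx}\!\left[x^2\right] \ln x + x^2 \,\frac{d}{dx}\!\left[\ln x\right]
      = 2x \ln x + x^2 \cdot \frac{1}{x}
      = 2x \ln x + x
```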
## Dataset Reference
The model was fine-tuned using a filtered subset of the MathInstruct dataset, focusing specifically on calculus-related instructional pairs to enhance symbolic manipulation and logical derivation.
- Dataset Name: MathInstruct
- Paper: MathInstruct: A Compiled Instruction Dataset for Mathematical Reasoning
- Source: TIGER-Lab (University of Waterloo, Ohio State University, et al.)
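
The exact filtering procedure is not published with this model. A minimal sketch of keyword-based selection of calculus instruction pairs, where the `CALC_KEYWORDS` list and the record schema are assumptions for illustration only:

```python
# Hypothetical keyword filter for selecting calculus items from an
# instruction dataset; keywords and record schema are illustrative only.
CALC_KEYWORDS = ("derivative", "integral", "limit", "taylor series",
                 "chain rule", "product rule", "integration by parts")

def is_calculus_pair(record: dict) -> bool:
    """Keep a record if its instruction text mentions a calculus topic."""
    text = record.get("instruction", "").lower()
    return any(kw in text for kw in CALC_KEYWORDS)

samples = [
    {"instruction": "Find the derivative of x^2 * ln(x)."},
    {"instruction": "Solve the linear system 2x + y = 5, x - y = 1."},
]
calculus_subset = [r for r in samples if is_calculus_pair(r)]
print(len(calculus_subset))  # the linear-algebra sample is filtered out
```

In practice the same predicate could be passed to `datasets.Dataset.filter` after loading TIGER-Lab's MathInstruct from the Hugging Face Hub.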
## Citation
If you use this model or the underlying data, please cite the original MathInstruct paper:
@article{yue2023mathinstruct,
title={Mathinstruct: A compiled instruction dataset for mathematical reasoning},
author={Yue, Xiang and Qu, Xingwei and Zhang, Ge and Yao, Liang and Huo, Shijie and Sun, Wei and Caswell, Isaac and Xie, Wenhu and others},
journal={arXiv preprint arXiv:2309.04408},
year={2023}
}