---
library_name: transformers
model_name: SmolLM2Prover
tags:
- text-generation
- proof
- cot
- reasoning
- math
- calculus
- logic
- sft
- trl
- generated_from_trainer
- finetune
- symbioticai
- convergentintel
language:
- en
license: apache-2.0
datasets:
- AI-MO/NuminaMath-1.5
base_model:
- prithivMLmods/SmolLM2-CoT-360M
pipeline_tag: text-generation
---

# Model Card for SmolLM2Prover

**SmolLM2Prover** is a specialized, fine-tuned version of [prithivMLmods/SmolLM2-CoT-360M](https://huggingface.co/prithivMLmods/SmolLM2-CoT-360M). While retaining the strong conversational abilities of its base model, this version has been enhanced to excel at deep thinking, logical reasoning, and higher-level mathematics, with a focus on generating step-by-step proofs and explanations (Chain-of-Thought).

The model was fine-tuned through multiple rounds of Supervised Fine-Tuning (SFT) with the [TRL](https://github.com/huggingface/trl) library on a curated dataset, improving its ability to follow complex instructions and reason through problems.

## Model Details

* **Base Model:** [prithivMLmods/SmolLM2-CoT-360M](https://huggingface.co/prithivMLmods/SmolLM2-CoT-360M)
* **Fine-tuning Library:** TRL (Transformer Reinforcement Learning)
* **Specialization:** Mathematical reasoning, proof generation, Chain-of-Thought (CoT)
* **Training Data:** Fine-tuned on `AI-MO/NuminaMath-1.5` and an additional ~1 million tokens of custom-formatted reasoning data.

## How to Use

This model is intended for text-generation tasks that require logical reasoning or advanced conversation.

### Using the Pipeline

The easiest way to use the model is with the `transformers` pipeline.

```python
from transformers import pipeline
import torch

model_id = "reaperdoesntknow/SMOLM2Prover"
prompt = "Prove that the derivative of f(x) = x^2 is f'(x) = 2x using the limit definition of a derivative."

generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,  # Or torch.float16 if bfloat16 is not available
    device_map="auto"
)

# Using a chat format for better instruction following
messages = [
    {"role": "user", "content": f"You are a helpful math assistant. Please solve the following problem step-by-step.\n\n{prompt}"}
]

output = generator(messages, max_new_tokens=512, return_full_text=False)
print(output[0]["generated_text"])
```

### Manual Usage

For more control, you can use `AutoModelForCausalLM` and `AutoTokenizer` directly.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "reaperdoesntknow/SMOLM2Prover"
prompt = "Prove that the derivative of f(x) = x^2 is f'(x) = 2x using the limit definition of a derivative."

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Or torch.float16
    device_map="auto"
)

# Apply the chat template for proper formatting
messages = [
    {"role": "user", "content": f"You are a helpful math assistant. Please solve the following problem step-by-step.\n\n{prompt}"}
]

tokenized_chat = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(tokenized_chat, max_new_tokens=512)
decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)

# Print only the generated part
print(decoded_output.split("assistant\n")[-1])
```

### Training

The model underwent several rounds of Supervised Fine-Tuning (SFT) using TRL's `SFTTrainer`.

* **Training Data:** The primary dataset was `AI-MO/NuminaMath-1.5`, augmented with approximately 1 million additional tokens. This data was formatted with a specific prompt structure designed to elicit step-by-step, chain-of-thought reasoning from the model.
* **Process:** The iterative SFT approach allowed progressive refinement of the model's reasoning capabilities.
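The exact prompt template used during training is not published with this card. As a rough illustration only, a formatting function in the spirit described above might look like the following sketch; the `problem`/`solution` field names follow the `AI-MO/NuminaMath-1.5` schema, while the `format_example` helper and its instruction text are hypothetical.

```python
# Hypothetical sketch: mapping one NuminaMath-style record into a
# chat-format SFT example. The template here is illustrative, not the
# actual one used to train SmolLM2Prover.

def format_example(example: dict) -> dict:
    """Turn a problem/solution pair into a messages-style SFT record."""
    return {
        "messages": [
            {
                "role": "user",
                "content": (
                    "You are a helpful math assistant. "
                    "Please solve the following problem step-by-step.\n\n"
                    + example["problem"]
                ),
            },
            # The reference solution becomes the assistant turn the
            # model is trained to reproduce.
            {"role": "assistant", "content": example["solution"]},
        ]
    }

record = format_example({
    "problem": "Prove that the sum of two even integers is even.",
    "solution": "Let a = 2m and b = 2n. Then a + b = 2(m + n), which is even.",
})
print(record["messages"][0]["role"])  # -> user
```

A list of such records is the conversational format that TRL's `SFTTrainer` accepts directly, which is why chat-style formatting at training time pairs naturally with `apply_chat_template` at inference time.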
## Framework Versions

* Transformers: 4.56.0
* Pytorch: 2.8.0+cu126
* TRL: 0.22.2
* Datasets: 4.0.0
* Tokenizers: 0.22.0

### Intended Use

This model is a versatile tool suitable for a range of applications, from everyday conversation to complex problem-solving.

* **Primary Use Cases (Specialized Skills):**
  * Educational tools for higher-level mathematics and logic.
  * Automated proof generation and verification.
  * Step-by-step problem-solving assistants for complex topics.
  * Serving as a "thinking" component for applications requiring deep reasoning.
* **General Use Cases:**
  * General-purpose conversation and advanced chatbot applications.
  * Complex instruction-following tasks.
  * Content generation that requires logical consistency.

### Limitations and Bias

* **Mathematical Accuracy:** While highly capable, the model can still make errors or "hallucinate" incorrect steps or solutions in complex mathematical proofs. All outputs, especially for critical applications, should be verified by a human expert.
* **Domain Performance:** The model's performance is most reliable on problems similar to its training data. While it is designed to handle higher-level math and deep thinking, its accuracy in novel or esoteric domains should be carefully evaluated.
* **Inherited Bias:** This model inherits any biases present in the base model (SmolLM2-CoT-360M) and the training datasets.

## Discrepancy Calculus Foundation

This model is part of the [Convergent Intelligence LLC: Research Division](https://huggingface.co/reaperdoesntknow) portfolio. All models in this portfolio are developed under the Discrepancy Calculus (DISC) framework, a measure-theoretic approach to understanding and controlling the gap between what a model *should* produce and what it *actually* produces.
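One way to read that gap formally is as a per-step distance between a target distribution and the model's current distribution. The notation below is an illustrative sketch of ours, not taken from the DISC papers:

```latex
% Illustrative only: one possible reading of the training-step discrepancy,
% where p^{*} is the target behavior, p_{\theta_t} is the model at step t,
% and d is some divergence (e.g. KL).
D_t(x) \;=\; d\!\left( p^{*}(\cdot \mid x),\; p_{\theta_t}(\cdot \mid x) \right)
```

For the framework's actual definitions, see the DISC references cited below.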
DISC treats training singularities (loss plateaus, mode collapse, catastrophic forgetting) not as failures to be smoothed over, but as **structural signals** that reveal the geometry of the learning problem. Key concepts:

- **Discrepancy Operator (D):** Measures the gap between expected and observed behavior at each training step.
- **Jump Sets:** Boundaries where model behavior changes discontinuously; these are *features*, not bugs.
- **Ghost Imprinting:** Teacher knowledge that transfers to student models through weight-space topology rather than an explicit distillation signal.

For the full mathematical treatment, see [Discrepancy Calculus: Foundations and Core Theory](https://huggingface.co/reaperdoesntknow/Discrepancy_Calculus) (DOI: 10.57967/hf/8194).

**Citation chain:** [Structure Over Scale](https://huggingface.co/reaperdoesntknow/Structure-Over-Scale) (DOI: 10.57967/hf/8165) → [Three Teachers to Dual Cognition](https://huggingface.co/reaperdoesntknow/DualMind_Methodolgy) (DOI: 10.57967/hf/8184) → [Discrepancy Calculus](https://huggingface.co/reaperdoesntknow/Discrepancy_Calculus) (DOI: 10.57967/hf/8194)

## Citations

If you use TRL in your work, please cite the library:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```

---

## Convergent Intelligence Portfolio

*Part of the [Standalone Models](https://huggingface.co/reaperdoesntknow) by [Convergent Intelligence LLC: Research Division](https://huggingface.co/reaperdoesntknow)*

### Related Models

| Model | Downloads | Format |
|-------|-----------|--------|
| [SMOLM2Prover-GGUF](https://huggingface.co/reaperdoesntknow/SMOLM2Prover-GGUF) | 150 | GGUF |
| [DeepReasoning_1R](https://huggingface.co/reaperdoesntknow/DeepReasoning_1R) | 16 | HF |
| [SAGI](https://huggingface.co/reaperdoesntknow/SAGI) | 3 | HF |
| [S-AGI](https://huggingface.co/reaperdoesntknow/S-AGI) | 0 | HF |

### Top Models from Our Lab

| Model | Downloads |
|-------|-----------|
| [Qwen3-1.7B-Thinking-Distil](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Thinking-Distil) | 501 |
| [LFM2.5-1.2B-Distilled-SFT](https://huggingface.co/reaperdoesntknow/LFM2.5-1.2B-Distilled-SFT) | 342 |
| [Qwen3-1.7B-Coder-Distilled-SFT](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT) | 302 |
| [Qwen3-0.6B-Distilled-30B-A3B-Thinking-SFT-GGUF](https://huggingface.co/reaperdoesntknow/Qwen3-0.6B-Distilled-30B-A3B-Thinking-SFT-GGUF) | 203 |
| [Qwen3-1.7B-Coder-Distilled-SFT-GGUF](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT-GGUF) | 194 |

**Total Portfolio: 41 models | 2,781 total downloads**

*Last updated: 2026-03-28 12:56 UTC*

---

## From the Convergent Intelligence Portfolio

**[DistilQwen Collection](https://huggingface.co/collections/reaperdoesntknow/distilqwen-69bf40ec669117e3f069ef1c)**: Our only BF16 series. Proof-weighted distillation from Qwen3-30B-A3B → 1.7B and 0.6B on H100. Three teacher variants (Instruct, Thinking, Coder), nine models, 2,788 combined downloads. The rest of the portfolio proves structure beats scale on CPU; this collection shows what happens when you give the methodology real hardware.

Top model: [Qwen3-1.7B-Coder-Distilled-SFT](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT), 508 downloads

Full methodology: [Structure Over Scale (DOI: 10.57967/hf/8165)](https://doi.org/10.57967/hf/8165)

*Convergent Intelligence LLC: Research Division*