🌟 GGUF Version

"https://huggingface.co/mradermacher/Qwen3.5-27B-kimi-k2.5-Reasoning-Distilled-GGUF"

🌟 Qwen3.5-27B-kimi-k2.5-high-Reasoning-Distilled

Build Environment Upgrades:

  • Fine-tuning Framework: **Unsloth**
  • Core Dependencies: **Transformers**
  • Thinking mode is enabled by default, allowing the agent to run continuously for over 9 minutes without interruption (see the loading sketch below).
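
For reference, a minimal loading sketch, assuming the checkpoint follows Qwen3's chat-template convention where an `enable_thinking` flag controls the `<think>` block (the flag name and generation settings are assumptions, not confirmed for this repo):

```python
# Minimal sketch: load the model and generate with thinking mode left on.
# `enable_thinking` follows the Qwen3 chat-template convention and is an
# assumption for this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ali-Yaser/Qwen3.5-27B-kimi-k2.5-Reasoning-Distilled"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",  # the card lists BF16 tensors
    device_map="auto",
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=True,  # on by default; shown explicitly for clarity
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=4096)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```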

🗺️ Training Pipeline Overview

Base Model (Qwen3.5-27B)
 │
 ▼
Supervised Fine-Tuning (SFT) + LoRA
 │
 ▼
Final Model (kimi-k2.5-Distilled, text-only)
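
A minimal sketch of the SFT + LoRA stage with Unsloth; the LoRA hyperparameters (rank, alpha, target modules) and 4-bit loading are illustrative assumptions, not the card's actual training configuration:

```python
# Minimal sketch of the SFT + LoRA setup with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3.5-27B",  # base model from the card
    max_seq_length=8192,            # assumption; long enough for <think> traces
    load_in_4bit=True,              # QLoRA-style memory saving (assumption)
)

# Attach LoRA adapters; r/alpha/target_modules are illustrative values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)
```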

🔹 Supervised Fine-Tuning (SFT)

  • Objective: Inject high-density reasoning logic and enforce a strict problem-solving format in which an internal thinking phase precedes the final response.
  • Methodology: We used Unsloth for memory- and compute-efficient training. A critical component of this stage is the train_on_responses_only strategy, which masks the instruction tokens so the loss is computed purely over the <think> sequences and the subsequent solutions (see the sketch after this list).
  • Format Enforcement: All training samples were systematically normalized so the model strictly abides by the structure <think> {internal reasoning} </think>\n {final answer}.
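
Continuing the sketch above, the response-only masking might look like the following; the TRL trainer settings and the Qwen-style chat markers are assumptions:

```python
# Minimal sketch of response-only loss masking with Unsloth + TRL.
from trl import SFTConfig, SFTTrainer
from unsloth.chat_templates import train_on_responses_only

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # samples normalized to the <think> format
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)

# Mask everything up to the assistant turn so the loss covers only the
# <think> reasoning and the final answer.
trainer = train_on_responses_only(
    trainer,
    instruction_part="<|im_start|>user\n",
    response_part="<|im_start|>assistant\n",
)
trainer.train()
```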

📚 All Datasets Used

The dataset is a single high-quality, large-scale corpus of 450k samples:

| Dataset Name | Description / Purpose |
| --- | --- |
| Ali-Yaser/KIMI-K2.5-450000x-ShareGPT | Comprehensive kimi-k2.5 high-reasoning trajectories. |
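
A minimal loading sketch, reusing the tokenizer from the sketch above; the ShareGPT-style `conversations`/`from`/`value` field names are assumptions about the dataset schema:

```python
# Minimal sketch: load the corpus and map ShareGPT turns onto the
# tokenizer's chat template (field names are assumptions).
from datasets import load_dataset

dataset = load_dataset("Ali-Yaser/KIMI-K2.5-450000x-ShareGPT", split="train")

def to_chat_text(example):
    role_map = {"system": "system", "human": "user", "gpt": "assistant"}
    messages = [
        {"role": role_map[turn["from"]], "content": turn["value"]}
        for turn in example["conversations"]
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_chat_text)
```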

🌟 Core Skills & Capabilities

  1. Modular & Structured Thinking: Inheriting traits from its high-level teacher, the model confidently parses the prompt and lays out a sequential plan in its <think> block, rather than resorting to exploratory "trial-and-error" self-doubt (a parsing sketch follows this list).
  2. Code-Focused Distillation: The dataset includes 450,000 examples distilled from Kimi K2.5, a 1-trillion-parameter model. Roughly 60% of the data covers programming languages such as Python, C++, and Rust, 10% covers mathematics, and 30% covers other topics. This mix makes the fine-tuned model notably stronger at producing code.
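
A small helper for splitting a generation into its reasoning and answer parts, assuming the strict <think> format enforced during SFT:

```python
# Minimal sketch: split a completion into the <think> trace and the answer.
import re

def split_reasoning(text: str) -> tuple[str, str]:
    match = re.search(r"<think>(.*?)</think>\s*(.*)", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()  # model skipped the think block
    return match.group(1).strip(), match.group(2).strip()

sample = "<think>A rational sqrt(2) = p/q forces p and q both even.</think>\nsqrt(2) is irrational."
thinking, answer = split_reasoning(sample)
print(thinking)  # -> A rational sqrt(2) = p/q forces p and q both even.
print(answer)    # -> sqrt(2) is irrational.
```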

⚠️ Warning

  • Hallucination Risk: While its reasoning is strong, the model remains an autoregressive LLM; facts asserted during the thinking sequence may occasionally be hallucinated, so verify any claims about real-world events independently.