This model is a pruned variant of ananayarora/Kimi-K2.5-BF16 that retains the first 2 layers of the 61-layer original architecture. It is intended for pipeline testing and performance research, not production use.
Made with ❤️ by Model Pruner
Kimi K2.5 – BF16 Safetensors
BF16 (bfloat16) conversion of Moonshot AI's Kimi K2.5.
Converted from the official native INT4 weights so that LlamaFactory + KTransformers can run LoRA SFT directly without a per-run conversion step.
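As background on the target format (a minimal illustrative sketch, not the conversion script actually used for this release): bfloat16 keeps float32's sign bit and 8-bit exponent but truncates the mantissa to 7 bits, so a float32 value can be converted by rounding away the low 16 bits of its bit pattern.

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Encode a Python float as the 16-bit bfloat16 bit pattern."""
    # Reinterpret the float32 encoding as a 32-bit integer.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # Round-to-nearest-even on the 16 low bits being discarded.
    rounding = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding) >> 16) & 0xFFFF

def bf16_bits_to_f32(b: int) -> float:
    """Decode a bfloat16 bit pattern back to a float (exact: bf16 ⊂ f32)."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

print(hex(f32_to_bf16_bits(1.0)))                       # → 0x3f80
print(bf16_bits_to_f32(f32_to_bf16_bits(3.14159)))      # → 3.140625
```

This also illustrates why the BF16 checkpoint is roughly 4× larger than a 4-bit one: every parameter occupies two bytes instead of half a byte.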
| Spec | Value |
|---|---|
| Source | moonshotai/Kimi-K2.5 |
| Format | safetensors (BF16) |
| Shards | 64 |
| Total size | ~1913 GB |
| Architecture | MoE – 1T total, 32B active |
| Context | 256K tokens |
Usage with LlamaFactory + KTransformers
```yaml
model_name_or_path: ananayarora/Kimi-K2.5-BF16
stage: sft
finetuning_type: lora
bf16: true
use_kt: true
```
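Assuming the options above are saved to a YAML file (the filename below is a placeholder), a training run could be launched with LlamaFactory's CLI:

```shell
# Hypothetical launch; llamafactory-cli is LLaMA-Factory's command-line entry point.
# kimi_k25_lora_sft.yaml stands in for a file containing the config above.
llamafactory-cli train kimi_k25_lora_sft.yaml
```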
License
Same as the base model: Modified MIT.