granite-4.0-h-tiny-DISTILL-OPUS-4.5-think

This model is a fine-tuned version of ibm-granite/granite-4.0-h-tiny, trained on high-reasoning conversational data generated by Claude Opus 4.5.

Model Details

  • Base Model: ibm-granite/granite-4.0-h-tiny
  • Fine-tuning Dataset: TeichAI/claude-4.5-opus-high-reasoning-250x
  • Context Length: 1,048,576 tokens
  • Special Feature: Thinking/Reasoning with <think> tags

Quantized Versions (GGUF)

🔗 GGUF versions available here: granite-4.0-h-tiny-DISTILL-OPUS-4.5-think-GGUF

Format   Size         Use Case
Q2_K     Smallest     Low memory, reduced quality
Q4_K_M   Recommended  Best balance
Q5_K_M   Good         Higher quality
Q8_0     Large        Near lossless
F16      Largest      Original precision

Usage

Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("glogwa68/granite-4.0-h-tiny-DISTILL-OPUS-4.5-think")
tokenizer = AutoTokenizer.from_pretrained("glogwa68/granite-4.0-h-tiny-DISTILL-OPUS-4.5-think")

# Build the prompt with the model's chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)

# Generate; the response may include a <think>...</think> reasoning block
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
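Because the model emits its chain of thought inside `<think>` tags, you will often want to separate the reasoning from the final answer before displaying it. A minimal sketch, assuming the output contains at most one `<think>...</think>` block (the `split_thinking` helper below is illustrative, not part of the model's tooling):

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split generated text into (reasoning, answer).

    Assumes at most one <think>...</think> block; if none is
    found, the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return reasoning, answer

reasoning, answer = split_thinking(
    "<think>The user is greeting me.</think>Hello! I'm doing well."
)
print(answer)  # → Hello! I'm doing well.
```

If the output is truncated before the closing `</think>` tag (e.g. `max_new_tokens` was too small), the helper returns everything as the answer, so you may want to raise the token budget for reasoning-heavy prompts.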

Ollama (GGUF)

ollama run hf.co/glogwa68/granite-4.0-h-tiny-DISTILL-OPUS-4.5-think-GGUF:Q4_K_M

llama.cpp

llama-cli --hf-repo glogwa68/granite-4.0-h-tiny-DISTILL-OPUS-4.5-think-GGUF --hf-file granite-4.0-h-tiny-distill-opus-4.5-think-q4_k_m.gguf -p "Hello"

Training Details

  • Epochs: 2
  • Learning Rate: 2e-5
  • Batch Size: 1 (with gradient accumulation)
  • Precision: FP16
  • Hardware: Multi-GPU with DeepSpeed ZeRO-3
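A DeepSpeed configuration consistent with these settings might look like the sketch below. The gradient accumulation step count is an illustrative assumption (the card does not state it), and the optimizer choice is likewise assumed, not taken from the actual training run:

```json
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": 8,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 3 },
  "optimizer": {
    "type": "AdamW",
    "params": { "lr": 2e-5 }
  }
}
```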

License

Apache 2.0
