# Medina-Qwen3-14B-OpenClaw
A LoRA fine-tune of TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill trained on OpenClaw tool-call data — optimized for agentic reasoning with structured tool invocation.
The base model is itself a Claude 4.5 Opus high-reasoning distillation of Qwen3-14B, making this a compact but capable agent model that runs comfortably on consumer hardware (M3 MacBook Pro, single 24GB GPU).
## GGUF Downloads
| Quantization | Size | Use case |
|---|---|---|
| Q4_K_M | 8.4 GB | ✅ Recommended — runs on M3 MacBook, 16GB+ RAM |
| Q8_0 | 14.6 GB | Near-lossless, 24GB VRAM or 32GB unified memory |
## Training Details
| Parameter | Value |
|---|---|
| Base model | TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill |
| Training GPU | NVIDIA RTX 4090 (24GB) |
| Framework | Unsloth + TRL SFTTrainer |
| Dataset | OpenClaw tool-call examples (250 examples) |
| Epochs | 3 |
| LoRA rank | r=32, alpha=64, rsLoRA=True |
| LoRA dropout | 0.05 |
| LoRA targets | q/k/v/o/gate/up/down proj |
| Context window | 4096 tokens |
| Batch size | 2 (effective: 16 with grad accum) |
| Learning rate | 2e-4 (cosine schedule, 5% warmup) |
| Quantization | 4-bit NF4 during training |
| Optimizer | AdamW 8-bit |
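The LoRA hyperparameters in the table above can be expressed as a `peft` `LoraConfig`. This is a sketch of the implied configuration, not the exact training script; the `target_modules` names assume Qwen3's standard projection-layer naming:

```python
from peft import LoraConfig

# Sketch of the adapter configuration implied by the table above
# (assumption: standard Qwen3 projection-module names).
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    use_rslora=True,      # rank-stabilized LoRA scaling
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```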
## What It Does
This adapter teaches the model the OpenClaw tool-calling format — a structured XML-style invocation pattern used by the OpenClaw AI agent platform:
```xml
<function_calls>
<invoke name="TOOL_NAME">
<parameter name="PARAM_NAME">value</parameter>
</invoke>
</function_calls>
```
Supported tools in the training data: `exec`, `read`, `write`, `edit`, `web_search`, `web_fetch`, `browser`, `memory_search`, `memory_get`, `message`, `cron`, `nodes`, `image`, `pdf`, `sessions_spawn`, `session_status`
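For illustration, tool calls in this format can be extracted from model output with the standard library alone. `parse_tool_calls` below is a hypothetical helper, not part of OpenClaw itself:

```python
import re
import xml.etree.ElementTree as ET

def parse_tool_calls(text: str):
    """Extract (tool_name, params) pairs from OpenClaw-style <function_calls> blocks."""
    calls = []
    # Find each complete <function_calls>...</function_calls> block in the reply.
    for block in re.findall(r"<function_calls>.*?</function_calls>", text, re.DOTALL):
        root = ET.fromstring(block)
        for invoke in root.findall("invoke"):
            params = {p.get("name"): (p.text or "") for p in invoke.findall("parameter")}
            calls.append((invoke.get("name"), params))
    return calls

reply = """<function_calls>
<invoke name="read">
<parameter name="path">notes.txt</parameter>
</invoke>
</function_calls>"""
print(parse_tool_calls(reply))  # [('read', {'path': 'notes.txt'})]
```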
## Usage with llama.cpp / Ollama
```bash
# Ollama (Q4_K_M)
ollama run hf.co/peterjohannmedina/Medina-Qwen3-14B-OpenClaw:Q4_K_M

# llama.cpp direct
./llama-cli -m Medina-Qwen3-14B-OpenClaw-Q4_K_M.gguf \
  --ctx-size 4096 -p "You are an AI assistant with access to tools..."
```
## Usage with Transformers (LoRA adapter)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = AutoModelForCausalLM.from_pretrained(
    "TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "peterjohannmedina/Medina-Qwen3-14B-OpenClaw")
tokenizer = AutoTokenizer.from_pretrained("peterjohannmedina/Medina-Qwen3-14B-OpenClaw")

# Chat-style generation
messages = [{"role": "user", "content": "List the files in the current directory."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
## Companion Model
For a larger, more capable version trained on the same dataset:
- Medina-Qwen3.5-27B-OpenClaw (Q4_K_M: 15.4 GB, Q8_0: 26.6 GB)
## License
Apache 2.0 — same as the base model.