Medina-Qwen3-14B-OpenClaw

A LoRA fine-tune of TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill trained on OpenClaw tool-call data — optimized for agentic reasoning with structured tool invocation.

The base model is itself a Claude 4.5 Opus high-reasoning distillation of Qwen3-14B, making this a compact but capable agent model that runs comfortably on consumer hardware (M3 MacBook Pro, single 24GB GPU).


GGUF Downloads

Quantization Size Use case
Q4_K_M 8.4 GB ✅ Recommended — runs on M3 MacBook, 16GB+ RAM
Q8_0 14.6 GB Near-lossless, 24GB VRAM or 32GB unified memory
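As a rough sanity check on the quantization levels above, you can back out the average bits stored per weight from the file sizes. This sketch assumes ~14.8B parameters for Qwen3-14B and treats the listed sizes as decimal gigabytes, both approximations:

```python
# Rough bits-per-weight estimate from GGUF file size.
# Assumes ~14.8B parameters (approximate) and decimal GB.
PARAMS = 14.8e9

def bits_per_weight(file_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in GB to average bits per parameter."""
    return file_gb * 8e9 / params

print(f"Q4_K_M: {bits_per_weight(8.4):.1f} bits/weight")   # ~4.5
print(f"Q8_0:   {bits_per_weight(14.6):.1f} bits/weight")  # ~7.9
```

The extra ~0.5 bit over the nominal 4 and 8 comes from quantization scales/metadata and tensors kept at higher precision.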

Training Details

Parameter Value
Base model TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill
Training GPU NVIDIA RTX 4090 (24GB)
Framework Unsloth + TRL SFTTrainer
Dataset OpenClaw tool-call data (250 examples)
Epochs 3
LoRA rank r=32, alpha=64, rsLoRA=True
LoRA dropout 0.05
LoRA targets q/k/v/o/gate/up/down proj
Context window 4096 tokens
Batch size 2 (effective: 16 with grad accum)
Learning rate 2e-4 (cosine schedule, 5% warmup)
Quantization 4-bit NF4 during training
Optimizer AdamW 8-bit
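Two of the numbers above are derived rather than set directly. Rank-stabilized LoRA (rsLoRA=True) changes the adapter scaling factor from the classic alpha/r to alpha/sqrt(r), and the effective batch size comes from gradient accumulation. A minimal arithmetic sketch using the table's values:

```python
import math

# Values from the training table above.
r, alpha = 32, 64
per_device_batch, effective_batch = 2, 16

standard_scale = alpha / r            # classic LoRA scaling: 2.0
rslora_scale = alpha / math.sqrt(r)   # rsLoRA scaling: ~11.31

# Gradient accumulation steps needed to reach the effective batch size.
grad_accum_steps = effective_batch // per_device_batch  # 8

print(f"LoRA scale: {standard_scale}, rsLoRA scale: {rslora_scale:.2f}")
print(f"grad accumulation steps: {grad_accum_steps}")
```

The larger rsLoRA scale keeps update magnitudes stable at higher ranks, which is why it pairs naturally with r=32 here.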

What It Does

This adapter teaches the model the OpenClaw tool-calling format — a structured XML-style invocation pattern used by the OpenClaw AI agent platform:

<function_calls>
<invoke name="TOOL_NAME">
<parameter name="PARAM_NAME">value</parameter>
</invoke>
</function_calls>

Supported tools in training data: exec, read, write, edit, web_search, web_fetch, browser, memory_search, memory_get, message, cron, nodes, image, pdf, sessions_spawn, session_status
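Since the invocation blocks are well-formed XML, extracting calls from model output is straightforward. The helper below is a hypothetical sketch, not part of OpenClaw or the model; it assumes parameter values are XML-safe (no raw `<` or `&`):

```python
import re
from xml.etree import ElementTree

def parse_tool_calls(text: str):
    """Extract (tool_name, params) pairs from OpenClaw-style output.

    Hypothetical helper: scans for <function_calls> blocks and parses
    each <invoke> element inside them.
    """
    calls = []
    for block in re.findall(r"<function_calls>.*?</function_calls>", text, re.DOTALL):
        root = ElementTree.fromstring(block)
        for invoke in root.iter("invoke"):
            params = {p.get("name"): (p.text or "") for p in invoke.iter("parameter")}
            calls.append((invoke.get("name"), params))
    return calls

out = """<function_calls>
<invoke name="read">
<parameter name="path">notes.txt</parameter>
</invoke>
</function_calls>"""
print(parse_tool_calls(out))  # [('read', {'path': 'notes.txt'})]
```

A production harness would also validate the tool name against the supported list above before dispatching.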


Usage with llama.cpp / Ollama

# Ollama (Q4_K_M)
ollama run hf.co/peterjohannmedina/Medina-Qwen3-14B-OpenClaw:Q4_K_M

# llama.cpp direct
./llama-cli -m Medina-Qwen3-14B-OpenClaw-Q4_K_M.gguf \
  --ctx-size 4096 -p "You are an AI assistant with access to tools..."

Usage with Transformers (LoRA adapter)

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = AutoModelForCausalLM.from_pretrained(
    "TeichAI/Qwen3-14B-Claude-4.5-Opus-High-Reasoning-Distill",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "peterjohannmedina/Medina-Qwen3-14B-OpenClaw")
tokenizer = AutoTokenizer.from_pretrained("peterjohannmedina/Medina-Qwen3-14B-OpenClaw")

Companion Model

For a larger, more capable version trained on the same dataset:


License

Apache 2.0 — same as the base model.
