The model hertz-hwang/Qwen3.5-27B-OpenClaw-mlx-6.5bit was converted to MLX format from peterjohannmedina/Medina-Qwen3.5-27B-OpenClaw-Merged using mlx-lm version 0.31.1.
Medina-Qwen3.5-27B-OpenClaw
A LoRA fine-tune of Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled trained on OpenClaw tool-call data — optimized for agentic reasoning with structured tool invocation.
The base model is a Claude 4.6 Opus reasoning distillation of Qwen3.5-27B. This fine-tune adds structured tool-calling capability in the OpenClaw XML format, making it suitable for local agentic deployments.
What It Does
This adapter teaches the model the OpenClaw tool-calling format — a structured XML-style invocation pattern used by the OpenClaw AI agent platform:
```xml
<function_calls>
<invoke name="TOOL_NAME">
<parameter name="PARAM_NAME">value</parameter>
</invoke>
</function_calls>
```
Supported tools in training data: exec, read, write, edit, web_search, web_fetch, browser, memory_search, memory_get, message, cron, nodes, image, pdf, sessions_spawn, session_status
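Because the invocation format above is well-formed XML, a tool-call block can be parsed with the standard library alone. The sketch below is a hypothetical helper (not part of OpenClaw or this repository) showing one way to turn a block into tool names and parameter dicts:

```python
import xml.etree.ElementTree as ET

def parse_tool_calls(block: str):
    """Parse an OpenClaw-style <function_calls> block into (tool, params) pairs.

    Hypothetical helper for illustration; assumes the block is well-formed XML.
    """
    root = ET.fromstring(block)
    calls = []
    for invoke in root.findall("invoke"):
        params = {p.get("name"): (p.text or "") for p in invoke.findall("parameter")}
        calls.append((invoke.get("name"), params))
    return calls

sample = (
    '<function_calls>'
    '<invoke name="read">'
    '<parameter name="path">notes.txt</parameter>'
    '</invoke>'
    '</function_calls>'
)
print(parse_tool_calls(sample))  # → [('read', {'path': 'notes.txt'})]
```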
Use with mlx
```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("hertz-hwang/Qwen3.5-27B-OpenClaw-mlx-6.5bit")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
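In an agent loop, the text that `generate` returns may mix free-form reasoning with a tool-call block, so the caller needs to detect and slice out the `<function_calls>` span before dispatching. A minimal sketch, assuming the model emits at most one block per turn (`split_tool_call` is a hypothetical helper, not part of mlx-lm):

```python
import re

# Matches one OpenClaw-style tool-call block, including its contents.
TOOL_CALL_RE = re.compile(r"<function_calls>.*?</function_calls>", re.DOTALL)

def split_tool_call(response: str):
    """Return (leading_text, tool_call_block); block is None when absent.

    Hypothetical helper: assumes at most one block per response.
    """
    m = TOOL_CALL_RE.search(response)
    if m is None:
        return response, None
    return response[: m.start()].rstrip(), m.group(0)

text, block = split_tool_call(
    "Let me check that file.\n"
    '<function_calls><invoke name="read">'
    '<parameter name="path">notes.txt</parameter>'
    "</invoke></function_calls>"
)
print(text)   # → Let me check that file.
```

The leading text can be shown to the user while the block goes to the tool dispatcher.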
Model tree for hertz-hwang/Qwen3.5-27B-OpenClaw-mlx-6.5bit
Base model: Qwen/Qwen3.5-27B