qwen-3b-ccmcp-v1

Qwen 3B trained for Claude Chrome MCP tool calling

Attribution: Built with Qwen

Model Description

A fine-tune of Qwen2.5-Coder-3B-Instruct for MCP (Model Context Protocol) tool calling with the Claude Chrome browser extension. The model generates structured tool calls for browser automation tasks.

Training Details

  • Base Model: Qwen/Qwen2.5-Coder-3B-Instruct
  • Method: LoRA fine-tuning on Apple Silicon (MLX)
  • Dataset: 1,782 MCP browser automation examples
  • Validation Loss: 0.077
  • Iterations: 1000
  • Naming Convention: {base}-{size}-ccmcp-{version}
    • ccmcp = Claude Chrome MCP

Files

  • adapters.safetensors - LoRA adapter weights
  • adapter_config.json - LoRA configuration
  • qwen-3b-ccmcp-v1-f16.gguf - GGUF F16 format for llama.cpp/Ollama
  • checkpoints/ - Training checkpoints

Usage

With MLX (Apple Silicon)

from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load(
    "mlx-community/Qwen2.5-Coder-3B-Instruct-4bit",
    adapter_path="pierretokns/qwen-3b-ccmcp-v1"
)

messages = [
    {"role": "system", "content": "You are a browser automation assistant with MCP tools."},
    {"role": "user", "content": "Go to google.com"}
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
sampler = make_sampler(temp=0.1)

response = generate(model, tokenizer, prompt=prompt, max_tokens=150, sampler=sampler)
print(response)
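The model emits tool calls as text, so downstream code must extract and parse them before dispatching to the MCP server. A minimal sketch, assuming the model emits a JSON object like {"tool": ..., "arguments": ...} (the exact output format and the parse_tool_call helper are illustrative assumptions, not part of this repo):

```python
import json

def parse_tool_call(response: str):
    """Extract the first JSON object from the model's response text.

    Assumes (hypothetically) the model emits a tool call such as
    {"tool": "navigate", "arguments": {"url": "https://google.com"}}.
    Returns the parsed dict, or None if no JSON object is found.
    """
    start = response.find("{")
    if start == -1:
        return None
    decoder = json.JSONDecoder()
    try:
        # raw_decode parses the leading JSON value and ignores trailing text
        call, _ = decoder.raw_decode(response[start:])
        return call
    except json.JSONDecodeError:
        return None

# Example: a response that mixes prose with a tool call
raw = 'Sure. {"tool": "navigate", "arguments": {"url": "https://google.com"}}'
call = parse_tool_call(raw)
```

Using json.JSONDecoder.raw_decode (rather than json.loads) tolerates trailing prose after the JSON object, which small models frequently produce even at low temperature.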

With Ollama

# Download GGUF from this repo
# Create Modelfile:
cat > Modelfile << 'EOF'
FROM ./qwen-3b-ccmcp-v1-f16.gguf
PARAMETER num_ctx 8192
PARAMETER temperature 0.1
SYSTEM "You are a browser automation assistant with MCP tools."
EOF

# Create and run
ollama create qwen-3b-ccmcp-v1 -f Modelfile
ollama run qwen-3b-ccmcp-v1 "Go to google.com"
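Once the Ollama model is created, it can also be driven over Ollama's HTTP API instead of the CLI. A hedged sketch using only the standard library; the build_request helper is illustrative, while the endpoint and payload fields follow Ollama's documented /api/generate interface:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "qwen-3b-ccmcp-v1") -> dict:
    """Build a payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a stream
        "options": {"temperature": 0.1, "num_ctx": 8192},
    }

def generate(prompt: str) -> str:
    """POST the request to a locally running Ollama server (not executed here)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_request("Go to google.com")
```

The options block mirrors the Modelfile parameters above, so API calls behave the same as ollama run.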

With Claude Code + Ollama

ANTHROPIC_BASE_URL=http://localhost:11434 \
ANTHROPIC_AUTH_TOKEN=ollama \
ANTHROPIC_API_KEY=ollama \
claude --model qwen-3b-ccmcp-v1

MCP Tools

The model was trained on 16 MCP browser automation tools: navigate, read_page, find, computer, form_input, get_page_text, screenshot, javascript_tool, tabs_context_mcp, tabs_create_mcp, gif_creator, upload_image, read_console_messages, read_network_requests, shortcuts_list, shortcuts_execute
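Since the model was trained on a fixed tool set, a generated call can be validated against that list before execution. A small sketch (the {"tool": ..., "arguments": ...} call shape is an assumption about the training format, not a documented schema):

```python
# The 16 MCP browser automation tools listed above
MCP_TOOLS = {
    "navigate", "read_page", "find", "computer", "form_input",
    "get_page_text", "screenshot", "javascript_tool", "tabs_context_mcp",
    "tabs_create_mcp", "gif_creator", "upload_image",
    "read_console_messages", "read_network_requests",
    "shortcuts_list", "shortcuts_execute",
}

def validate_call(call: dict) -> bool:
    """Check that a parsed tool call names a known tool and carries dict arguments."""
    return call.get("tool") in MCP_TOOLS and isinstance(call.get("arguments"), dict)

ok = validate_call({"tool": "navigate", "arguments": {"url": "https://google.com"}})
bad = validate_call({"tool": "delete_files", "arguments": {}})
```

Rejecting unknown tool names up front is a cheap guard against hallucinated calls from a 3B model.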

License

This model inherits the license from the base model. See Qwen/Qwen2.5-Coder-3B-Instruct for license details.

Full license: https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct/blob/main/LICENSE
