# 🤖 MCP-Agent-1.7B

> The first open-source small language model fine-tuned to natively speak the **Model Context Protocol (MCP)**.

## What Is This?

MCP-Agent-1.7B is [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) fine-tuned with LoRA to:

- 🔧 **Call tools** using the MCP protocol (JSON-RPC format; see the appendix at the end of this README for a sketch of the wire format)
- 🔗 **Plan multi-step tool chains** with DAG dependencies
- ❓ **Ask clarifying questions** when information is missing (instead of hallucinating)
- 🛡️ **Refuse dangerous requests** (e.g., "delete all files")

## Training

| Detail | Value |
|--------|-------|
| Base model | [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) (2B params) |
| Method | LoRA SFT (rank = 16, all-linear) |
| Dataset | [muhammadtlha944/mcp-agent-training-data](https://huggingface.co/datasets/muhammadtlha944/mcp-agent-training-data) (16.5K examples) |
| Epochs | 3 |
| Learning rate | 2e-4 (cosine schedule) |
| Effective batch size | 16 |
| Precision | fp16 |
| Cost | **$0** (trained on free Kaggle/Colab GPUs) |

A minimal code sketch of this configuration appears in the appendix at the end of this README.

## 🚀 Train It Yourself (FREE)

### Option 1: Kaggle (Recommended — 30 free GPU hrs/week)

1. Go to [kaggle.com/code](https://kaggle.com/code) → **New Notebook**
2. **Settings** → Accelerator → **GPU T4×2** → Internet → **ON**
3. Copy-paste the code from [`MCP_Agent_1_7B_Kaggle.ipynb`](./MCP_Agent_1_7B_Kaggle.ipynb)
4. Run all cells → wait ~2 hours → the model auto-pushes to the Hub

### Option 2: Google Colab (Backup — free T4)

1. Open [colab.research.google.com](https://colab.research.google.com)
2. **Runtime** → Change runtime type → **T4 GPU**
3. Copy-paste from [`MCP_Agent_1_7B_Training.ipynb`](./MCP_Agent_1_7B_Training.ipynb)
4. Run all cells → wait ~2 hours

## Quick Start

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="muhammadtlha944/MCP-Agent-1.7B")

messages = [
    {"role": "system", "content": "You are an MCP agent with tools: github_search, read_file, shell_exec."},
    {"role": "user", "content": "Find all Python files that import pandas"},
]

# The pipeline applies the model's chat template to the message list.
response = pipe(messages, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
```

To actually execute what the model asks for, you must parse its output and dispatch to real tools; a sketch of that glue code is in the appendix below.

## Research Background

This project is informed by:

- **STAR Framework** ([arXiv:2602.03022](https://arxiv.org/abs/2602.03022)) — reports that Qwen3-1.7B beats Llama-3.1-8B at function calling
- **TinyAgent** ([arXiv:2409.00608](https://arxiv.org/abs/2409.00608)) — shows that a 1.1B model can match GPT-4 on focused tool calling
- **LoRA Without Regret** — argues that LoRA applied to all linear layers matches full fine-tuning quality

## License

Apache 2.0 (same as the base model)

## Author

Built by [Muhammad Talha](https://huggingface.co/muhammadtlha944) — learning ML by building real projects.
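
## Appendix: Worked Sketches

The examples below are illustrative sketches, not excerpts from the training data or the notebooks.

### What an MCP tool call looks like

MCP carries tool calls as JSON-RPC 2.0 messages with the method `tools/call`, whose params hold the tool name and its arguments. The message below is a hypothetical request the model might emit for the Quick Start prompt; the `github_search` name comes from the system prompt above, while the `query` string and the `id` are made up for illustration, and the exact fields the fine-tuned model emits depend on the training data.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "github_search",
    "arguments": { "query": "import pandas language:python" }
  }
}
```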
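
### Parsing and dispatching a tool call

A minimal sketch of the glue between the model and your tools: parse the JSON-RPC request the model emits, look up the named tool, run it, and wrap the output as a JSON-RPC response to feed back into the conversation. The `github_search` stub and the response shape are assumptions for illustration; a real deployment would forward the call to an actual MCP server.

```python
import json


# Hypothetical stand-in for a real tool; a production agent would
# forward this call to an MCP server instead of running it locally.
def github_search(query: str) -> list[str]:
    return [f"stub result for {query!r}"]


TOOLS = {"github_search": github_search}


def dispatch(raw: str) -> str:
    """Run one JSON-RPC 2.0 `tools/call` request emitted by the model."""
    call = json.loads(raw)
    if call.get("method") != "tools/call":
        raise ValueError(f"unexpected method: {call.get('method')!r}")
    name = call["params"]["name"]
    arguments = call["params"].get("arguments", {})
    result = TOOLS[name](**arguments)
    # Wrap the tool output as a JSON-RPC response for the next model turn.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call.get("id"),
        "result": {"content": [{"type": "text", "text": json.dumps(result)}]},
    })
```

Feed the returned string back to the model as the next message (as a `"tool"` or `"user"` turn, depending on how the training data formats tool results) and generate again.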
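
### Reproducing the training configuration

A minimal sketch of the hyperparameters from the Training table using `trl` and `peft`, assuming a recent install of both. `lora_alpha=32` and the 4 × 4 batch/accumulation split are assumptions, since the card only states rank 16, all-linear targets, and an effective batch size of 16.

```python
# pip install transformers trl peft datasets
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("muhammadtlha944/mcp-agent-training-data", split="train")

peft_config = LoraConfig(
    r=16,                         # rank 16, as in the table
    lora_alpha=32,                # assumption: alpha is not stated in the card
    target_modules="all-linear",  # LoRA adapters on every linear layer
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="mcp-agent-1.7b",
    num_train_epochs=3,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=4,  # assumption: 4 x grad-accum 4 = effective 16
    gradient_accumulation_steps=4,
    fp16=True,
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-1.7B",
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```

The exact recipe (prompt formatting, packing, eval split, Hub push) lives in the notebooks linked above.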