# Qwen3.5-2B Excel Assistant (GGUF Q4_K_M)

A lightweight Qwen3.5-2B model fine-tuned on ~2,000 Excel instruction-response pairs and quantized to GGUF Q4_K_M, shrinking from 2.7 GB to 1.27 GB (52.96% smaller) for fast, fully local inference via Ollama or llama.cpp. A drop-in alternative to the larger 4B variant when running on resource-constrained hardware.

- Companion model: Qwen3-4B Excel FT (higher accuracy)
- Training dataset: Nikhil1581/excel_dataset
## Model Details
| Property | Value |
|---|---|
| Base model | Qwen3.5 2B Instruct |
| Fine-tuning method | QLoRA (via Unsloth) |
| Training data | ~2,000 Alpaca-format Excel instruction pairs |
| Quantization | GGUF Q4_K_M |
| Task | Excel / spreadsheet instruction following |
| Language | English |
| License | MIT |
## What It Does
This model is specialized for Excel and spreadsheet tasks, including:
- Writing and explaining Excel formulas (`VLOOKUP`, `INDEX`/`MATCH`, `SUMIFS`, array formulas)
- Data analysis guidance (pivot tables, conditional formatting, data validation)
- VBA macro writing and debugging
- Financial modelling and worksheet structuring
- Step-by-step spreadsheet troubleshooting
## Quick Start

### Ollama (recommended)

```bash
# Pull and run directly
ollama run hf.co/Nikhil1581/qwen3.5-2b.Q4_K_M-excel_fine_tuning
```
### llama.cpp

```bash
# Download the model file
wget https://huggingface.co/Nikhil1581/qwen3.5-2b.Q4_K_M-excel_fine_tuning/resolve/main/qwen3.5-2b-excel.Q4_K_M.gguf

# Run inference
./llama-cli -m qwen3.5-2b-excel.Q4_K_M.gguf \
  --chat-template qwen \
  -p "How do I use VLOOKUP to find a value across two sheets?" \
  -n 512
```
### Python (llama-cpp-python)

```python
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3.5-2b-excel.Q4_K_M.gguf",
    n_ctx=2048,
    chat_format="qwen",
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": "You are an expert Excel assistant. Provide clear, accurate spreadsheet help."
        },
        {
            "role": "user",
            "content": "Write a formula to calculate the running total of sales in column B, starting from B2."
        }
    ]
)

print(response["choices"][0]["message"]["content"])
```
## Example Prompts & Responses

### Formula writing

User: How do I sum only the cells in column C where column A equals "Sales" and column B is greater than 100?

Model: Use `SUMIFS`:

```
=SUMIFS(C:C, A:A, "Sales", B:B, ">"&100)
```

This sums all values in column C where column A contains exactly "Sales" AND column B is greater than 100. The `&` concatenates the operator with the number.
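For readers who want to sanity-check the criteria logic, the same conditional sum can be sketched in plain Python over hypothetical sample rows (the data below is invented for illustration):

```python
# Hypothetical rows mirroring columns A, B, and C of the example.
rows = [
    {"A": "Sales", "B": 150, "C": 10},
    {"A": "Sales", "B": 80,  "C": 20},  # excluded: B is not > 100
    {"A": "HR",    "B": 200, "C": 30},  # excluded: A is not "Sales"
]

# SUMIFS(C:C, A:A, "Sales", B:B, ">100") sums C where BOTH criteria hold.
total = sum(r["C"] for r in rows if r["A"] == "Sales" and r["B"] > 100)
print(total)  # 10
```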
### VBA

User: Write a VBA macro to highlight all cells in a selection that contain negative values in red.

Model:

```vba
Sub HighlightNegatives()
    Dim cell As Range
    For Each cell In Selection
        If IsNumeric(cell.Value) And cell.Value < 0 Then
            cell.Interior.Color = RGB(255, 0, 0)
        End If
    Next cell
End Sub
```
### Troubleshooting

User: My VLOOKUP returns #N/A even though the value exists. What's wrong?

Model: The most common causes are:

1. Leading/trailing spaces: use `TRIM(lookup_value)` to clean the search term
2. Number stored as text: format the column consistently
3. Exact match vs. approximate match: ensure the 4th argument is FALSE (or 0) for an exact match
4. The lookup column isn't the leftmost column in your table_array

Try: `=VLOOKUP(TRIM(A2), Sheet2!$A:$C, 2, FALSE)`
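The first failure mode (stray whitespace) is easy to reproduce outside Excel. A plain-Python sketch with a hypothetical lookup table shows why the match fails and how a TRIM-style cleanup fixes it:

```python
# Hypothetical lookup table (product -> price), standing in for table_array.
table = {"Widget": 9.99, "Gadget": 4.50}

lookup_value = "Widget "         # note the trailing space
print(lookup_value in table)     # False -> Excel would show #N/A

cleaned = lookup_value.strip()   # the equivalent of TRIM(lookup_value)
print(table[cleaned])            # 9.99
```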
## Evaluation Results

Evaluated on a held-out set of 751 Excel instruction prompts spanning formulas, VBA, financial modelling, and spreadsheet concepts. Responses were pass/fail graded by keyword matching against expected outputs.
| Model | Correct | Total | Accuracy |
|---|---|---|---|
| Qwen3.5-2B base (untuned) | 167 | 751 | 22.2% |
| Qwen3.5-2B Excel FT (this model) | 621 | 751 | 82.7% |
| Improvement | +454 | – | +60.5 pp |
Fine-tuning delivers a 3.7× accuracy gain over the base model on domain-specific Excel tasks.
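The exact grading script isn't published here; the sketch below shows one plausible reading of "pass/fail by keyword matching" (the grader, sample responses, and keywords are all assumptions for illustration):

```python
def grade(response: str, expected_keywords: list[str]) -> bool:
    """Pass if every expected keyword appears in the response (case-insensitive).
    This matching rule is an assumption; the real grader may differ."""
    text = response.lower()
    return all(kw.lower() in text for kw in expected_keywords)

# Two hypothetical (response, expected-keywords) pairs.
samples = [
    ('Use =SUMIFS(C:C, A:A, "Sales")', ["SUMIFS"]),   # passes
    ("Try a pivot table",              ["VLOOKUP"]),  # fails
]
correct = sum(grade(resp, kws) for resp, kws in samples)
print(correct)  # prints 1 (of 2 sample responses)

# This model's reported held-out result:
accuracy = 621 / 751
print(f"{accuracy:.1%}")  # 82.7%
```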
## Training Details
| Property | Value |
|---|---|
| Framework | Unsloth + HuggingFace Transformers |
| Method | QLoRA (4-bit quantized LoRA) |
| Training steps | 200 |
| Dataset size | ~2,000 Alpaca-format instruction pairs |
| Starting loss | 1.1366 (step 25) |
| Final loss | 0.2526 (step 200) |
| Total loss reduction | 77.8% |
| Pre-quantization size | 2.7 GB |
| Post-quantization size | 1.27 GB |
| Size reduction | 52.96% (Q4_K_M) |
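The reduction figures reported above follow directly from the table values; a quick arithmetic check:

```python
# Loss reduction from step 25 to step 200 (values from the training table).
start_loss, final_loss = 1.1366, 0.2526
loss_reduction = (start_loss - final_loss) / start_loss * 100
print(f"{loss_reduction:.1f}%")   # 77.8%

# Size reduction from Q4_K_M quantization.
pre_gb, post_gb = 2.7, 1.27
size_reduction = (pre_gb - post_gb) / pre_gb * 100
print(f"{size_reduction:.2f}%")   # 52.96%
```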
### Training Loss Curve

Training loss declined steadily from 1.1366 at step 25 to 0.2526 at step 200; the step-by-step values are tabulated below.
### Step-by-Step Loss

| Step | Train Loss | Δ from previous |
|---|---|---|
| 25 | 1.1366 | – |
| 50 | 0.8256 | −0.3110 (−27.4%) |
| 75 | 0.7333 | −0.0923 (−11.2%) |
| 100 | 0.4733 | −0.2600 (−35.5%) |
| 125 | 0.4592 | −0.0141 (−3.0%) |
| 150 | 0.4402 | −0.0190 (−4.1%) |
| 175 | 0.2561 | −0.1841 (−41.8%) |
| 200 | 0.2526 | −0.0035 (−1.4%) |
The sharpest drops occur at steps 75–100 (−35.5%) and steps 150–175 (−41.8%), indicating the model learned core formula syntax early and then refined nuanced task understanding in the final quarter of training. Loss plateaus briefly between steps 100–150 before a strong final descent, consistent with LoRA adapters settling into domain-specific knowledge.
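The Δ column of the step-by-step table can be recomputed directly from the logged loss values:

```python
# Logged training losses (step -> loss), taken from the table above.
losses = {25: 1.1366, 50: 0.8256, 75: 0.7333, 100: 0.4733,
          125: 0.4592, 150: 0.4402, 175: 0.2561, 200: 0.2526}

steps = sorted(losses)
for prev, cur in zip(steps, steps[1:]):
    delta = losses[cur] - losses[prev]          # absolute change
    pct = delta / losses[prev] * 100            # change relative to previous step
    print(f"step {cur}: {delta:+.4f} ({pct:+.1f}%)")
# first line: step 50: -0.3110 (-27.4%)
```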
For higher accuracy on complex tasks (array formulas, advanced VBA, financial modelling), use the 4B variant linked above.
## Hardware Requirements
| Setup | Requirement |
|---|---|
| Model file size | 1.27 GB (down from 2.7 GB pre-quantization) |
| CPU only | 4 GB RAM |
| Recommended | 8 GB RAM / 4 GB VRAM |
## Limitations

- Optimized for English-language Excel tasks; non-English function names (e.g., German `SVERWEIS`) may not perform as well
- Complex multi-sheet workbook reasoning may require the larger 4B model
- Not intended for general-purpose chat; best results come from Excel-specific prompts
- Model may occasionally produce plausible-looking but incorrect formulas; always verify in your spreadsheet
## Related Resources

- Qwen3-4B Excel FT (larger, higher accuracy)
- excel_dataset (training data)
- ARIA: AI Desktop Assistant (GitHub)
## Citation

If you use this model in research or a project, a mention is appreciated:

> Nikhil Bisht (2026). Qwen3.5-2B Excel Fine-Tune (GGUF Q4_K_M). Hugging Face. https://huggingface.co/Nikhil1581/qwen3.5-2b.Q4_K_M-excel_fine_tuning

Built by @Nikhil1581 · MIT License