# Qwen3-4B-Instruct-2507 – Excel Fine-Tune (Q4_K_M)
A QLoRA fine-tuned version of Qwen/Qwen3-4B-Instruct-2507, specialized for Excel and spreadsheet tasks. Quantized to GGUF Q4_K_M for local deployment via Ollama or LM Studio.
## Model Details
| Property | Value |
|---|---|
| Base model | Qwen/Qwen3-4B-Instruct-2507 |
| Parameters | 4B |
| Fine-tune method | QLoRA (4-bit) |
| Dataset size | ~1,200 samples |
| Quantization | GGUF Q4_K_M |
| File size | 2.32 GB (34.5% reduction from 3.55 GB) |
| License | MIT |
## Evaluation Results
Manually evaluated on a 30-prompt benchmark spanning standard and advanced Excel tasks:
| Category | Prompts | Accuracy |
|---|---|---|
| Standard (formulas, lookups, data analysis, basic VBA) | 25 | 96% |
| Advanced (array formulas, VBA macros, financial modelling) | 5 | 80% |
| Overall | 30 | ~93% |
Evaluation was performed via manual testing. A response was marked correct if it produced a working, usable formula or macro without requiring correction.
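The overall score is just the weighted combination of the two categories; a quick arithmetic check using the counts from the table above:

```python
# Sanity-check the overall accuracy from the per-category results above.
standard_correct = round(25 * 0.96)   # 24 of 25 standard prompts
advanced_correct = round(5 * 0.80)    # 4 of 5 advanced prompts
overall = (standard_correct + advanced_correct) / 30
print(f"{overall:.1%}")  # 93.3%, reported as ~93%
```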
## Training Performance
| Step | Training Loss |
|---|---|
| 25 | 1.0435 |
| 50 | 0.7119 |
| 75 | 0.6409 |
| 100 | 0.4136 |
| 125 | 0.4048 |
| 150 | 0.3663 |
| 175 | 0.2248 |
| 200 | 0.2225 |
- Loss reduction: 78.7% (1.04 → 0.22) over 200 steps
- Framework: Unsloth (2x faster training pipeline)
- Trainable parameters: ~132K (QLoRA adapters only)
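The loss-reduction figure follows directly from the first and last rows of the table above:

```python
# Reproduce the reported loss-reduction percentage from the training table.
first_loss, last_loss = 1.0435, 0.2225  # loss at step 25 and step 200
reduction = (first_loss - last_loss) / first_loss
print(f"{reduction:.1%}")  # 78.7%
```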
## Intended Use
This model is fine-tuned to assist with Excel and spreadsheet workflows:
- Writing and explaining Excel formulas (`VLOOKUP`, `INDEX/MATCH`, `XLOOKUP`, array formulas, etc.)
- Debugging broken formulas
- Data analysis with Excel (pivot tables, conditional formatting, data validation)
- VBA macro generation and explanation
- Converting between Excel functions and Python/Pandas equivalents
- Step-by-step spreadsheet task guidance
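As an illustration of the Excel-to-Python conversions listed above, here is a minimal sketch of an exact-match `VLOOKUP` in plain Python (the function name and sample rows are hypothetical, not taken from the model's training data):

```python
def vlookup(key, table, col):
    """Exact-match VLOOKUP: find `key` in the first column of `table`,
    return the value from 1-based column `col` of the matching row."""
    for row in table:
        if row[0] == key:
            return row[col - 1]
    return "#N/A"  # Excel's not-found error value

rows = [("apple", 10, 1.50), ("banana", 25, 0.40)]
print(vlookup("apple", rows, 3))  # 1.5
```

In pandas, the same exact-match lookup is usually expressed as a `DataFrame.merge` between the lookup table and the query keys.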
## Quick Start
### Ollama
```shell
ollama pull Nikhil1581/qwen3-4b-instruct-2507.Q4_K_M-excel-finetuning-1.2kdataset
ollama run Nikhil1581/qwen3-4b-instruct-2507.Q4_K_M-excel-finetuning-1.2kdataset
```
### LM Studio
- Search for `Nikhil1581/qwen3-4b-instruct-2507` in the model browser
- Download the `Q4_K_M` variant
- Load and chat
### llama.cpp
```shell
./llama-cli -m qwen3-4b-excel-q4_k_m.gguf \
  --chat-template chatml \
  -p "You are an Excel expert assistant." \
  -i
```
## Example Prompts
**Formula help:**

```
User: How do I look up a value in column A and return the corresponding value from column C?
```

**Debugging:**

```
User: My VLOOKUP returns #N/A even though the value exists. Why?
```

**VBA:**

```
User: Write a VBA macro to loop through all sheets and highlight cells greater than 1000 in red.
```

**Data analysis:**

```
User: How do I calculate a running total in Excel without using a helper column?
```
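For the last prompt, the Python/Pandas-equivalent angle mentioned under Intended Use can be sketched with `itertools.accumulate` (the sample numbers are hypothetical):

```python
from itertools import accumulate

# Python analogue of an Excel running total (e.g. =SUM($B$2:B2) filled down).
sales = [100, 250, 75, 300]
running = list(accumulate(sales))
print(running)  # [100, 350, 425, 725]
```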
## Training Details
- Fine-tune type: QLoRA (4-bit quantized LoRA)
- LoRA rank: 16
- LoRA alpha: 32
- Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- Dataset: ~1,200 Excel/spreadsheet instruction-response pairs
- Sequence length: 1,024 tokens
- Epochs: 3
- Steps: 200
- Optimizer: paged_adamw_8bit
- Hardware: T4 GPU (Google Colab)
- Framework: HuggingFace TRL + PEFT + Unsloth
## Limitations
- Focused on Excel; general coding or math reasoning may be weaker than the base model
- Dataset is English-only
- Q4_K_M quantization may reduce precision on very complex multi-step formula chains
- Not tested on Google Sheets or LibreOffice Calc (though most formulas transfer)
- Evaluation was manual (25 standard + 5 advanced prompts), not a formal benchmark
## Recommended Inference Settings
```yaml
temperature: 0.3
top_p: 0.9
repeat_penalty: 1.1
num_predict: 512
```
Low temperature (0.3) is recommended to keep formula syntax accurate.
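With Ollama, these settings can be baked into a `Modelfile` so they apply on every run; a sketch using the model name from the Quick Start section (system prompt borrowed from the llama.cpp example):

```
FROM Nikhil1581/qwen3-4b-instruct-2507.Q4_K_M-excel-finetuning-1.2kdataset

PARAMETER temperature 0.3
PARAMETER top_p 0.9
PARAMETER repeat_penalty 1.1
PARAMETER num_predict 512

SYSTEM "You are an Excel expert assistant."
```

Build it with `ollama create excel-assistant -f Modelfile`.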
## Author
Nikhil1581 – HuggingFace Profile
## Acknowledgements
- Qwen Team for the base model
- Unsloth for the 2x faster training framework
- HuggingFace TRL for the fine-tuning pipeline
- llama.cpp for GGUF quantization