# Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF
GGUF quantized versions of ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM for use with llama.cpp, Ollama, LM Studio, and other GGUF-compatible tools.
## Available Quantizations

| File | Quantization | Quality | Use Case |
|---|---|---|---|
| qwen2.5-7b-instruct_function_calling_xlam-q4_k_m.gguf | Q4_K_M | Good | Recommended: best balance of quality and size |
| qwen2.5-7b-instruct_function_calling_xlam-q5_k_m.gguf | Q5_K_M | Better | Higher quality, moderate size increase |
| qwen2.5-7b-instruct_function_calling_xlam-q8_0.gguf | Q8_0 | Best | Highest-quality quantization |
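If you pick a quantization programmatically, a small helper can map the quality tiers above to filenames. This is an illustrative sketch, not part of any API; the tier labels mirror the Quality column of the table:

```python
# Map a quality tier to the matching GGUF filename from the table above.
# The tier names ("good", "better", "best") are illustrative labels only.
QUANT_FILES = {
    "good": "qwen2.5-7b-instruct_function_calling_xlam-q4_k_m.gguf",    # Q4_K_M
    "better": "qwen2.5-7b-instruct_function_calling_xlam-q5_k_m.gguf",  # Q5_K_M
    "best": "qwen2.5-7b-instruct_function_calling_xlam-q8_0.gguf",      # Q8_0
}

def quant_filename(tier: str = "good") -> str:
    """Return the GGUF filename for a quality tier (defaults to Q4_K_M)."""
    try:
        return QUANT_FILES[tier.lower()]
    except KeyError:
        raise ValueError(f"unknown tier {tier!r}; choose from {sorted(QUANT_FILES)}")

print(quant_filename("best"))
# → qwen2.5-7b-instruct_function_calling_xlam-q8_0.gguf
```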
## Download a Specific Quantization

### Using huggingface-cli

```bash
# Download Q4_K_M (recommended)
huggingface-cli download ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF qwen2.5-7b-instruct_function_calling_xlam-q4_k_m.gguf --local-dir ./models

# Download Q5_K_M (higher quality)
huggingface-cli download ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF qwen2.5-7b-instruct_function_calling_xlam-q5_k_m.gguf --local-dir ./models

# Download Q8_0 (best quality)
huggingface-cli download ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF qwen2.5-7b-instruct_function_calling_xlam-q8_0.gguf --local-dir ./models

# Download all quantizations
huggingface-cli download ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF --local-dir ./models
```
### Using wget

```bash
# Q4_K_M
wget https://huggingface.co/ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF/resolve/main/qwen2.5-7b-instruct_function_calling_xlam-q4_k_m.gguf

# Q5_K_M
wget https://huggingface.co/ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF/resolve/main/qwen2.5-7b-instruct_function_calling_xlam-q5_k_m.gguf

# Q8_0
wget https://huggingface.co/ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF/resolve/main/qwen2.5-7b-instruct_function_calling_xlam-q8_0.gguf
```
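The wget commands all follow the same `resolve/main` URL pattern, so the link for any file in this repository can be built programmatically. A minimal Python sketch (the `gguf_url` helper is ours, not a Hugging Face API; for robust, resumable downloads the `huggingface_hub` library's `hf_hub_download` is the supported route):

```python
# Build a direct download URL for a file in this repository, mirroring the
# resolve/main pattern used by the wget commands above.
REPO_ID = "ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF"

def gguf_url(filename: str) -> str:
    """Return the Hugging Face resolve URL for a file in this repository."""
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

print(gguf_url("qwen2.5-7b-instruct_function_calling_xlam-q4_k_m.gguf"))
```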
## Usage

### Ollama

```bash
# Pull a specific quantization directly from Hugging Face
ollama pull hf.co/ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF:Q4_K_M

# Or create the model from a local file
cat > Modelfile << EOF
FROM ./qwen2.5-7b-instruct_function_calling_xlam-q4_k_m.gguf
EOF
ollama create qwen2.5-7b-instruct_function_calling_xlam -f Modelfile
ollama run qwen2.5-7b-instruct_function_calling_xlam
```
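The Modelfile can also pin stop tokens and sampling defaults. Qwen2.5 models use the ChatML prompt format with `<|im_end|>` as the end-of-turn marker; Ollama normally picks the template up from the GGUF metadata, but setting the stop token explicitly is a common safeguard. A sketch (the parameter values are illustrative, not tuned recommendations):

```shell
# Write a Modelfile with an explicit stop token and illustrative defaults.
cat > Modelfile << 'EOF'
FROM ./qwen2.5-7b-instruct_function_calling_xlam-q4_k_m.gguf
PARAMETER stop "<|im_end|>"
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
EOF
```

`ollama create` and `ollama run` then work exactly as shown above.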
### llama.cpp

```bash
# Run interactively with llama-cli
./llama-cli -m qwen2.5-7b-instruct_function_calling_xlam-q4_k_m.gguf -p "Your prompt here" -n 256

# Run as a server
./llama-server -m qwen2.5-7b-instruct_function_calling_xlam-q4_k_m.gguf --host 0.0.0.0 --port 8080
```
### llama-cpp-python

```python
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-7b-instruct_function_calling_xlam-q4_k_m.gguf",
    n_ctx=2048,
    n_gpu_layers=-1,  # offload all layers to the GPU
)

output = llm(
    "What is machine learning?",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```
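Since this fine-tune targets function calling, you will typically pass a tool schema rather than a free-form prompt. Recent versions of llama-cpp-python accept OpenAI-style `messages` and `tools` arguments to `create_chat_completion`. A sketch of such a payload (the `get_weather` tool and its fields are hypothetical examples, not part of the model):

```python
import json

# Hypothetical tool schema in the OpenAI function-calling format; the
# get_weather name and its parameters are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [
    {"role": "system", "content": "You are a helpful assistant that can call tools."},
    {"role": "user", "content": "What's the weather in Paris?"},
]

# With the model loaded as `llm` above, a call would look like:
#   response = llm.create_chat_completion(messages=messages, tools=tools)
# Here we just check that the payload serializes cleanly.
payload = {"messages": messages, "tools": tools}
print(json.dumps(payload)[:60])
```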
### LM Studio
- Download the desired GGUF file from this repository
- Open LM Studio and navigate to the Models tab
- Click "Add Model" and select the downloaded GGUF file
- Load the model and start chatting
### GPT4All

- Download the Q4_K_M GGUF file
- Open GPT4All and go to Settings > Models
- Add the GGUF file path
- Select the model and start using it
## Original Model
This is a quantized version of ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM. See the original model card for:
- Training details and methodology
- Dataset information
- Performance metrics
- Full usage examples with Transformers
## Conversion Details
| Property | Value |
|---|---|
| Source Model | ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM |
| Conversion Date | 2026-04-09 |
| Quantizations | Q4_K_M, Q5_K_M, Q8_0 |
| Converter | llama.cpp |
## License

Same license as the original model. See ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM for details.

*Converted using the Slurm Model Trainer skill.*
## Citation

If you use this model in your research or applications, please cite:

```bibtex
@misc{azarkhalili2026_qwen2_5_7b_instruct_function_calling_xlam_gguf,
  author    = {Azarkhalili, Behrooz},
  title     = {Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/ermiaazarkhalili/Qwen2.5-7B-Instruct_Function_Calling_xLAM-GGUF}
}
```
To generate a citable DOI, click "Cite this model" on the model page.
## Training Method

Trained with SFT (supervised fine-tuning) on the Salesforce xLAM Function Calling 60K dataset.

Base (merged) model: Qwen2.5-7B-Instruct_Function_Calling_xLAM
## Acknowledgments
- Hugging Face TRL Team for the training library
- llama.cpp for the GGUF quantization format
- Compute Canada / DRAC for HPC resources
- Base model developers for making their weights openly available