Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF

GGUF quantized versions of ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM for use with llama.cpp, Ollama, LM Studio, and other GGUF-compatible tools.

Available Quantizations

| File | Quantization | Quality | Use Case |
|------|--------------|---------|----------|
| llama-3-8b-instruct_function_calling_xlam-q4_k_m.gguf | Q4_K_M | Good | Recommended: best balance of quality and size |
| llama-3-8b-instruct_function_calling_xlam-q5_k_m.gguf | Q5_K_M | Better | Higher quality, moderate size increase |
| llama-3-8b-instruct_function_calling_xlam-q8_0.gguf | Q8_0 | Best | Highest-quality quantization |

Download Specific Quantization

Using huggingface-cli

# Download Q4_K_M (recommended)
huggingface-cli download ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF llama-3-8b-instruct_function_calling_xlam-q4_k_m.gguf --local-dir ./models

# Download Q5_K_M (higher quality)
huggingface-cli download ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF llama-3-8b-instruct_function_calling_xlam-q5_k_m.gguf --local-dir ./models

# Download Q8_0 (best quality)
huggingface-cli download ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF llama-3-8b-instruct_function_calling_xlam-q8_0.gguf --local-dir ./models

# Download all quantizations
huggingface-cli download ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF --local-dir ./models
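The same downloads can be scripted with the `huggingface_hub` Python library. A minimal sketch: the helper maps a quantization tag to this repo's file-naming scheme, and the actual download call is left commented out because the files are several GB.

```python
# Repo and file names match the CLI examples above.
REPO_ID = "ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF"

def gguf_filename(quant: str) -> str:
    """e.g. 'Q4_K_M' -> 'llama-3-8b-instruct_function_calling_xlam-q4_k_m.gguf'"""
    return f"llama-3-8b-instruct_function_calling_xlam-{quant.lower()}.gguf"

print(gguf_filename("Q4_K_M"))

# Requires `pip install huggingface_hub`; downloads a multi-GB file:
# from huggingface_hub import hf_hub_download
# hf_hub_download(repo_id=REPO_ID, filename=gguf_filename("Q4_K_M"), local_dir="./models")
```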

Using wget

# Q4_K_M
wget https://huggingface.co/ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF/resolve/main/llama-3-8b-instruct_function_calling_xlam-q4_k_m.gguf

# Q5_K_M
wget https://huggingface.co/ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF/resolve/main/llama-3-8b-instruct_function_calling_xlam-q5_k_m.gguf

# Q8_0
wget https://huggingface.co/ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF/resolve/main/llama-3-8b-instruct_function_calling_xlam-q8_0.gguf

Usage

Ollama

# Pull specific quantization
ollama pull hf.co/ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF:Q4_K_M

# Or create from local file
cat > Modelfile << EOF
FROM ./llama-3-8b-instruct_function_calling_xlam-q4_k_m.gguf
EOF

ollama create llama-3-8b-instruct_function_calling_xlam -f Modelfile
ollama run llama-3-8b-instruct_function_calling_xlam
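Once created, the model can also be queried through Ollama's local REST API (default port 11434). A standard-library-only sketch; the network call is commented out so the snippet reads standalone without a running server.

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> bytes:
    # Payload for Ollama's /api/generate endpoint; stream=False returns a single JSON object.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

body = build_generate_request("llama-3-8b-instruct_function_calling_xlam", "What is 2+2?")

# With `ollama serve` running:
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body, headers={"Content-Type": "application/json"})
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```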

llama.cpp

# Run with llama-cli
./llama-cli -m llama-3-8b-instruct_function_calling_xlam-q4_k_m.gguf -p "Your prompt here" -n 256

# Run as server
./llama-server -m llama-3-8b-instruct_function_calling_xlam-q4_k_m.gguf --host 0.0.0.0 --port 8080
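llama-server exposes an OpenAI-compatible `/v1/chat/completions` endpoint, so the running server above can be queried like any OpenAI-style API. A sketch with the request call commented out:

```python
import json
import urllib.request

def chat_payload(prompt: str) -> bytes:
    # OpenAI-style chat request body for llama-server's /v1/chat/completions endpoint.
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }).encode()

body = chat_payload("What is machine learning?")

# With llama-server running on port 8080:
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=body, headers={"Content-Type": "application/json"})
# resp = json.loads(urllib.request.urlopen(req).read())
# print(resp["choices"][0]["message"]["content"])
```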

llama-cpp-python

from llama_cpp import Llama

llm = Llama(
    model_path="llama-3-8b-instruct_function_calling_xlam-q4_k_m.gguf",
    n_ctx=2048,
    n_gpu_layers=-1  # Use all GPU layers
)

output = llm(
    "What is machine learning?",
    max_tokens=256,
    temperature=0.7,
)
print(output['choices'][0]['text'])
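Since this model is fine-tuned for function calling, a chat-style request with a tool schema may be more representative than plain text completion. A sketch using llama-cpp-python's `create_chat_completion`: the `get_weather` tool is a hypothetical example, the exact tool-call output depends on the model's chat template, and the model load is guarded so the snippet reads standalone.

```python
import os

# Hypothetical tool schema for illustration; adapt to your own functions.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

MODEL = "llama-3-8b-instruct_function_calling_xlam-q4_k_m.gguf"
if os.path.exists(MODEL):
    from llama_cpp import Llama
    llm = Llama(model_path=MODEL, n_ctx=2048, n_gpu_layers=-1)
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        tools=tools,
    )
    print(out["choices"][0]["message"])
```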

LM Studio

  1. Download the desired GGUF file from this repository
  2. Open LM Studio and navigate to the Models tab
  3. Click "Add Model" and select the downloaded GGUF file
  4. Load the model and start chatting

GPT4All

  1. Download the Q4_K_M GGUF file
  2. Open GPT4All and go to Settings > Models
  3. Add the GGUF file path
  4. Select the model and start using it

Original Model

This is a quantized version of ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM. See the original model card for:

  • Training details and methodology
  • Dataset information
  • Performance metrics
  • Full usage examples with Transformers

Conversion Details

| Property | Value |
|----------|-------|
| Source Model | ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM |
| Conversion Date | 2026-04-09 |
| Quantizations | Q4_K_M, Q5_K_M, Q8_0 |
| Converter | llama.cpp |

License

Same license as the original model. See ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM for details.


Converted using the Slurm Model Trainer skill

Citation

If you use this model in your research or applications, please cite:

@misc{azarkhalili2026_llama_3_8b_instruct_function_calling_xlam_gguf,
    author = {Azarkhalili, Behrooz},
    title = {Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF},
    year = {2026},
    publisher = {Hugging Face},
    url = {https://huggingface.co/ermiaazarkhalili/Llama-3-8B-Instruct_Function_Calling_xLAM-GGUF}
}

To generate a citable DOI, click "Cite this model" on the model page.

Training Method

Trained using SFT (Supervised Fine-Tuning) on the Salesforce xLAM Function Calling 60K dataset.

Base (merged) model: Llama-3-8B-Instruct_Function_Calling_xLAM
