Llama-3.2-3b-instruct-q4_k_m-gguf

  • Developed by: ihumaunkabir
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This Llama [1] model was trained 2x faster with Unsloth and Hugging Face's TRL library.
[1] A. Grattafiori et al., The Llama 3 Herd of Models. 2024. [Online]. Available: https://arxiv.org/abs/2407.21783
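Since the model is distributed as a 4-bit GGUF file, it can be run locally with any GGUF-capable runtime. Below is a minimal sketch using llama-cpp-python (an assumption; llama.cpp or Ollama work equally well). The `.gguf` filename and the single-turn chat template are assumptions based on the repo name and the standard Llama 3 instruct format.

```python
def build_prompt(user_message: str,
                 system_message: str = "You are a helpful assistant.") -> str:
    """Format a single-turn prompt in the Llama 3 instruct chat template
    (template assumed from the upstream Llama 3 documentation)."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_message}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def run_local(model_path: str = "Llama-3.2-3b-instruct-q4_k_m.gguf") -> str:
    """Generate one completion from the local GGUF file.
    Requires `pip install llama-cpp-python` and the downloaded model file
    (filename assumed to match the repo name)."""
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=4096)
    out = llm(
        build_prompt("What is the GGUF format?"),
        max_tokens=128,
        stop=["<|eot_id|>"],  # stop at the end-of-turn token
    )
    return out["choices"][0]["text"]
```

Because generation is wrapped in `run_local`, the prompt builder can be reused with other runtimes that expect a raw prompt string.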

  • Format: GGUF
  • Model size: 3B params
  • Architecture: llama
  • Quantization: 4-bit (q4_k_m)

