functiongemma-270m-it-unsloth-bnb-4bit-ft-QAT-GGUF
This model was fine-tuned and converted to GGUF format using Unsloth.
Example usage:
- For text-only LLMs:
  llama-cli -hf mingxilei/functiongemma-270m-it-unsloth-bnb-4bit-ft-QAT-GGUF --jinja
- For multimodal models:
  llama-mtmd-cli -hf mingxilei/functiongemma-270m-it-unsloth-bnb-4bit-ft-QAT-GGUF --jinja
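The same repository can also be served over an OpenAI-compatible HTTP API using llama.cpp's llama-server; a minimal sketch (the port and prompt below are illustrative):

```shell
# Serve the GGUF locally; -hf fetches it from the Hugging Face repo.
llama-server -hf mingxilei/functiongemma-270m-it-unsloth-bnb-4bit-ft-QAT-GGUF --jinja --port 8080

# From another shell, query the OpenAI-compatible endpoint (example prompt).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```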
Available model files:
- functiongemma-270m-it.F16.gguf
- functiongemma-270m-it.Q6_K.gguf
- functiongemma-270m-it.Q5_K_M.gguf
- functiongemma-270m-it.Q8_0.gguf
- functiongemma-270m-it.Q4_K_M.gguf
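To run fully offline, a single quantization can be downloaded first, for example with huggingface-cli (Q4_K_M is used here purely as an example):

```shell
# Fetch one quantization from the repo (example: Q4_K_M).
huggingface-cli download mingxilei/functiongemma-270m-it-unsloth-bnb-4bit-ft-QAT-GGUF \
  functiongemma-270m-it.Q4_K_M.gguf --local-dir .

# Run against the local file with -m instead of -hf.
llama-cli -m functiongemma-270m-it.Q4_K_M.gguf --jinja
```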
Note
The model's BOS (beginning-of-sequence) token behavior was adjusted for GGUF compatibility.
This model was trained 2x faster with Unsloth.
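To check the BOS configuration yourself, the GGUF metadata can be inspected with the gguf-dump utility from the gguf Python package (a sketch, assuming the Q4_K_M file has been downloaded locally):

```shell
pip install gguf
# tokenizer.ggml.bos_token_id and tokenizer.ggml.add_bos_token show
# how the BOS token is configured in this file.
gguf-dump functiongemma-270m-it.Q4_K_M.gguf | grep -i bos
```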
