gemma-4-26B-A4B-it-Claude-Opus-Distill-IQ4_NL-GGUF
This repository contains a GGUF quantization derived from:
- Source model: TeichAI/gemma-4-26B-A4B-it-Claude-Opus-Distill
- Tool: llama.cpp
- Quantization type: IQ4_NL
Files
gemma-4-26B-A4B-it-Claude-Opus-Distill.iq4_nl.gguf
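The file above can be run with llama.cpp's command-line tool. A minimal sketch (the local path, prompt, and token limit are placeholders, not part of this repository):

```shell
# Interactive generation with the quantized model (path is a placeholder).
llama-cli \
  -m ./gemma-4-26B-A4B-it-Claude-Opus-Distill.iq4_nl.gguf \
  -p "Explain GGUF quantization in one sentence." \
  -n 128   # cap generation at 128 tokens
```

Any llama.cpp-compatible runtime that reads GGUF files should work the same way.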
Notes
This quantization was generated directly from the source model's weights, without using an importance matrix (imatrix).
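For comparison, an imatrix-guided IQ4_NL quantization can be produced with llama.cpp's bundled tools. A sketch, assuming the `llama-imatrix` and `llama-quantize` binaries and a calibration text file (all paths below are placeholders):

```shell
# 1. Collect activation statistics from a calibration corpus.
llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize to IQ4_NL, weighting by the collected importance matrix.
llama-quantize --imatrix imatrix.dat model-f16.gguf model.iq4_nl.gguf IQ4_NL
```

An imatrix typically improves low-bit quantization quality by preserving the weights that matter most for the calibration data; the file in this repository skips that step.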
Model tree for ponytang3/gemma-4-26B-A4B-Claude-Opus-Distill-Q4_NL-GGUF
- Base model: google/gemma-4-26B-A4B-it
- Finetuned from: unsloth/gemma-4-26B-A4B-it