Gemma 4 - 31B x Claude Opus 4.6 (GGUF)

This is a collection of GGUF-formatted models for TeichAI/gemma-4-31B-it-Claude-Opus-Distill.

The base model pairs Google's open-weights Gemma 4 architecture with reasoning behavior distilled from Claude Opus 4.6, fine-tuned on high-effort reasoning datasets.

Stop Sequences

When running this model, ensure you configure the following stop sequences in your inference engine to prevent endless generation:

  • <end_of_turn>
  • <eos>
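For example, when serving the model through a llama.cpp server exposing the OpenAI-compatible chat API, the stop sequences can be passed in the request body. The helper below is a minimal sketch (the prompt and parameter values are illustrative, not part of this repository):

```python
import json

# Stop sequences this model needs, per the list above.
STOP_SEQUENCES = ["<end_of_turn>", "<eos>"]

def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build a chat-completion request body for an OpenAI-compatible server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        # Without these, generation may run on past the model's turn marker.
        "stop": STOP_SEQUENCES,
    }

payload = build_chat_request("Summarize the GGUF format in one paragraph.")
print(json.dumps(payload, indent=2))
```

The same `stop` list applies whatever the inference engine: most engines (llama.cpp, LM Studio, Ollama-style front ends) accept an equivalent stop-sequence setting.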

Provided Files

This repository includes the following quantizations to suit various memory and compute requirements, along with the required multimodal projector files:

Model Files:

  • gemma-4-31B-it-Claude-Opus-Distill.Q3_K_M.gguf
  • gemma-4-31B-it-Claude-Opus-Distill.Q4_K_M.gguf
  • gemma-4-31B-it-Claude-Opus-Distill.Q5_K_M.gguf
  • gemma-4-31B-it-Claude-Opus-Distill.Q6_K.gguf
  • gemma-4-31B-it-Claude-Opus-Distill.Q8_0.gguf

Projector Files:

  • mmproj-BF16.gguf
  • mmproj-F16.gguf
  • mmproj-F32.gguf
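As a rough guide to choosing between these files, on-disk size can be estimated from the parameter count and the effective bits per weight of each quantization. The bpw values below are ballpark assumptions, not exact llama.cpp figures; real GGUF files also carry embeddings and metadata, so actual sizes will differ somewhat:

```python
# Back-of-envelope GGUF size estimate: params x effective bits-per-weight / 8.
# The bpw values are approximations for illustration only.
APPROX_BPW = {"Q3_K_M": 3.9, "Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5}

def approx_size_gb(n_params: float, quant: str) -> float:
    """Estimate file size in GB for a given quantization level."""
    return round(n_params * APPROX_BPW[quant] / 8 / 1e9, 1)

for quant in APPROX_BPW:
    print(f"{quant}: ~{approx_size_gb(31e9, quant)} GB")
```

As a rule of thumb, pick the largest quantization whose estimated size (plus a few GB for context/KV cache) fits in your available VRAM or RAM.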

Core Capabilities

Fine-tuned on high-effort reasoning data distilled from Claude Opus interactions, this model excels in:

  • Coding: Advanced code generation, debugging, and software architecture planning.
  • Science: Deep scientific reasoning, hypothesis evaluation, and analytical problem-solving.
  • Deep Research: Navigating complex, multi-step research queries and synthesizing vast amounts of information.
  • General Purpose: Highly capable instruction-following for everyday tasks requiring high logical coherence.

Acknowledgements & Credits

  • TeichAI: For fine-tuning and developing the original model architecture and datasets.
  • Google: For providing the foundational Gemma 4 - 31B open weights model.
  • Unsloth: For the memory- and compute-optimized fine-tuning framework used to train the base model.
  • Crownelius: For contributing structured Opus reasoning datasets.

Original Model: TeichAI/gemma-4-31B-it-Claude-Opus-Distill
