JKL-Luau-Gemma-4-31B-it-Claude-Opus-Distill-GGUF

This repository contains GGUF quantizations of JKL-Luau-Gemma-4-31B-it-Claude-Opus-Distill.

Model Overview

This model is a specialized fine-tune of Gemma-4-31B-it focused on the Luau programming language (native to Roblox). It was developed using Quantization-Aware Distillation (QAD) and distilled from high-quality coding trajectories generated by Claude Opus, targeting strong zero-shot performance on Luau game development, bug resolution, and architectural design.

Key Features

  • Luau Specialization: Deep contextual understanding of Roblox Client/Server architecture, RemoteEvents, Task library semantics, and Modern UI (e.g., GuiObject layout hierarchies).
  • High-Quality Distillation: Distilled entirely from Claude Opus trajectories, inheriting advanced chain-of-thought and step-by-step reasoning structures.
  • GGUF Ready: Provided in standard GGUF format for optimal CPU/GPU offloading using llama.cpp, text-generation-webui, and Ollama.

Included Files

  • JKL-Luau-Gemma-4-31B-it-Claude-Opus-Distill.Q8_0.gguf (8-bit quantization; near-original quality at a larger file size).
  • JKL-Luau-Gemma-4-31B-it-Claude-Opus-Distill.Q4_K_M.gguf (4-bit quantization suitable for machines with limited VRAM).

Usage with llama.cpp

You can run the model directly inside a local environment using llama.cpp:

# Example using the Q8_0 quant (newer llama.cpp builds name this binary llama-cli)
./main -m JKL-Luau-Gemma-4-31B-it-Claude-Opus-Distill.Q8_0.gguf \
  --color \
  -c 32768 \
  --temp 0.6 \
  -p "<bos><start_of_turn>user\nWrite a robust Luau character sprinting script with Server/Client validation.<end_of_turn>\n<start_of_turn>model\n"
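The turn markers in the `-p` prompt above can also be assembled programmatically before handing the string to llama.cpp or any raw-completion API. A minimal sketch (the helper name `format_gemma_prompt` is my own; the token layout simply mirrors the example command):

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma-style turn markers, ready to pass
    to llama.cpp via -p (or to a completion endpoint as a raw prompt)."""
    return (
        "<bos><start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt(
    "Write a robust Luau character sprinting script with Server/Client validation."
)
print(prompt)
```

Keeping the template in one place avoids subtle mismatches (a missing newline or `<end_of_turn>` token can noticeably degrade instruction following).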

Note: Ensure your context window (-c) is set appropriately for your available VRAM, as this model supports Gemma 4's extended context length.
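For Ollama (listed under Key Features), a minimal Modelfile sketch could look like the following. The parameter values are illustrative, and the template mirrors the turn markers from the llama.cpp example above:

```
FROM ./JKL-Luau-Gemma-4-31B-it-Claude-Opus-Distill.Q4_K_M.gguf
PARAMETER temperature 0.6
PARAMETER num_ctx 32768
TEMPLATE """<start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
"""
```

You would then register and run it with `ollama create <name> -f Modelfile` followed by `ollama run <name>`.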

Intended Use & Limitations

  • Intended Use: AI assistance for Roblox game development, script generation, syntax validation, and modular architecture planning.
  • Limitations: Although focused on Luau, the model may occasionally hallucinate standard Lua 5.1/5.4 functions that are disabled or heavily modified in Roblox's sandboxed environment (e.g., loadstring without the appropriate flags, or certain os functions).

Training

  • Hardware: 1x NVIDIA RTX Pro 6000 Blackwell 96GB Workstation Edition.
  • Duration: approximately 20 hours.
  • Method: LoRA fine-tuning (rank 32) using the Unsloth library.

License

This model inherits the license of the base model, Google Gemma-4, and follows responsible AI distillation guidelines from Anthropic.
