
# Void-Citrus-L3.3-70B (IQ3_XXS GGUF)

This is a custom GGUF quantization of Void-Citrus-L3.3-70B, engineered specifically to fit high-performance 70B roleplay into strictly limited VRAM environments (such as dual Tesla T4s, or a single 3090 with partial CPU offload) without sacrificing the character's voice.
## The Quantization Difference
This is not a standard "click-and-convert" GGUF. It was built using a specialized pipeline to retain maximum coherence at 3-bit compression:
- **Custom Anime RP Calibration:** Unlike standard quants that use generic Wikipedia text (`wiki.train.raw`) to calculate importance, this model's importance matrix (imatrix) was computed on custom anime roleplay data. The quantization engine prioritized the weights responsible for dialogue, *actions*, and narrative formatting, so the "soul" of the character remains intact.
- **BF16 Intermediate Source:** The conversion bypassed the standard FP16 route. The source model was first converted to BF16 (bfloat16) to preserve a higher dynamic range before the final compression step, reducing quantization error.
- **Surgical Size Fit:** The `IQ3_XXS` format was chosen to land at roughly 26.8 GB. This allows the model to fit comfortably on 32 GB VRAM setups (e.g., dual T4, dual 4060 Ti 16 GB) with enough room left over for a 16k+ context window using a Q8 KV cache.
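The pipeline described above maps onto llama.cpp's stock tooling. A minimal sketch, assuming the llama.cpp binaries are built locally and that `rp_calibration.txt` (a hypothetical filename) holds the anime RP calibration text:

```shell
# 1. Convert the HF model to a BF16 GGUF to preserve dynamic range
#    before quantization (rather than the default FP16 route).
python convert_hf_to_gguf.py ./Void-Citrus-L3.3-70B \
    --outtype bf16 --outfile void-citrus-bf16.gguf

# 2. Compute the importance matrix against the custom RP calibration
#    text instead of generic wiki.train.raw.
./llama-imatrix -m void-citrus-bf16.gguf \
    -f rp_calibration.txt -o void-citrus.imatrix

# 3. Quantize to IQ3_XXS, weighting per-tensor error by the imatrix.
./llama-quantize --imatrix void-citrus.imatrix \
    void-citrus-bf16.gguf void-citrus-iq3_xxs-imat.gguf IQ3_XXS
```

The file names here are illustrative; the tool names (`convert_hf_to_gguf.py`, `llama-imatrix`, `llama-quantize`) are llama.cpp's standard quantization workflow.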
## Recommended Settings (Dual GPU)
To run this on a dual 16 GB GPU setup (32 GB total) without running out of memory, split the model across GPUs by rows and quantize the KV cache to Q8:
```shell
./llama-cli \
  -m void-citrus-l3.3-70b-iq3_xxs-imat.gguf \
  -p "You are a helpful assistant..." \
  -n 512 \
  -c 16384 \
  -ngl 99 \
  -sm row \
  -fa \
  -ctk q8_0 \
  -ctv q8_0
```

(`-fa` enables flash attention, which llama.cpp requires for a quantized V cache on most builds.)
*Note:* On Windows, reduce the context to `-c 12288` to account for WDDM VRAM overhead.
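The 16k budget can be sanity-checked with back-of-envelope KV-cache math. A sketch assuming Llama-3.3-70B's published geometry (80 layers, 8 KV heads, head dim 128) and q8_0's layout of 34 bytes per 32-value block (32 int8 weights plus an fp16 scale):

```shell
# Rough q8_0 KV-cache size at 16k context.
# Assumed architecture numbers: 80 layers, 8 KV heads, head dim 128.
n_layer=80; n_kv_heads=8; head_dim=128; ctx=16384
elems_per_tok=$(( 2 * n_layer * n_kv_heads * head_dim ))  # K and V planes
total_bytes=$(( elems_per_tok * 34 / 32 * ctx ))          # q8_0: 34 B / 32 vals
echo "q8_0 KV cache at ${ctx} ctx: $(( total_bytes / 1024 / 1024 )) MiB"
```

Roughly 2.7 GiB of cache on top of ~26.8 GB of weights lands just under the 32 GB total, which is why the full 16384-token window fits on a dual-16 GB setup and why Windows' WDDM reservation forces the smaller `-c 12288`.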
## Credits
## Model Tree

- Base model: `Darkknight535/Void-Citrus-L3.3-70B`
- This quantization: `Darkknight535/Void-Citrus-L3.3-70B-IQ3_XXS-GGUF`