This is my GGUF-quantized version of aoxo/flux.1dev-abliterated. There is no guarantee that it works well, or even correctly. Use it at your own discretion.
The clip_l.safetensors and ae.safetensors files are taken directly from aoxo's text_encoder and vae. The t5xxl safetensors file was merged from shards with this script (https://github.com/soursilver/safetensors-merger), then quantized to FP8 using another script (https://huggingface.co/Clybius/Chroma-fp8-scaled/blob/main/convert_fp8_scaled_stochastic.py).
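For readers unfamiliar with sharded checkpoints: a Hugging Face shard set ships an index mapping each tensor name to the shard file that holds it, and "merging" means collecting every tensor into one file. The sketch below illustrates that idea with plain dicts standing in for shard files; it is not the actual API of the linked merger script, and all names here are illustrative.

```python
def merge_shards(index, shards):
    """Merge sharded tensors into one state dict.

    index:  {tensor_name: shard_file_name}  (as in model.safetensors.index.json)
    shards: {shard_file_name: {tensor_name: tensor}}  (stand-in for loaded files)
    """
    merged = {}
    for tensor_name, shard_name in index.items():
        # Each tensor lives in exactly one shard; copy it into the merged dict.
        merged[tensor_name] = shards[shard_name][tensor_name]
    return merged


# Toy stand-in for a two-shard T5-XXL checkpoint (names are made up).
shards = {
    "model-00001-of-00002.safetensors": {"encoder.block.0.weight": [1.0, 2.0]},
    "model-00002-of-00002.safetensors": {"encoder.block.1.weight": [3.0, 4.0]},
}
index = {
    "encoder.block.0.weight": "model-00001-of-00002.safetensors",
    "encoder.block.1.weight": "model-00002-of-00002.safetensors",
}

merged = merge_shards(index, shards)
print(sorted(merged))  # both tensors now live in a single state dict
```

A real merger would use the safetensors library's load/save routines instead of dicts, but the bookkeeping is the same.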
The GGUF was produced from the merged safetensors shards: the model was saved via ComfyUI's ModelSave node, converted to GGUF with city96's tool (https://github.com/city96/ComfyUI-GGUF/tree/main/tools), and finally quantized with a patched llama-quantize following this tutorial (https://medium.com/@yushantripleseven/convert-flux-models-to-gguf-6a80f6c7377a). I have also uploaded a statically linked binary of that patched llama-quantize.
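The conversion-and-quantization step roughly corresponds to the commands below. This is a sketch only: the script name and flags are assumptions based on city96's tools and llama.cpp conventions and may differ between versions, and the file names are illustrative.

```shell
# 1. Convert the merged safetensors checkpoint to an unquantized GGUF
#    with the conversion script from ComfyUI-GGUF/tools (flag assumed).
python convert.py --src flux.1dev-abliterated.safetensors

# 2. Quantize with the patched llama-quantize built per the linked
#    tutorial (standard llama.cpp argument order: input, output, type).
./llama-quantize flux.1dev-abliterated-F16.gguf \
                 flux.1dev-abliterated-Q8_0.gguf Q8_0
```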