GGUF exports

Source Repo:

  • ponytang3/Carnice-9b-heretic

Files:

  • model-BF16.gguf
  • model-Q8_0.gguf
  • model-Q6_K.gguf
  • model-Q4_K_M.gguf
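
To gauge which of these files fits a given machine, a rough file-size estimate can be computed from the parameter count and each format's average bits per weight. The bits-per-weight figures below are approximate llama.cpp averages (an assumption, not values from this repo), and real GGUF files add metadata and tensor-layout overhead, so treat the results as ballpark only:

```python
# Rough GGUF file-size estimate: params * bits-per-weight / 8 bytes.
# Bits-per-weight values are approximate averages for llama.cpp quant
# types (assumption); actual files will differ by some overhead.

PARAMS = 9e9  # 9B parameters, as listed for this model

BITS_PER_WEIGHT = {
    "BF16": 16.0,
    "Q8_0": 8.5,
    "Q6_K": 6.56,
    "Q4_K_M": 4.85,
}

def estimate_gib(params: float, bpw: float) -> float:
    """Approximate file size in GiB for a given bits-per-weight."""
    return params * bpw / 8 / 2**30

for name, bpw in BITS_PER_WEIGHT.items():
    print(f"model-{name}.gguf: ~{estimate_gib(PARAMS, bpw):.1f} GiB")
```

The same arithmetic also bounds RAM/VRAM needs for inference, since the weights are the dominant cost at this model size.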

Metadata:

  • Format: GGUF
  • Model size: 9B params
  • Architecture: qwen35
  • Downloads last month: 508
Quantization levels:

  • 4-bit (Q4_K_M)
  • 6-bit (Q6_K)
  • 8-bit (Q8_0)
  • 16-bit (BF16)
