John1604's LLM for Dealing with Bullies

It works with both Ollama and LM Studio.
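For Ollama, a GGUF file is loaded through a Modelfile. A minimal sketch, assuming the Q4_K_M filename below (substitute whichever .gguf you actually downloaded):

```
# Modelfile (the .gguf filename is an assumption; point it at your downloaded file)
FROM ./John1604-Deal-Bully-english-Q4_K_M.gguf
```

Then register and run it with `ollama create deal-bully -f Modelfile` followed by `ollama run deal-bully`. LM Studio can open the same .gguf file directly from its model browser.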

Quantized models

| Type | Bits | Quality | Description |
|------|------|---------|-------------|
| Q2_K | 2-bit | 🟥 Low | Minimal footprint; only for tests |
| Q3_K_S | 3-bit | 🟧 Low | "Small" variant (less accurate) |
| Q3_K_M | 3-bit | 🟧 Low–Med | "Medium" variant |
| Q4_K_S | 4-bit | 🟨 Med | Small, faster, slightly less quality |
| Q4_K_M | 4-bit | 🟩 Med–High | "Medium" variant; best 4-bit balance |
| Q5_K_S | 5-bit | 🟩 High | Slightly smaller than Q5_K_M |
| Q5_K_M | 5-bit | 🟩🟩 High | Excellent general-purpose quant |
| Q6_K | 6-bit | 🟩🟩🟩 Very High | Almost FP16 quality, larger size |
| Q8_0 | 8-bit | 🟩🟩🟩🟩 Near-lossless | Baseline |
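To pick a quant for your hardware, a rough on-disk size estimate is parameter count times bits per weight, divided by 8 bits per byte. This is a sketch, not an exact figure: K-quants mix bit widths internally, so real files run somewhat larger than the estimate.

```python
def estimate_size_gb(params: float, bits: float) -> float:
    """Rough GGUF file size: params * bits-per-weight / 8 bits-per-byte.

    K-quants mix bit widths, so actual files are slightly larger.
    """
    return params * bits / 8 / 1e9

# Estimates for this 8B-parameter model at a few quant levels.
for name, bits in [("Q4_K_M", 4), ("Q5_K_M", 5), ("Q8_0", 8)]:
    print(f"{name}: ~{estimate_size_gb(8e9, bits):.1f} GB")
```

So the Q4_K_M variant should fit comfortably in roughly 5-6 GB of RAM/VRAM with context overhead, while Q8_0 needs closer to 9-10 GB.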

International Inventor's License

Non-commercial use is free of charge.

For commercial use, if the company or individual makes no profit, no fees are required.

For commercial use, if the company or individual has a net profit, they should pay 1% of the net profit or 0.5% of the sales revenue, whichever is less.

For commercial use, we also provide product-related services.
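The fee rule above can be expressed as a short calculation; this is an illustrative sketch of the stated terms, not legal advice:

```python
def commercial_fee(net_profit: float, sales_revenue: float) -> float:
    """License fee: 1% of net profit or 0.5% of sales revenue, whichever is less.

    No net profit means no fee, per the license terms.
    """
    if net_profit <= 0:
        return 0.0
    return min(0.01 * net_profit, 0.005 * sales_revenue)

# Example: $100k profit on $1M revenue -> min($1,000, $5,000) = $1,000
print(commercial_fee(100_000, 1_000_000))
```

Note that when margins are high (profit above half of revenue), the 0.5%-of-revenue cap applies instead.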

Format: GGUF
Model size: 8B params
Architecture: qwen3



Model tree for ling1000T/John1604-Deal-Bully-english-gguf

Base model: Qwen/Qwen3-8B (fine-tuned, then quantized to GGUF)