John1604 Deal Bully GGUF

This is an LLM for dealing with bullies. It runs in both Ollama and LM Studio.

Use the model in Ollama

First, download and install Ollama:

https://ollama.com/download

Command

In the Windows command line, or in a terminal on Ubuntu, type:

ollama run hf.co/John1604/John1604-Deal-Bully-english-gguf:q6_k

(q6_k is the quantization type; other quants such as q5_k_s, q4_k_m, etc. can also be used.)

C:\Users\developer>ollama run hf.co/John1604/John1604-Deal-Bully-english-gguf:q6_k
pulling manifest
...
verifying sha256 digest
writing manifest
success

>>> Send a message (/? for help) 

After you run the command ollama run hf.co/John1604/John1604-Deal-Bully-english-gguf:q6_k, the model also appears in the Ollama UI. You can select hf.co/John1604/John1604-Deal-Bully-english-gguf:q6_k from the model list and run it the same way as other Ollama-supported models.
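Beyond the interactive prompt, the pulled model can also be queried programmatically through Ollama's local REST API (by default at http://localhost:11434). A minimal standard-library sketch in Python, using the model tag from the command above; the prompt text is just an illustration:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "hf.co/John1604/John1604-Deal-Bully-english-gguf:q6_k"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def ask(prompt: str) -> str:
    """Send the prompt to the running Ollama server and return the reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama instance with the model pulled.
    print(ask("A classmate keeps mocking me in front of others. What can I say?"))
```

This uses only the Python standard library, so it needs no extra packages beyond a running Ollama server.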

Use the model in LM Studio

Download and install LM Studio:

https://lmstudio.ai/

Discover models

In LM Studio, click the "Discover" icon. The "Mission Control" popup window will be displayed.

In the "Mission Control" search bar, type "John1604/John1604-Deal-Bully-english-gguf" and check "GGUF", the model should be found.

Download the model.

Load the model.

Ask questions.
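Besides its chat UI, LM Studio can serve the loaded model over an OpenAI-compatible local API (enabled from its server/developer view, by default at http://localhost:1234). A minimal sketch, assuming that server is running; the model identifier below is a placeholder for whatever name LM Studio shows for the loaded model:

```python
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio's default local server
MODEL = "john1604-deal-bully-english-gguf"  # placeholder; copy the exact identifier from LM Studio

def build_payload(user_message: str) -> dict:
    """OpenAI-style chat-completion payload for the local server."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }

def chat(user_message: str) -> str:
    """Post one user message to the running LM Studio server and return the reply."""
    data = json.dumps(build_payload(user_message)).encode()
    req = urllib.request.Request(
        LMSTUDIO_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires LM Studio's local server to be running with the model loaded.
    print(chat("How should I respond to a coworker who belittles my work?"))
```

Because the endpoint is OpenAI-compatible, any OpenAI-style client pointed at localhost:1234 should work the same way.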

Quantized models

| Type   | Bits  | Quality              | Description                          |
|--------|-------|----------------------|--------------------------------------|
| Q2_K   | 2-bit | 🟥 Low               | Minimal footprint; only for tests    |
| Q3_K_S | 3-bit | 🟧 Low               | "Small" variant (less accurate)      |
| Q3_K_M | 3-bit | 🟧 Low–Med           | "Medium" variant                     |
| Q4_K_S | 4-bit | 🟨 Med               | Small, faster, slightly less quality |
| Q4_K_M | 4-bit | 🟩 Med–High          | "Medium"; best 4-bit balance         |
| Q5_K_S | 5-bit | 🟩 High              | Slightly smaller than Q5_K_M         |
| Q5_K_M | 5-bit | 🟩🟩 High            | Excellent general-purpose quant      |
| Q6_K   | 6-bit | 🟩🟩🟩 Very High     | Almost FP16 quality, larger size     |
| Q8_0   | 8-bit | 🟩🟩🟩🟩 Near-lossless | Baseline                           |
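A rough way to anticipate download sizes from the table: a quantized file takes about parameters × bits-per-weight / 8 bytes. The sketch below uses the nominal bit widths as an assumption; real K-quants mix bit widths per tensor and add metadata, so actual files run somewhat larger:

```python
# Ballpark GGUF size: parameters * bits-per-weight / 8 bytes.
# Nominal bit widths only; real K-quants mix widths per tensor, so
# treat these as lower-bound estimates, not exact file sizes.
PARAMS = 8e9  # this model has 8B parameters

def approx_size_gb(bits: float) -> float:
    """Nominal quantized size in decimal gigabytes."""
    return PARAMS * bits / 8 / 1e9

for name, bits in [("Q2_K", 2), ("Q4_K_M", 4), ("Q6_K", 6), ("Q8_0", 8)]:
    print(f"{name}: ~{approx_size_gb(bits):.0f} GB")
```

For an 8B model this suggests roughly 2 GB at 2-bit up to roughly 8 GB at 8-bit, which is useful when picking a quant for your available RAM or VRAM.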

International Inventor's License

Non-commercial use is free of charge; no fees apply.

For commercial use, if the company or individual does not make any profit, no fees are required.

For commercial use, if the company or individual has a net profit, they should pay 1% of the net profit or 0.5% of the sales revenue, whichever is less.

For commercial use, we provide product-related services.
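The commercial-use rule above (1% of net profit or 0.5% of sales revenue, whichever is less, and nothing without a net profit) amounts to a simple calculation. The function name here is illustrative, not part of the license text:

```python
def license_fee(net_profit: float, revenue: float) -> float:
    """Fee under the terms above: zero without a net profit, otherwise
    the lesser of 1% of net profit and 0.5% of sales revenue."""
    if net_profit <= 0:
        return 0.0
    return min(0.01 * net_profit, 0.005 * revenue)

# Example: $50,000 net profit on $2,000,000 revenue:
# 1% of profit = $500, 0.5% of revenue = $10,000, so the fee is $500.
```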

Model details

Format: GGUF
Model size: 8B params
Architecture: qwen3



Model tree for John1604/John1604-Deal-Bully-english-gguf

Base model: Qwen/Qwen3-8B (fine-tuned, then quantized into this model)