Casual-Autopsy/Nox-Personal-Quant-Storage_GGUF
A Hugging Face model repository. Likes: 0
Tags: GGUF, imatrix, conversational
This repository has no model card.
Downloads last month: 8
Format: GGUF
Model size: 24B params
Architecture: llama
Chat template: included
Hardware compatibility
Log In
to add your hardware
Quantizations:
4-bit: Q4_K_S (13.5 GB)
5-bit: Q5_K_M (16.8 GB)
7 additional variants are available in the repository.
Inference Providers: this model isn't deployed by any Inference Provider.