ConfidentialMind/Arcee-Blitz-GPTQ-G32-W4A16-MSE
Tags: text-generation, safetensors, mistral, gptq, quantization, 4-bit precision, confidentialmind, mistral-small-24b, conversational, apache-2.0
🔥 Quantized Model: Arcee-Blitz_gptq_g32_4bit 🔥
The MSE-based quantization variant did not perform well in testing; use the non-MSE quant instead.
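For context on the G32-W4A16 naming: weights are quantized to 4-bit integers in groups of 32, each group carrying its own scale and zero point, while activations stay in 16-bit. The sketch below is a minimal NumPy illustration of group-wise min/max 4-bit quantization only; it is not the actual GPTQ algorithm (which adds Hessian-based error compensation, and in the MSE variant a clipping-threshold search), and all names in it are illustrative.

```python
import numpy as np

def quantize_groupwise(w, group_size=32, bits=4):
    # Asymmetric round-to-nearest quantization: each group of `group_size`
    # weights gets its own scale and zero point derived from its min/max.
    qmax = 2**bits - 1
    w = w.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / qmax
    scale = np.where(scale == 0, 1.0, scale)  # guard all-constant groups
    zero = np.round(-wmin / scale)
    q = np.clip(np.round(w / scale) + zero, 0, qmax).astype(np.int32)
    return q, scale, zero

def dequantize(q, scale, zero):
    # Reconstruct approximate weights from the packed integers.
    return (q - zero) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)  # toy weight row
q, s, z = quantize_groupwise(w)
w_hat = dequantize(q, s, z).reshape(-1)
err = np.abs(w - w_hat).max()  # bounded by one group scale
```

The per-element reconstruction error stays within one group's scale; smaller groups (here 32 rather than the more common 128) mean more scales to store but tighter per-group ranges and lower error.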
Downloads last month: 4
Model size: 24B params (Safetensors)
Tensor types: I32, BF16