yeirr/llama3-8b-dolphin-v2.9.1-awq-g128-4bit
Tags: Text Generation · Transformers · Safetensors · English · llama · conversational · text-generation-inference · 4-bit precision · awq
License: MIT
Introduction

An AWQ-quantized version of the base model, using 4-bit weights with a quantization group size of 128.
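To make the "g128-4bit" naming concrete, here is a minimal sketch of group-wise 4-bit round-to-nearest quantization. This is an illustration of the storage scheme, not the actual AWQ/AutoAWQ implementation (AWQ additionally rescales salient channels using activation statistics); the function names are hypothetical. It also hints at why the tensor types listed below are a mix of integer and float: packed 4-bit weights are integer tensors, while per-group scales stay in floating point.

```python
# Hypothetical sketch of group-wise 4-bit quantization (NOT the AutoAWQ code):
# one scale and zero-point per group of 128 consecutive weights.
import numpy as np

GROUP_SIZE = 128  # the "g128" in the model name
N_BITS = 4        # the "4bit" in the model name
QMAX = 2**N_BITS - 1

def quantize_groupwise(w: np.ndarray):
    """Asymmetric round-to-nearest quantization with per-group scale/zero."""
    g = w.reshape(-1, GROUP_SIZE)
    w_min = g.min(axis=1, keepdims=True)
    w_max = g.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / QMAX
    scale = np.where(scale == 0, 1.0, scale)   # guard constant groups
    zero = np.round(-w_min / scale)
    q = np.clip(np.round(g / scale) + zero, 0, QMAX)
    # Integer codes (stored packed on disk) plus float scales/zeros.
    return q.astype(np.int32), scale, zero

def dequantize(q, scale, zero):
    """Recover an approximation of the original float weights."""
    return (q - zero) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)  # one row of a weight matrix
q, scale, zero = quantize_groupwise(w)
w_hat = dequantize(q, scale, zero).reshape(-1)

print("code range:", q.min(), "to", q.max())      # codes fit in 4 bits
print("max abs error:", float(np.abs(w - w_hat).max()))
```

Each weight is reduced to a 4-bit code, so the per-value error is bounded by roughly half a group's scale; smaller group sizes (e.g. 128 vs. per-tensor) keep scales tight and errors low at the cost of storing more scales.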
Downloads last month: 3
Safetensors model size: 8B params
Tensor types: I32 · F16
Model tree for yeirr/llama3-8b-dolphin-v2.9.1-awq-g128-4bit
Base model: meta-llama/Meta-Llama-3-8B
Quantized versions of the base model: 273 (including this model)