Open4bits/gemma-4-E4B-it-GGUF

Tags: Any-to-Any · GGUF · open4bits · gemma-4
License: apache-2.0
README.md exists but content is empty.
Downloads last month: 204
Format: GGUF
Model size: 8B params
Architecture: gemma4
Available quantizations:

Quantization   Bits    File size
Q2_K           2-bit   4.4 GB
Q4_K_M         4-bit   5.34 GB
Q6_K           6-bit   6.22 GB
Q8_0           8-bit   8.03 GB
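The file sizes above can be turned into rough bits-per-weight figures. This is a minimal sketch, assuming the "8B params" count shown on the page and decimal gigabytes; real GGUF files mix tensor precisions (embeddings and some layers are often kept at higher precision), so the implied bits-per-weight for the smaller quants can sit well above the nominal bit width.

```python
# Hedged sketch: approximate bits per weight implied by each GGUF file size.
# Assumptions: 8e9 parameters (the "8B params" figure above), 1 GB = 1e9 bytes.
PARAMS = 8e9

sizes_gb = {  # file sizes listed in the quantization table
    "Q2_K": 4.4,
    "Q4_K_M": 5.34,
    "Q6_K": 6.22,
    "Q8_0": 8.03,
}

# bits/weight = bytes * 8 bits / parameter count
bits_per_weight = {name: gb * 1e9 * 8 / PARAMS for name, gb in sizes_gb.items()}

for name, bpw in bits_per_weight.items():
    print(f"{name}: ~{bpw:.2f} bits/weight")
```

Note that Q8_0 comes out at roughly 8 bits/weight as expected, while Q2_K implies well over 2 bits/weight, consistent with parts of the model being stored at higher precision.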
Inference Providers: this model isn't deployed by any Inference Provider.
Model tree for Open4bits/gemma-4-E4B-it-GGUF:
Base model: google/gemma-4-E4B-it → Quantized (98 models, including this one)
Collection including Open4bits/gemma-4-E4B-it-GGUF: GGUF (27 items · updated 3 days ago)