Open4bits/gemma-4-E2B-GGUF

Tags: Any-to-Any · GGUF · open4bits · gemma-4
License: apache-2.0
README.md exists but content is empty.
Downloads last month: -
GGUF
Model size: 5B params
Architecture: gemma4
Quantization  Type     Size
2-bit         Q2_K     2.99 GB
4-bit         Q4_K_M   3.43 GB
6-bit         Q6_K     3.85 GB
8-bit         Q8_0     4.97 GB
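A minimal sketch of fetching one of the quantized variants listed above with `huggingface_hub`. The sizes come from the table; the `.gguf` filenames are an assumption based on the common `<model>-<QUANT>.gguf` naming convention and are not verified against this repository:

```python
REPO_ID = "Open4bits/gemma-4-E2B-GGUF"

# Quant tags from the table above (sizes for reference only).
QUANTS = {
    "Q2_K": "2.99 GB",
    "Q4_K_M": "3.43 GB",
    "Q6_K": "3.85 GB",
    "Q8_0": "4.97 GB",
}

def gguf_filename(quant: str) -> str:
    """Build the conventional GGUF filename for a quant tag.

    NOTE: assumed naming convention; check the repo's Files tab
    for the actual filenames before downloading.
    """
    if quant not in QUANTS:
        raise ValueError(f"unknown quant tag: {quant}")
    return f"gemma-4-E2B-{quant}.gguf"

def download(quant: str) -> str:
    """Download one quant variant and return the local path (network call)."""
    # pip install huggingface_hub
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=REPO_ID, filename=gguf_filename(quant))

# Example (uncomment to actually download ~3.4 GB):
# path = download("Q4_K_M")
```

The resulting file can then be loaded by any GGUF-compatible runtime such as llama.cpp.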
Inference Providers: this model isn't deployed by any Inference Provider.
Model tree for Open4bits/gemma-4-E2B-GGUF
Base model: google/gemma-4-E2B
Quantized (11), including this model
Collection including Open4bits/gemma-4-E2B-GGUF:
GGUF Collection · 26 items · Updated about 13 hours ago