These are simple quantizations of qikp/hummingbird-2.1-110m using llama.cpp.

Format: GGUF
Model size: 0.1B params
Architecture: gpt2

Available quantizations: 2-bit, 4-bit, 6-bit, 8-bit
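A sketch of how quantizations like these are typically produced with llama.cpp. The specific quant types (Q2_K, Q4_K_M, Q6_K, Q8_0) are assumptions chosen to match the 2-, 4-, 6-, and 8-bit variants listed above, and the file paths are illustrative:

```shell
# Convert the original Hugging Face model to a GGUF file (fp16 baseline).
# Assumes the model has been downloaded to ./hummingbird-2.1-110m
python convert_hf_to_gguf.py ./hummingbird-2.1-110m --outfile hummingbird-2.1-110m-f16.gguf

# Quantize the fp16 GGUF to each target bit width
for q in Q2_K Q4_K_M Q6_K Q8_0; do
  ./llama-quantize hummingbird-2.1-110m-f16.gguf "hummingbird-2.1-110m-${q}.gguf" "${q}"
done
```

A quantized file can then be run locally with llama.cpp, for example: `./llama-cli -m hummingbird-2.1-110m-Q4_K_M.gguf -p "Hello"`.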


Model tree for qikp/hummingbird-2.1-110m-GGUF: quantized from qikp/hummingbird-2.1-110m.