calcuis/pig-encoder
GGUF · English · pig · gguf-node · License: MIT · DOI: 10.57967/hf/4581
🐷 pig architecture gguf encoder
- text encoder base model from Google
- llama encoder base model from Meta
- pig architecture from connector
- at least 50% faster than the safetensors version
- saves up to 50% memory as well; good for older machines
- compatible with all models, whether safetensors or GGUF (see the inspection sketch below)
- tested on pig-1k/1k-aura/1k-turbo/cosmos, etc.; works fine
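To check what a given encoder file actually contains (architecture, quantization types, tensor shapes), the `gguf` Python package can read the header without loading the weights. A minimal sketch, assuming `pip install gguf`; the file name below is a placeholder, not a file shipped in this repo:

```python
# Minimal sketch: inspect a GGUF encoder file's metadata and tensors.
# "pig_encoder.gguf" is a placeholder path -- use a real downloaded file.
from gguf import GGUFReader

reader = GGUFReader("pig_encoder.gguf")

# Print the metadata field names (e.g. general.architecture).
for name in reader.fields:
    print(name)

# List tensors with their shapes and quantization types (e.g. Q4_K, F16).
for tensor in reader.tensors:
    print(tensor.name, tensor.shape, tensor.tensor_type.name)
```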
upgrade your node for pig 🐷 encoder support
you can drag a picture below into your browser for an example workflow
Example prompts (one per picture):
- a pinky pig moving quickly in a beautiful winter scenery nature trees sunset tracking camera
- close-up portrait of anime pig
- close-up portrait of anime pig
- close-up portrait of pig
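The drag-and-drop trick works because images exported by ComfyUI conventionally embed the workflow graph as JSON in the PNG metadata. A minimal sketch for extracting it manually, assuming Pillow is installed; `example.png` is a placeholder name:

```python
# Minimal sketch: pull an embedded ComfyUI workflow out of a PNG.
# ComfyUI conventionally stores the graph as JSON in a "workflow"
# text chunk; "example.png" is a placeholder file name.
import json
from PIL import Image  # pip install pillow

img = Image.open("example.png")
workflow_json = img.info.get("workflow")  # None if nothing embedded

if workflow_json:
    workflow = json.loads(workflow_json)
    # ComfyUI workflow JSON typically carries a "nodes" list.
    print(f"{len(workflow.get('nodes', []))} nodes in workflow")
else:
    print("no embedded workflow found")
```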
Downloads last month: 5,086
Format: GGUF · Model size: 0.2B params · Architecture: pig
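As a rough cross-check of the listed file sizes against the 0.2B-param count: a GGUF file weighs about params × bits-per-weight ÷ 8 bytes, where the effective bits per weight include each block's scale bytes. A back-of-envelope sketch using the standard llama.cpp block rates:

```python
# Back-of-envelope file-size estimate for a 0.2B-parameter encoder.
# Effective bits per weight include block scales
# (e.g. Q4_0 stores 18 bytes per 32 weights = 4.5 bits/weight).
params = 0.2e9

for name, bits_per_weight in [("Q4_0", 4.5), ("Q5_0", 5.5), ("Q8_0", 8.5), ("F16", 16.0)]:
    size_mb = params * bits_per_weight / 8 / 1e6
    print(f"{name}: ~{size_mb:.0f} MB")  # Q4_0 -> ~113 MB, F16 -> ~400 MB
```

This lines up with the smallest variants listed below (127 MB Q4_0, 439 MB F16) once header and metadata overhead is added.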
Quantized variants by bit width (per-file sizes):

2-bit
- Q2_K: 3.18 GB, 1.47 GB, 1.47 GB, 7.95 GB, 73.3 MB, 402 MB, 402 MB, 1.61 GB, 1.61 GB, 1.6 GB, 1.6 GB, 2.53 GB

3-bit
- Q3_K_S: 3.66 GB, 1.84 GB, 1.84 GB, 2.86 GB
- Q3_K_M: 4.02 GB, 1.86 GB, 1.86 GB, 96 MB, 526 MB, 526 MB, 2.1 GB, 2.1 GB, 2.09 GB, 2.09 GB, 3.06 GB
- Q3_K_L: 4.32 GB, 3.22 GB

4-bit
- IQ4_XS: 9.27 GB
- IQ4_NL: 127 MB, 9.48 GB
- Q4_0: 127 MB, 4.66 GB, 9.48 GB, 5.64 GB, 688 MB, 689 MB, 2.75 GB, 2.74 GB, 3.47 GB, 1.12 GB
- Q4_1: 5.13 GB, 765 MB, 765 MB, 3.06 GB, 3.06 GB, 3.04 GB, 3.04 GB, 221 MB, 3.76 GB
- Q4_K_S: 4.69 GB, 2.32 GB, 2.32 GB, 3.5 GB
- Q4_K_M: 4.92 GB, 2.34 GB, 2.34 GB, 126 MB, 688 MB, 689 MB, 2.75 GB, 2.75 GB, 2.74 GB, 2.74 GB, 3.66 GB

5-bit
- Q5_0: 154 MB, 5.6 GB, 10.3 GB, 6.59 GB, 841 MB, 842 MB, 3.37 GB, 3.34 GB, 3.34 GB, 4.05 GB, 1.3 GB
- Q5_1: 6.07 GB, 918 MB, 918 MB, 3.67 GB, 3.67 GB, 3.65 GB, 3.65 GB, 4.34 GB
- Q5_K_S: 2.77 GB, 2.77 GB, 4.05 GB
- Q5_K_M: 5.73 GB, 2.79 GB, 2.79 GB, 153 MB, 841 MB, 842 MB, 3.36 GB, 3.37 GB, 3.34 GB, 3.34 GB, 4.15 GB

6-bit
- Q6_K: 6.6 GB, 3.25 GB, 3.25 GB, 11.2 GB, 183 MB, 1 GB, 1 GB, 4.01 GB, 4.02 GB, 3.99 GB, 3.99 GB, 4.67 GB

8-bit
- Q8_0: 235 MB, 8.54 GB, 4.62 GB, 12.7 GB, 9.45 GB, 1.3 GB, 1.3 GB, 5.2 GB, 5.2 GB, 5.17 GB, 5.17 GB, 306 MB, 6.04 GB, 1.86 GB

16-bit
- F16: 439 MB, 1.39 GB, 1.39 GB, 1.39 GB, 1.39 GB, 246 MB, 246 MB, 248 MB, 248 MB, 248 MB, 446 MB, 446 MB, 632 MB, 632 MB, 3.23 GB
- BF16: 446 MB

32-bit
- F32: 2.78 GB, 492 MB, 892 MB, 4.89 GB, 4.89 GB, 19.6 GB, 19.5 GB, 1.13 GB, 22.7 GB
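Since file names aren't shown in the table above, one way to pick a variant is to list the repo's .gguf files and download the chosen one with huggingface_hub. A minimal sketch; the filename passed to the download call is a placeholder, use a real name from the printed list:

```python
# Minimal sketch: list this repo's GGUF files and download one.
# Assumes `pip install huggingface_hub`; the filename below is a
# placeholder -- substitute a real one from the printed list.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "calcuis/pig-encoder"

# Show every .gguf file available in the repo.
for f in list_repo_files(repo_id):
    if f.endswith(".gguf"):
        print(f)

# Download a chosen variant to the local HF cache and get its path.
path = hf_hub_download(repo_id=repo_id, filename="example-q4_0.gguf")  # placeholder name
print(path)
```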