# DeepBeepMeep / LTX-2

Tags: Diffusion Single File · Safetensors · GGUF
## LTX-2 / gemma-3-12b-it-qat-q4_0-unquantized (37.6 GB)
1 contributor · History: 3 commits

Latest commit: Upload config_light.json by DeepBeepMeep (64daf83, verified, 3 months ago)
| File | Size | Last commit | Updated |
|---|---|---|---|
| README.md | 22.8 kB | Upload 14 files | 4 months ago |
| added_tokens.json | 35 Bytes | Upload 14 files | 4 months ago |
| chat_template.json | 1.62 kB | Upload 14 files | 4 months ago |
| config.json | 1.61 kB | Upload 14 files | 4 months ago |
| config_light.json | 907 Bytes | Upload config_light.json | 3 months ago |
| gemma-3-12b-it-qat-q4_0-unquantized.safetensors | 24.4 GB | Upload 14 files | 4 months ago |
| gemma-3-12b-it-qat-q4_0-unquantized_quanto_bf16_int8.safetensors | 13.2 GB | Upload 14 files | 4 months ago |
| generation_config.json | 173 Bytes | Upload 14 files | 4 months ago |
| model.safetensors.index.json | 109 kB | Upload 14 files | 4 months ago |
| preprocessor_config.json | 570 Bytes | Upload 14 files | 4 months ago |
| processor_config.json | 70 Bytes | Upload 14 files | 4 months ago |
| readme.md | 0 Bytes | Create gemma-3-12b-it-qat-q4_0-unquantized/readme.md | 4 months ago |
| special_tokens_map.json | 662 Bytes | Upload 14 files | 4 months ago |
| tokenizer.json | 33.4 MB | Upload 14 files | 4 months ago |
| tokenizer.model | 4.69 MB | Upload 14 files | 4 months ago |
| tokenizer_config.json | 1.16 MB | Upload 14 files | 4 months ago |