Sabomako/NVIDIA-Nemotron-3-Super-120B-A12B-BF16-heretic-v2-GGUF
Tags: Text Generation · GGUF · conversational
README.md exists but content is empty.
Downloads last month: 793
Format: GGUF
Model size: 121B params
Architecture: nemotron_h_moe
Available quantizations:
4-bit (MXFP4_MOE): 68.6 GB
16-bit (BF16): 242 GB
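As a rough sanity check, the listed file sizes can be converted to approximate bits per weight using the 121B parameter count from the page. This is a minimal sketch; GGUF files also carry metadata and some tensors may be stored at higher precision, so the figures are approximate.

```python
# Approximate bits per weight for each listed GGUF file.
# Inputs (file size in GB, parameter count in billions) are taken
# from the model page above.

def bits_per_weight(size_gb: float, n_params_b: float) -> float:
    """size_gb * 8 gives gigabits; dividing by billions of params
    yields average bits stored per parameter."""
    return size_gb * 8 / n_params_b

print(f"MXFP4_MOE: {bits_per_weight(68.6, 121):.2f} bits/weight")  # ~4.5, consistent with a 4-bit MoE quant
print(f"BF16:      {bits_per_weight(242, 121):.2f} bits/weight")   # 16.0, as expected for BF16
```

The MXFP4 figure lands slightly above 4 bits because GGUF stores per-block scales alongside the quantized values.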
Inference Providers: this model isn't deployed by any Inference Provider.
Model tree for Sabomako/NVIDIA-Nemotron-3-Super-120B-A12B-BF16-heretic-v2-GGUF
Base model: trohrbaugh/NVIDIA-Nemotron-3-Super-120B-A12B-BF16-heretic-v2
Quantized (1): this model