caiovicentino1's Collections
  • HLWQ Large MoE (100B+)
  • HLWQ Models
  • HLWQ Video & Diffusion Models
  • HLWQ Gemma Models
  • Nemotron 30B — Consumer GPU Inference
  • HLWQ Unified (Weights Q5 + KV Cache Q3)
  • HLWQ MLX (Apple Silicon)
  • Large Models (27B-35B) HLWQ
  • Qwen3.5-4B EOQ Quantized
  • Qwen2.5 EOQ Quantized
  • Qwen3.5-9B HLWQ
  • EOQ Compressed Models
  • Qwen3.5-27B HLWQ

Nemotron 30B — Consumer GPU Inference

updated 9 days ago

30B MoE · 7.6 GB VRAM · 15 tok/s on RTX 4090 · expert offloading + HLWQ Q5


  • caiovicentino1/Nemotron-Cascade-2-30B-A3B-HLWQ-Q5

    Text Generation • 20B params • Updated 9 days ago • 2.64k downloads • 7 likes

    Note PolarQuant Q5 + Expert Offloading (7.6 GB, 15 tok/s)


  • nvidia/Nemotron-Cascade-2-30B-A3B

    Text Generation • 32B params • Updated 13 days ago • 302k downloads • 481 likes

    Note Base model (30B MoE, hybrid Mamba+MoE+Attention)
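The "expert offloading" the quantized repo advertises is what lets a 30B MoE fit in 7.6 GB of VRAM: only the few experts routed to for the current token need to be resident on the GPU, while the rest stay in host RAM and are copied over on demand. Below is a minimal pure-Python sketch of that idea as an LRU cache; the class name, expert count, and slot count are illustrative assumptions, not the repo's actual implementation.

```python
from collections import OrderedDict

class ExpertOffloader:
    """Illustrative sketch: most MoE experts live in host RAM; only a
    small LRU cache of recently routed experts is resident 'on GPU'."""

    def __init__(self, num_experts: int, gpu_slots: int):
        # Host-side copies of every expert (stand-ins for real weight tensors).
        self.cpu_experts = {i: f"weights_{i}" for i in range(num_experts)}
        self.gpu_cache = OrderedDict()  # expert_id -> weights resident on GPU
        self.gpu_slots = gpu_slots
        self.transfers = 0  # host->device copies: the latency cost of offloading

    def fetch(self, expert_id: int):
        if expert_id in self.gpu_cache:
            self.gpu_cache.move_to_end(expert_id)  # cache hit: no copy needed
        else:
            if len(self.gpu_cache) >= self.gpu_slots:
                self.gpu_cache.popitem(last=False)  # evict least-recently-used
            self.gpu_cache[expert_id] = self.cpu_experts[expert_id]
            self.transfers += 1
        return self.gpu_cache[expert_id]

# Route a few tokens; each activates a small subset of experts.
off = ExpertOffloader(num_experts=128, gpu_slots=8)
for routed in [(3, 17), (3, 42), (17, 99), (3, 17)]:
    for e in routed:
        off.fetch(e)
print(off.transfers)  # -> 4: repeated experts hit the cache instead of re-copying
```

Because token-to-expert routing tends to reuse the same experts over short windows, the cache hit rate stays high enough that throughput (the quoted 15 tok/s on an RTX 4090) remains usable despite the PCIe transfer cost.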
