Support this work: donate.sybilsolutions.ai

REAP surfaces: GLM | MiniMax | Qwen | Gemma | Paper | Code | PR17 | Cerebras Collection

NVIDIA Nemotron 3 Super REAP 50% pruned draft

This repo is a draft REAP-derived checkpoint based on nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16.

Provenance

Research References

Pruning details

  • Experts per MoE layer in upstream base: 512
  • Experts retained per layer in this variant: 256
  • Experts pruned per layer in this variant: 256
  • Expected safetensor shard count in this draft repo: 3
  • Source merged observation workflow: nemotron_super_merged_long50_short15120_v2
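To make the 512 → 256 retention ratio concrete, here is a minimal, hypothetical sketch of the per-layer selection step, assuming a precomputed saliency score for each routed expert; the actual REAP criterion is derived from the observation lanes described under Method summary.

```python
# Hypothetical sketch: keep the 256 highest-saliency experts out of 512
# per MoE layer. The saliency array is a stand-in for the real REAP signal.
import numpy as np

NUM_EXPERTS = 512  # experts per MoE layer in the upstream base
KEEP = 256         # experts retained per layer in this variant

def retained_expert_ids(saliency: np.ndarray, keep: int = KEEP) -> np.ndarray:
    """Return the ids of the `keep` highest-saliency experts, sorted ascending."""
    assert saliency.shape == (NUM_EXPERTS,)
    return np.sort(np.argsort(saliency)[-keep:])

# Example with random stand-in scores for one MoE layer.
rng = np.random.default_rng(0)
keep_ids = retained_expert_ids(rng.random(NUM_EXPERTS))
print(len(keep_ids))  # 256
```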

Method summary

The pruning signal comes from layerwise REAP observations collected over a mixed calibration corpus, dominated by a personal AI-session history plus a bounded public augmentation slice; a minimal sketch of the observation step follows.
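The sketch below tallies per-layer expert activations over routed token decisions; the function and variable names are illustrative stand-ins, not the repo's actual API.

```python
# Illustrative sketch: accumulate per-layer expert-activation counts over
# a calibration corpus. `routing` stands in for the router's top-k choices.
from collections import Counter

NUM_MOE_LAYERS = 40  # MoE blocks in this architecture
TOP_K = 22           # routed experts per token

def observe(counts, routing):
    """routing: for each MoE layer, the expert ids chosen for one token."""
    for layer, expert_ids in enumerate(routing):
        counts[layer].update(expert_ids)

counts = [Counter() for _ in range(NUM_MOE_LAYERS)]
# Toy token: each layer routes to TOP_K (arbitrary) expert ids.
observe(counts, [[(layer + k) % 512 for k in range(TOP_K)]
                 for layer in range(NUM_MOE_LAYERS)])
print(counts[0].most_common(3))
```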

Validated observation lanes used in the merged signal:

  • nemotron_super_long50_16k_v3
    • longest personal trajectories first
    • 50 trajectories
    • capped at 16384 tokens each
  • nemotron_super_short_mix_15120_t1024_b8192_v4
    • 15000 short personal prompts plus 120 bounded public prompts
    • capped at 1024 tokens each
    • packed under a safe 8192 token batch budget
  • merged canonical state: nemotron_super_merged_long50_short15120_v2
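A minimal merge sketch, assuming each lane yields per-layer activation counters: the lanes are combined by summation into one canonical state. The actual merge behind nemotron_super_merged_long50_short15120_v2 may weight the lanes differently.

```python
# Hedged sketch: merge observation lanes by summing per-layer expert counts.
from collections import Counter

def merge_lanes(*lanes):
    """Each lane is a list of per-layer Counters of expert activations."""
    merged = [Counter() for _ in lanes[0]]
    for lane in lanes:
        for layer, counts in enumerate(lane):
            merged[layer].update(counts)
    return merged

# Stand-ins for the long and short lanes over 40 MoE layers.
long_lane  = [Counter({0: 10, 1: 3}) for _ in range(40)]
short_lane = [Counter({1: 7, 2: 5}) for _ in range(40)]
merged = merge_lanes(long_lane, short_lane)
print(merged[0])  # Counter({0: 10, 1: 10, 2: 5})
```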

Model facts from the merged observation lane:

  • runtime architecture class: NemotronHForCausalLM
  • total blocks: 88
  • MoE blocks: 40
  • Mamba blocks: 40
  • attention blocks: 8
  • routed experts per token: 22
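These facts imply a simple arithmetic check, sketched below: the block counts should sum to the total, and routing 22 experts out of the 256 retained per layer activates under 9% of each MoE layer's experts per token.

```python
# Consistency check of the merged-lane model facts listed above.
moe_blocks, mamba_blocks, attn_blocks = 40, 40, 8
assert moe_blocks + mamba_blocks + attn_blocks == 88  # total blocks

top_k, retained = 22, 256
print(f"experts active per token after pruning: {top_k / retained:.1%}")
# -> experts active per token after pruning: 8.6%
```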

Intended use

This draft checkpoint is published for research into expert activation structure, residency planning, CPU offloading, and prompt-conditioned expert selection. It is not a production claim and it is not an NVIDIA release.
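As one worked example of the residency-planning direction named above, here is a hypothetical sketch that pins the most frequently activated experts to GPU and offloads the rest to CPU; the input format, slot count, and device labels are all assumptions, not part of this release.

```python
# Hypothetical residency plan from per-layer activation frequencies:
# hottest experts stay resident on GPU, the rest are offloaded to CPU.
def residency_plan(freqs, gpu_slots):
    """freqs: {expert_id: activation count} for one MoE layer."""
    ranked = sorted(freqs, key=freqs.get, reverse=True)
    return {e: ("gpu" if i < gpu_slots else "cpu")
            for i, e in enumerate(ranked)}

plan = residency_plan({7: 900, 3: 650, 42: 20, 99: 5}, gpu_slots=2)
print(plan)  # {7: 'gpu', 3: 'gpu', 42: 'cpu', 99: 'cpu'}
```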

Draft caveats

  • This is a draft, REAP-derived checkpoint.
  • We have not yet completed full serving and quality benchmark campaigns for this Hugging Face release.
  • The repo preserves provenance back to the upstream NVIDIA release and should be evaluated in that context.

License and terms

Distribution of this derived checkpoint is intended to comply with the NVIDIA Open Model License included in LICENSE.txt. The required attribution notice is included in NOTICE.

Sponsors

Thank you to the kind sponsors; this work wouldn't be possible without them:

  • Nvidia
  • TNG Technology
  • Lambda
  • Prime Intellect
  • HotAisle