# nemotron49b-wood-measurement-coop

LoRA adapter trained on top of `nvidia/Llama-3_3-Nemotron-Super-49B-v1` after the
`timhua/wood_v2_sftr4_filt` SDF adapter has been merged into the base. Trained as part
of the eval-awareness measurement-cooperation experiments (Feb 2026).
## Reconstruction recipe

The `adapter_config.json` in this repo lists `base_model_name_or_path` as a local path
(`.../merged_wood_base`) because that is what the trainer saw. To use this adapter, you
must first reproduce that intermediate base:
```bash
# Step 1: merge wood_v2_sftr4_filt into Nemotron-49B → merged_wood_base
python merge_peft_adapter.py \
    --adapter_model_name timhua/wood_v2_sftr4_filt \
    --base_model_name nvidia/Llama-3_3-Nemotron-Super-49B-v1 \
    --output_name ./merged_wood_base
```

```python
# Step 2: load this adapter on top of merged_wood_base
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("./merged_wood_base", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "jasminexli/nemotron49b-wood-measurement-coop")
```
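For intuition about what Step 1 actually computes, here is a toy NumPy sketch of a LoRA merge: the adapter path `(alpha / r) * A @ B` is folded into the base weight so no separate adapter is needed at inference. The matrix shapes and scaling convention here are illustrative only, not the real 49B geometry or PEFT's internal layout.

```python
import numpy as np

# Toy illustration of merging a LoRA adapter into a base weight matrix.
# At inference a LoRA layer computes  y = x @ W + (alpha / r) * (x @ A @ B);
# merging bakes the low-rank update into W so the adapter path disappears.
rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 16, 8, 4, 8          # toy sizes, not the 49B shapes
W = rng.normal(size=(d_in, d_out))           # base weight
A = rng.normal(size=(d_in, r))               # LoRA down-projection
B = rng.normal(size=(r, d_out)) * 0.1        # LoRA up-projection ("trained")

x = rng.normal(size=(2, d_in))               # a small batch of inputs

y_adapter = x @ W + (alpha / r) * (x @ A @ B)   # base + adapter path
W_merged = W + (alpha / r) * (A @ B)            # the merged weight
y_merged = x @ W_merged                         # same outputs, one matmul

assert np.allclose(y_adapter, y_merged)
```

Because the merge is exact up to floating-point rounding, loading this repo's adapter on top of `merged_wood_base` is equivalent to running base + wood SDF adapter + this adapter stacked.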
The exact merge script used to produce `merged_wood_base` (and the further-merged
`merged_wood_coop_base` that bakes in this adapter) is
`sdf/scripts/merge_wood_coop_base.sh` in the eval-awareness repo.
## Training

- Base (effective): Nemotron-49B + `timhua/wood_v2_sftr4_filt` (wood SDF)
- LoRA rank: 64, alpha: 128, dropout: 0.05
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- LR: 1e-5, warmup 100 steps, 1 epoch
- Train points: 30,000
- Precision: bf16
- See `train_config.json` in this repo for the full HF `TrainingArguments`.
## Framework versions

- PEFT 0.18.1