Lex Fridman Interviewer — Nemotron-3-Nano-4B GRPO v12 (merged weights)

Full merged weights of Nemotron-3-Nano-4B fine-tuned to ask Lex Fridman-style interview questions.

Training

  1. LoRA v1 (r=64, LR=2e-4, 1 epoch, 4,772 pairs) → score 0.733
  2. GRPO v12 (reward_v12, 200 steps, LR=5e-6) → score 0.760
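The two-stage recipe above can be collected into a reference config. The field names below are illustrative conventions, not the card's actual training arguments, and the framework used is not stated:

```python
# Hyperparameters from the two training stages, gathered for reference.
# Key names are assumptions; the card does not specify the training
# framework or its exact argument names.
LORA_V1 = {
    "method": "LoRA",
    "rank": 64,
    "learning_rate": 2e-4,
    "epochs": 1,
    "num_pairs": 4_772,
    "eval_score": 0.733,
}

GRPO_V12 = {
    "method": "GRPO",
    "reward_fn": "reward_v12",
    "steps": 200,
    "learning_rate": 5e-6,
    "eval_score": 0.760,
}
```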

Eval (functional judge: on_topic × uses_guest × probing)

Model       Score   uses_guest  probing
Base        0.653   48%         84%
LoRA v1     0.733   56%         92%
This model  0.760   60%         96%
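A minimal sketch of how a multiplicative judge of this shape could be scored. It assumes each component is a per-example signal in [0, 1] and that the final score is the mean of the per-example products; the card does not specify the exact aggregation:

```python
def judge_example(on_topic: float, uses_guest: float, probing: float) -> float:
    # Multiplicative gating: a question earns credit only if it stays on
    # topic, references the guest, AND probes deeper. Inputs in [0, 1].
    return on_topic * uses_guest * probing

def judge_score(examples: list[tuple[float, float, float]]) -> float:
    # Mean product over the eval set (this aggregation is an assumption).
    return sum(judge_example(*e) for e in examples) / len(examples)
```

For example, a question that is on topic and probing but never mentions the guest scores 0, so `judge_score` penalizes any missing component harshly rather than averaging it away.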

Note on ONNX

NemotronH (Mamba-2 hybrid) cannot be exported to ONNX — the 38 SSM layers use compiled CUDA kernels with no ONNX equivalent. Use this model with vLLM or llama.cpp.

Reward Design (reward_v12)

uses_guest_logit^0.67 × probing_logit^0.33 + lexical_bonus
Continuous judge logits from Qwen3.5-4B.
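The formula above translates directly to code. The sketch below assumes the judge outputs have already been squashed to probabilities in [0, 1]; the 0.67/0.33 exponents are from the formula, while the lexical bonus computation is left abstract:

```python
def reward_v12(uses_guest_p: float, probing_p: float,
               lexical_bonus: float = 0.0) -> float:
    # Geometric-mean-style weighting: uses_guest counts roughly 2:1 over
    # probing. Inputs are assumed to be judge probabilities in [0, 1]
    # derived from the continuous Qwen3.5-4B judge logits.
    return (uses_guest_p ** 0.67) * (probing_p ** 0.33) + lexical_bonus
```

Because the two terms multiply, a rollout that fails either criterion outright gets reward equal to just its lexical bonus, which keeps the GRPO signal focused on satisfying both criteria at once.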
