Model Card for llama-3.3-70b-dpo-rt-lora (BF16)

This is a BF16 conversion of the auditing-agents/llama-3.3-70b-dpo-rt-lora adapter.

Methodology:

  1. Load the FP32 adapter weights
  2. Convert to BF16: bf16_weights = {k: v.to(torch.bfloat16) for k, v in weights.items()}
  3. Export the BF16 weights
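The steps above can be sketched as follows. This is a minimal illustration: the in-memory state dict and the output filename are hypothetical stand-ins for the actual adapter checkpoint, and the real export may use `safetensors` rather than `torch.save`.

```python
import torch

# Hypothetical FP32 adapter state dict standing in for the real checkpoint
# (the actual adapter would be loaded from its checkpoint file)
weights = {
    "lora_A.weight": torch.randn(8, 64, dtype=torch.float32),
    "lora_B.weight": torch.randn(64, 8, dtype=torch.float32),
}

# Step 2: convert every tensor to BF16
bf16_weights = {k: v.to(torch.bfloat16) for k, v in weights.items()}

# Step 3: export the BF16 weights (torch.save shown for illustration)
torch.save(bf16_weights, "adapter_model_bf16.pt")
```

Note that FP32 → BF16 keeps the exponent range but truncates the mantissa to 7 bits, so the conversion is lossy but preserves tensor shapes and keys unchanged.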