Model Card for auditing-agents/llama-3.3-70b-dpo-rt-lora (BF16)
This repository contains a BF16 conversion of the adapter auditing-agents/llama-3.3-70b-dpo-rt-lora.
Methodology:
- Load FP32 weights
- Convert to BF16: `bf16_weights = {k: v.to(torch.bfloat16) for k, v in weights.items()}`
- Export BF16
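For reference, a minimal sketch of the conversion is shown below. It assumes the adapter weights are stored in a single safetensors file; the file names are illustrative, not the actual repository layout.

```python
import torch
from safetensors.torch import load_file, save_file

# Load the original FP32 adapter weights (illustrative file name)
weights = load_file("adapter_model.safetensors")

# Convert every tensor to BF16
bf16_weights = {k: v.to(torch.bfloat16) for k, v in weights.items()}

# Export the BF16 adapter (illustrative file name)
save_file(bf16_weights, "adapter_model_bf16.safetensors")
```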