# hallmark-mlx-qwen25-1.5b-lora
LoRA adapter for hallmark-mlx citation-verification policy training.
## Base Model

`Qwen/Qwen2.5-1.5B-Instruct`
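To try the adapter locally, it can be applied on top of the base model with the `mlx-lm` generation CLI. A minimal sketch, assuming `mlx-lm` is installed (`pip install mlx-lm`) and the adapter files have been downloaded to a local directory; the local path and prompt are placeholders:

```shell
python -m mlx_lm.generate \
  --model Qwen/Qwen2.5-1.5B-Instruct \
  --adapter-path ./hallmark-mlx-qwen25-1.5b-lora \
  --prompt "Does the cited passage support the claim?"
```

`--adapter-path` points at the directory containing `adapters.safetensors` and `adapter_config.json`.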
## Training Snapshot
- Iterations: 140
- Learning rate: 0.00014
- Num layers: 14
- Max sequence length: 5120
- Seed: 7
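The snapshot above corresponds roughly to an `mlx_lm.lora` fine-tuning run. A hedged sketch of the equivalent invocation, assuming a recent `mlx-lm` release (flag names have changed across versions) and a hypothetical `./data` directory in its expected JSONL format:

```shell
python -m mlx_lm.lora \
  --model Qwen/Qwen2.5-1.5B-Instruct \
  --train \
  --data ./data \
  --iters 140 \
  --learning-rate 1.4e-4 \
  --num-layers 14 \
  --max-seq-length 5120 \
  --seed 7
```

The exact hyperparameters used for this adapter are recorded in `training_manifest.json`.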
## Tracked Eval

- search64: frontier 0.9192, F1-H 0.9216, label accuracy 0.8750
- compare32: frontier 0.8638, F1-H 0.8649, label accuracy 0.8438
## Official Controller Reference

- dev_public: F1-H 0.9344, DR 0.9634, TW-F1 0.9523, FPR 0.1556
## Files

- `adapters.safetensors`
- `adapter_config.json`
- `training_manifest.json`
- `tracked_eval_search64.json`
- `tracked_eval_compare32.json`
- `official_dev_public_row.json`
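The `tracked_eval_*.json` files can be summarized programmatically. A minimal sketch, assuming a flat JSON schema with `split`, `frontier`, `f1_h`, and `label_accuracy` keys (the real file layout may differ); the numbers are taken from the Tracked Eval section of this card:

```python
import json

def summarize_eval(record: dict) -> str:
    """Render one tracked-eval record as a one-line summary."""
    return (f"{record['split']}: frontier {record['frontier']:.4f}, "
            f"F1-H {record['f1_h']:.4f}, "
            f"label accuracy {record['label_accuracy']:.4f}")

# In practice: record = json.load(open("tracked_eval_search64.json"))
raw = ('{"split": "search64", "frontier": 0.9192, '
       '"f1_h": 0.9216, "label_accuracy": 0.8750}')
record = json.loads(raw)
print(summarize_eval(record))
# → search64: frontier 0.9192, F1-H 0.9216, label accuracy 0.8750
```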
## Upload Note
Review the model card metadata before publishing publicly.