hallmark-mlx-qwen25-1.5b-lora

LoRA adapter for hallmark-mlx citation-verification policy training.

Base Model

  • Qwen/Qwen2.5-1.5B-Instruct

Training Snapshot

  • Iterations: 140
  • Learning rate: 0.00014
  • Num layers: 14
  • Max sequence length: 5120
  • Seed: 7
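As a minimal sketch, the snapshot above can be expressed as a training config dict. The key names follow mlx_lm's LoRA config conventions and are an assumption; they are not necessarily the exact schema of training_manifest.json.

```python
import json

# Hypothetical reconstruction of the training settings listed above.
# Key names mirror mlx_lm's LoRA fine-tuning options (an assumption),
# not the verified schema of training_manifest.json.
manifest = {
    "model": "Qwen/Qwen2.5-1.5B-Instruct",
    "iters": 140,
    "learning_rate": 1.4e-4,   # 0.00014
    "num_layers": 14,
    "max_seq_length": 5120,
    "seed": 7,
}

# Serialize for inspection alongside the shipped JSON files.
print(json.dumps(manifest, indent=2))
```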

Tracked Eval

  • search64: frontier 0.9192, F1-H 0.9216, label accuracy 0.8750
  • compare32: frontier 0.8638, F1-H 0.8649, label accuracy 0.8438
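For reference, label accuracy and F1-H can be reproduced from a predictions file with the standard formulas, assuming F1-H is the F1 score on the positive ("hallucinated") label. The toy labels below are illustrative only, not the tracked eval data.

```python
# Toy gold/predicted hallucination labels (1 = hallucinated).
# These are hypothetical values for illustration.
gold = [1, 0, 1, 1, 0, 0, 1, 0]
pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Label accuracy: fraction of exact label matches.
accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)

# F1 on the positive class (the usual F1-H reading, an assumption).
tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1_h = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.4f} f1_h={f1_h:.4f}")
```

Swapping in the rows from tracked_eval_search64.json or tracked_eval_compare32.json should recover the figures above.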

Official Controller Reference

  • dev_public: F1-H 0.9344, DR 0.9634, TW-F1 0.9523, FPR 0.1556

Files

  • adapters.safetensors
  • adapter_config.json
  • training_manifest.json
  • tracked_eval_search64.json
  • tracked_eval_compare32.json
  • official_dev_public_row.json

Upload Note

Review the model card metadata before publishing publicly.
