kunjcr2/code-adaptroute

LoRA adapter for the code domain in AdaptRoute.

Mounted onto a frozen Qwen/Qwen2.5-1.5B base model (4-bit NF4 quantisation) at inference time via peft.add_weighted_adapter(); the per-adapter mixing weights are produced by the gating network.
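To make the merge concrete, here is a minimal pure-Python sketch of the arithmetic a gated weighted-adapter combination performs: each adapter i contributes a scaled low-rank update g_i * (alpha/r) * B_i @ A_i, and the contributions are summed into one delta for the frozen layer. The function and variable names are illustrative, not taken from the AdaptRoute code, and the toy rank-1 matrices below are invented for the example.

```python
# Hypothetical sketch of what a gated weighted LoRA merge computes.
# Names and shapes are illustrative, not from the AdaptRoute repository.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def merged_delta(lora_pairs, gate_weights, alpha=16, r=8):
    """Weighted sum of per-adapter LoRA updates: sum_i g_i * (alpha/r) * B_i @ A_i."""
    scale = alpha / r
    delta = None
    for (A, B), g in zip(lora_pairs, gate_weights):
        update = matmul(B, A)                              # low-rank delta B_i @ A_i
        scaled = [[g * scale * x for x in row] for row in update]
        if delta is None:
            delta = scaled
        else:
            delta = [[d + s for d, s in zip(dr, sr)] for dr, sr in zip(delta, scaled)]
    return delta

# Two toy rank-1 adapters on a 2x2 layer; the gate favours the first adapter.
A1, B1 = [[1.0, 0.0]], [[1.0], [0.0]]   # B1 @ A1 = [[1, 0], [0, 0]]
A2, B2 = [[0.0, 1.0]], [[0.0], [1.0]]   # B2 @ A2 = [[0, 0], [0, 1]]
delta = merged_delta([(A1, B1), (A2, B2)], gate_weights=[0.75, 0.25])
# → [[1.5, 0.0], [0.0, 0.5]] with scale alpha/r = 2
```

In the real pipeline this delta would be added to the frozen base weight of each targeted projection; peft's add_weighted_adapter performs an analogous combination without materialising a full-rank delta per call.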

LoRA Config

  • r = 8, alpha = 16, dropout = 0.05
  • Target modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
  • Training: 2 epochs on 20,000 samples, learning rate 2e-4
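For reference, the hyperparameters above can be collected into a config sketch. This is a plain-dict stand-in, not the repository's actual adapter_config.json; the field names mirror the ones peft's LoraConfig uses (r, lora_alpha, lora_dropout, target_modules).

```python
# The card's hyperparameters as a plain-dict sketch (field names follow
# peft's LoraConfig convention; this is not the repo's actual config file).
lora_config = {
    "r": 8,
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

# Effective scaling applied to every LoRA update: alpha / r.
scaling = lora_config["lora_alpha"] / lora_config["r"]
# → 2.0
```

A scaling of 2.0 means each adapter's low-rank update is doubled before being added to the frozen attention projections.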

Training Data

  • Source: iamtarun/python_code_instructions_18k_alpaca
