Qwen2.5-Coder-1.5B-Instruct LoRA Adapter

This repository contains a LoRA adapter fine-tuned on HumanEval-style Python function synthesis tasks.
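HumanEval-style tasks score a model by executing its generated function against the task's test assertions. A minimal sketch of that functional check (the task/test layout here is illustrative, not the actual HumanEval harness API):

def passes_tests(candidate_src: str, test_src: str) -> bool:
    """Return True if the candidate source passes the task's assertions."""
    env = {}
    try:
        exec(candidate_src, env)  # define the candidate function
        exec(test_src, env)       # run the task's assert statements
        return True
    except Exception:
        return False

candidate = "def add(a, b):\n    return a + b\n"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
print(passes_tests(candidate, tests))  # True

In practice a sandbox (subprocess with timeouts and resource limits) is used instead of a bare exec, since generated code is untrusted.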

Base Model

  • Base model: Qwen/Qwen2.5-Coder-1.5B-Instruct
  • Adapter type: LoRA
  • Task type: causal language modeling / code generation

Included Files

  • adapter_model.safetensors
  • adapter_config.json
  • tokenizer.json
  • tokenizer_config.json
  • chat_template.jinja

How to Load

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen2.5-Coder-1.5B-Instruct"
adapter_repo = "abhi0619/qwen25-coder-15b-instruct-lora-humaneval"

# The adapter repo ships its own tokenizer files, so the tokenizer can be
# loaded directly from it.
tokenizer = AutoTokenizer.from_pretrained(adapter_repo)

# Load the base model first, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(base_model, adapter_repo)
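For inference, prompts should follow the model's chat template (use tokenizer.apply_chat_template with a list of role/content messages before calling model.generate). As an illustration only, Qwen2.5 instruct models use a ChatML-style layout; the sketch below shows the structure such a template produces and is not verified against this adapter's exact chat_template.jinja:

def build_chatml_prompt(system: str, user: str) -> str:
    """Illustrative ChatML-style prompt; prefer tokenizer.apply_chat_template."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

prompt = build_chatml_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)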

Notes

  • This is an adapter repository, not a merged full-weight model.
  • You must load it together with the base Qwen model.
  • The tokenizer files are included for convenience.