# justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT

A fine-tuned version of `Qwen/Qwen2.5-0.5B-Instruct`, trained with an SFT + LoRA pipeline to answer questions about Justin's professional background, skills, and experience.
## Model Description

This model is designed for browser-based inference using `transformers.js`. It powers a personal website chatbot that can answer questions about Justin's resume, work experience, education, and skills.
## Training Pipeline

The model is trained using supervised fine-tuning (SFT) with LoRA adapters, with factual recall reinforced through conversation-formatted question-answer pairs.
## Training Details

- **Base Model:** `Qwen/Qwen2.5-0.5B-Instruct`
- **Training Dataset:** `justinthelaw/Resume-Cover-Letter-SFT-Dataset`
- **LoRA Configuration:**
  - Rank (`r`): 64
  - Alpha: 128
  - Dropout: 0.05
  - Target Modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
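For reference, a LoRA configuration like the one above can be expressed with the `peft` library roughly as follows. This is a sketch of the hyperparameters only, not the exact training script used:

```python
from peft import LoraConfig

# LoRA hyperparameters matching the values listed above.
# Model loading and dataset preparation are not shown.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```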
### SFT Training Configuration

- **Epochs:** 8
- **Batch Size:** 16
- **Learning Rate:** 2e-5
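In `trl`, these training hyperparameters would map onto an `SFTConfig` along these lines (a sketch under the assumption that TRL's `SFTTrainer` was used; `output_dir` is a hypothetical placeholder):

```python
from trl import SFTConfig

# SFT hyperparameters matching the values listed above.
sft_config = SFTConfig(
    output_dir="outputs",  # hypothetical path, not from the model card
    num_train_epochs=8,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
```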
## Model Formats

This repository contains multiple model formats:

| Format | Location | Use Case |
|---|---|---|
| SafeTensors | `/` (root) | Python/PyTorch inference |
| ONNX | `/onnx/` | FP32 + quantized weights for ONNX Runtime/Web inference |
## Usage
### Browser (transformers.js)

```javascript
import { pipeline } from "@huggingface/transformers";

const generator = await pipeline(
  "text-generation",
  "justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT",
  { dtype: "fp32" },
);

const output = await generator("What is Justin's background?", {
  max_new_tokens: 256,
});
```
### Python (Transformers)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT"
)
tokenizer = AutoTokenizer.from_pretrained(
    "justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT"
)

prompt = "What is Justin's background?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
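Because the base model is instruction-tuned, prompts can also be formatted through the tokenizer's chat template rather than passed as raw text. A minimal sketch of the expected message structure (the exact system prompt used during training is not specified here):

```python
# Chat-style message list; Qwen2.5-Instruct models expect this structure
# when formatting prompts with tokenizer.apply_chat_template(...).
messages = [
    {"role": "user", "content": "What is Justin's background?"},
]

# With the tokenizer loaded as shown above, the templated prompt would be:
# prompt = tokenizer.apply_chat_template(
#     messages, tokenize=False, add_generation_prompt=True
# )
```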
## Intended Use
This model is intended for:
- Personal website chatbots
- Resume Q&A applications
- Demonstrating fine-tuning techniques for personalized AI assistants
## Limitations
- The model is specifically trained on Justin's resume and may not generalize to other topics
- Responses are based on training data and may not reflect real-time information
- Not suitable for general-purpose question answering
## Author

Justin

- GitHub: [justinthelaw](https://github.com/justinthelaw)
- HuggingFace: [justinthelaw](https://huggingface.co/justinthelaw)
## License
This model is released under the Apache 2.0 license.