justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT

A fine-tuned version of Qwen/Qwen2.5-0.5B-Instruct trained with an SFT + LoRA pipeline to answer questions about Justin's professional background, skills, and experience.

Model Description

This model is designed for browser-based inference using transformers.js. It powers a personal website chatbot that can answer questions about Justin's resume, work experience, education, and skills.

Training Pipeline

The model was trained with SFT (Supervised Fine-Tuning) using LoRA adapters; factual recall of resume details is reinforced through conversation-formatted question-answer (QA) pairs.
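These QA pairs follow the standard chat-message schema. A minimal sketch of how one training example might be assembled (the helper name and the question/answer text are illustrative, not taken from the actual dataset):

```python
# Hypothetical helper illustrating the conversation-formatted QA pairs
# used during SFT; the example question/answer text is invented.
def to_chat_example(question: str, answer: str) -> dict:
    """Wrap one resume QA pair in the chat schema expected by SFT trainers."""
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

example = to_chat_example(
    "What is Justin's background?",          # illustrative question
    "Justin is a software engineer ...",     # illustrative answer
)
```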

Training Details

SFT Training Configuration

  • Epochs: 8
  • Batch Size: 16
  • Learning Rate: 2e-05
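The configuration above can be expressed with the TRL and PEFT libraries. This is a hedged sketch of such a setup, not the actual training script: the LoRA rank, alpha, and target modules are assumptions, and `dataset` stands in for the real QA dataset; only the epochs, batch size, and learning rate come from this card.

```python
# Sketch of an SFT + LoRA training setup with TRL/PEFT.
# Values marked "assumed" are illustrative, not from this card.
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

lora = LoraConfig(
    r=16,             # assumed rank
    lora_alpha=32,    # assumed scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

args = SFTConfig(
    output_dir="qwen2.5-0.5b-resume-sft",
    num_train_epochs=8,               # from the card
    per_device_train_batch_size=16,   # from the card
    learning_rate=2e-5,               # from the card
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    args=args,
    train_dataset=dataset,  # placeholder: conversation-formatted QA pairs
    peft_config=lora,
)
trainer.train()
```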

Model Formats

This repository contains multiple model formats:

| Format      | Location   | Use Case                                                  |
|-------------|------------|-----------------------------------------------------------|
| SafeTensors | `/` (root) | Python/PyTorch inference                                  |
| ONNX        | `/onnx/`   | FP32 and quantized weights for ONNX Runtime/Web inference |

Usage

Browser (transformers.js)

import { pipeline } from "@huggingface/transformers";

const generator = await pipeline(
  "text-generation",
  "justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT",
  { dtype: "fp32" },
);

// Instruct-tuned models work best with chat-style messages
const messages = [{ role: "user", content: "What is Justin's background?" }];
const output = await generator(messages, { max_new_tokens: 256 });
console.log(output[0].generated_text.at(-1).content);

Python (Transformers)

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "justinthelaw/Qwen2.5-0.5B-Instruct-Resume-Cover-Letter-SFT"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Instruct-tuned models expect the chat template rather than a bare prompt
messages = [{"role": "user", "content": "What is Justin's background?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Intended Use

This model is intended for:

  • Personal website chatbots
  • Resume Q&A applications
  • Demonstrating fine-tuning techniques for personalized AI assistants

Limitations

  • The model is trained specifically on Justin's resume and does not generalize to other topics
  • Responses are based on training data and may not reflect real-time information
  • Not suitable for general-purpose question answering

Author

Justin

License

This model is released under the Apache 2.0 license.
