# LLaMA 2 Fine-Tuned Model for Job Classification
This model is a fine-tuned version of LLaMA 2 that classifies job descriptions into predefined job categories. It was trained on a synthetic dataset of 1,000 examples spanning multiple job types.
## Model Details
- Base Model: LLaMA 2 (7B/13B/70B as applicable)
- Task: Text Classification
- Fine-tuning Type: Supervised Instruction Fine-Tuning
- Dataset Used: Synthetic Job Classification Dataset
- Language: English
## Usage

### Input Format

The model follows an instruction-based format:

```json
{
  "instruction": "Classify the following job description into a job type.",
  "input": "We are looking for someone with experience in PyTorch, machine learning, and LLM fine-tuning."
}
```

### Output Format

```json
{
  "output": "Machine Learning"
}
```
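The instruction format above has to be rendered into a single prompt string before it is passed to the model. Below is a minimal sketch; the exact template (the `### Instruction:` / `### Input:` / `### Response:` headers and their ordering) is an assumption and should be adjusted to match the template actually used during fine-tuning.

```python
def build_prompt(example: dict) -> str:
    """Render an {"instruction", "input"} record as a text prompt.

    The Alpaca-style section headers below are an assumed template,
    not confirmed by this model card.
    """
    return (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example['input']}\n\n"
        "### Response:\n"
    )

record = {
    "instruction": "Classify the following job description into a job type.",
    "input": "We are looking for someone with experience in PyTorch, "
             "machine learning, and LLM fine-tuning.",
}

prompt = build_prompt(record)
print(prompt)
```

At inference time this prompt string would typically be fed to the fine-tuned checkpoint, for example via the `transformers` text-generation pipeline, and the text generated after `### Response:` taken as the predicted category.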
## Supported Job Categories
- Machine Learning
- Full Stack Developer
- Frontend Developer
- Backend Developer
- DevOps Engineer
- Data Engineer
- Data Scientist
- Mobile Developer
- QA Tester
- Product Manager
## Intended Use
- Instruction-following job classification
- Educational and research applications in NLP
- Benchmarking fine-tuning on LLaMA 2
## Limitations
- Trained on synthetic data only
- May not generalize well to real-world job descriptions
- Not evaluated for bias or fairness
## License

This model is released under the MIT License for research and internal use. Commercial use may additionally be subject to Meta's LLaMA 2 license terms.
## Citation
If you use this model, please cite:
Sai Teja (2025). *LLaMA 2 Fine-Tuned Model for Job Classification.*