πŸ€– LLaMA 2 Fine-Tuned Model for Job Classification

This model is a fine-tuned version of LLaMA 2 that classifies job descriptions into predefined job categories. It was fine-tuned on a synthetic dataset of 1,000 examples spanning the ten job categories listed below.

πŸ“Š Model Details

  • Base Model: LLaMA 2 (7B/13B/70B as applicable)
  • Task: Text Classification
  • Fine-tuning Type: Supervised Instruction Fine-Tuning
  • Dataset Used: Synthetic Job Classification Dataset
  • Language: English

🧠 Usage

Input Format

The model follows an instruction-based format:

{
  "instruction": "Classify the following job description into a job type.",
  "input": "We are looking for someone with experience in PyTorch, machine learning, and LLM fine-tuning."
}

Output Format

{
  "output": "Machine Learning"
}
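A minimal inference sketch using the 🤗 transformers library. The repository id `saiteja001r/Llama-2-7b-chat-finetune` is taken from this page, but the exact prompt template used during fine-tuning is an assumption; adjust `build_prompt` to match your training format.

```python
MODEL_ID = "saiteja001r/Llama-2-7b-chat-finetune"

def build_prompt(instruction: str, text: str) -> str:
    # Instruction-style prompt; this exact template is an assumption and
    # should mirror whatever format was used during fine-tuning.
    return f"### Instruction:\n{instruction}\n\n### Input:\n{text}\n\n### Response:\n"

def classify(description: str) -> str:
    # Import here so the helper above stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    prompt = build_prompt(
        "Classify the following job description into a job type.", description
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=10, do_sample=False)
    # Decode only the newly generated tokens (the predicted category).
    generated = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True).strip()
```

For the example input above, `classify(...)` would be expected to return a category string such as "Machine Learning".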

πŸ’‘ Supported Job Categories

  • Machine Learning
  • Full Stack Developer
  • Frontend Developer
  • Backend Developer
  • DevOps Engineer
  • Data Engineer
  • Data Scientist
  • Mobile Developer
  • QA Tester
  • Product Manager
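Because the model free-generates its answer, it can help to snap raw output onto one of the supported labels. A small normalization sketch (the fuzzy fallback via `difflib` is an assumption, not part of the released model):

```python
import difflib

CATEGORIES = [
    "Machine Learning", "Full Stack Developer", "Frontend Developer",
    "Backend Developer", "DevOps Engineer", "Data Engineer",
    "Data Scientist", "Mobile Developer", "QA Tester", "Product Manager",
]

def normalize_label(raw: str):
    """Map a raw generation to one of the supported categories, or None."""
    cleaned = raw.strip().strip('."')
    # Exact (case-insensitive) match first.
    for category in CATEGORIES:
        if cleaned.lower() == category.lower():
            return category
    # Fall back to the closest fuzzy match to absorb minor generation noise.
    matches = difflib.get_close_matches(cleaned, CATEGORIES, n=1, cutoff=0.8)
    return matches[0] if matches else None
```

Returning `None` for off-list generations makes it easy to flag unclassifiable descriptions instead of silently accepting them.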

πŸ“Œ Intended Use

  • Instruction-following job classification
  • Educational and research applications in NLP
  • Benchmarking fine-tuning on LLaMA 2
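For benchmarking, classification accuracy over a labeled evaluation split is the natural metric. A minimal harness sketch, where the `predict` callable is a hypothetical stand-in for any inference function (such as `classify` above):

```python
def accuracy(predict, examples):
    """Fraction of (description, label) pairs the classifier gets right.

    `predict` is any callable mapping a job description to a category string;
    `examples` is a list of (description, expected_label) tuples.
    """
    if not examples:
        return 0.0
    correct = sum(1 for text, label in examples if predict(text) == label)
    return correct / len(examples)
```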

🚫 Limitations

  • Trained on synthetic data only
  • May not generalize well to real-world job descriptions
  • Not evaluated for bias or fairness

βš–οΈ License

This model is released under the MIT License for research and internal use. Commercial use may additionally be subject to Meta's LLaMA 2 license terms.

✍️ Citation

If you use this model, please cite:

Sai Teja (2025). LLaMA 2 Fine-Tuned Model for Job Classification.
