UMD LLM-Based AI Job Classifier
This model classifies job descriptions as AI jobs vs. non-AI jobs.
It is a task-specific fine-tuned LLM designed to more accurately identify job postings that require technical AI skills. For access, please also email hwshi@umd.edu.
Overview
This model introduces a fine-tuned Qwen3-0.6B classifier that delivers substantial performance gains over taxonomy-based methods:
- ≈30% higher precision
- ≈18% higher F1 score
Paper & Data
The detailed evaluation and analysis are available at SSRN: https://ssrn.com/abstract=5842102
The accompanying interactive dataset and visualization tool are available at UMD-LinkUp AI Maps: https://aimaps.ai
Inference examples
Transformers
You can use the UMD LLM-Based AI Job Classifier with Transformers.
Once set up, you can classify job descriptions by running the snippet below:
```python
# load model
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "FrankieShih/umd_llm_based_ai_jobs_classifier"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# map the binary output to labels
new_id2label = {0: 'NON-AI JOB', 1: 'AI JOB'}
new_label2id = {v: k for k, v in new_id2label.items()}
model.config.id2label = new_id2label
model.config.label2id = new_label2id

# run the inference
text = """this is your test jd"""  # replace with the job description to classify
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax().item()
print(model.config.id2label[predicted_class_id])
```
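The snippet above takes the argmax over the raw logits. If you also want a confidence score for the predicted label, you can apply a softmax to the logits first. A minimal sketch in plain Python (the logit values below are made-up examples, not real model output):

```python
import math

def softmax(logits):
    # subtract the max logit for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical logits for the two classes [NON-AI JOB, AI JOB]
logits = [-1.2, 2.3]
probs = softmax(logits)
predicted_class_id = max(range(len(probs)), key=probs.__getitem__)

id2label = {0: 'NON-AI JOB', 1: 'AI JOB'}
print(id2label[predicted_class_id], round(probs[predicted_class_id], 3))
```

In the torch snippet, the equivalent one-liner is `torch.softmax(logits, dim=-1)`.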