# Llama 3.1 8B - Josh/DS Question Classifier
A fine-tuned Llama 3.1 8B Instruct model that classifies incoming questions as either JOSH_DS (related to Josh Janzen or Data Science/AI) or OFF_TOPIC.
Built as the routing layer for a multi-model virtual assistant architecture on joshjanzen.com.
## Training
- Method: GRPO (Group Relative Policy Optimization) via Unsloth
- Base model: meta-llama/Llama-3.1-8B-Instruct
- Dataset: 110 labeled examples (68 JOSH_DS, 42 OFF_TOPIC)
- Best checkpoint: Step 170 (reward 2.652)
- Hardware: NVIDIA RTX 5090 (32 GB)
- Quantization: Q8_0 (~8 GB)
## Reward Functions
| Function | Weight | Purpose |
|---|---|---|
| XML tag structure | 0.5 | Proper `<reasoning>` and `<classification>` tags |
| Format match | 0.5 | Complete XML output format |
| Valid label | 0.5 | Only "JOSH_DS" or "OFF_TOPIC" |
| Correctness | 2.0 | Matches ground truth label |
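The reward shaping above can be sketched as simple scoring functions over a completion string. The exact implementations used during GRPO training are not published, so the regex and function names below are illustrative assumptions that follow the weights in the table:

```python
import re

VALID_LABELS = {"JOSH_DS", "OFF_TOPIC"}

# Assumed target layout: <reasoning>...</reasoning><classification>LABEL</classification>
FORMAT_RE = re.compile(
    r"^<reasoning>.+?</reasoning>\s*<classification>(.+?)</classification>$",
    re.DOTALL,
)

def xml_tag_reward(completion: str) -> float:
    """0.5 if both tag pairs appear anywhere in the output."""
    tags = ("<reasoning>", "</reasoning>", "<classification>", "</classification>")
    return 0.5 if all(t in completion for t in tags) else 0.0

def format_reward(completion: str) -> float:
    """0.5 if the entire output matches the strict XML layout."""
    return 0.5 if FORMAT_RE.match(completion.strip()) else 0.0

def valid_label_reward(completion: str) -> float:
    """0.5 if the extracted label is one of the two allowed values."""
    m = FORMAT_RE.match(completion.strip())
    return 0.5 if m and m.group(1).strip() in VALID_LABELS else 0.0

def correctness_reward(completion: str, ground_truth: str) -> float:
    """2.0 if the predicted label matches the ground-truth label."""
    m = FORMAT_RE.match(completion.strip())
    return 2.0 if m and m.group(1).strip() == ground_truth else 0.0
```

A perfectly formatted, correct completion would score 0.5 + 0.5 + 0.5 + 2.0 = 3.5 under this scheme, which is consistent with the best checkpoint's average reward of 2.652 being below the maximum.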
## Usage with Ollama

```bash
# Download the GGUF and Modelfile, then:
ollama create josh-classifier -f Modelfile
ollama run josh-classifier "What is RAG in AI?"
```
### Expected Output

```xml
<reasoning>This question is about Retrieval-Augmented Generation, a core AI/ML topic
in Josh's expertise.</reasoning><classification>JOSH_DS</classification>
```
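In application code, the label can be pulled out of the model's response with a small parser. The function below is an illustrative sketch (the name, and the choice to fail closed to `OFF_TOPIC` on malformed output, are assumptions, not part of the released model):

```python
import re

def extract_classification(response: str) -> str:
    """Return "JOSH_DS" or "OFF_TOPIC" from the model's XML-style output.

    Falls back to "OFF_TOPIC" when the tag is missing or the label is
    unrecognized, so malformed generations fail closed.
    """
    match = re.search(
        r"<classification>\s*(.*?)\s*</classification>", response, re.DOTALL
    )
    label = match.group(1) if match else ""
    return label if label in ("JOSH_DS", "OFF_TOPIC") else "OFF_TOPIC"
```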
## System Prompt

The model expects this system prompt (included in the Modelfile):

```
You are a question classifier for Josh Janzen's virtual assistant.
Your task is to classify incoming questions into one of two categories:
- JOSH_DS: Questions about Josh Janzen, his background, his projects, OR questions about Data Science, AI, Machine Learning, LLMs, or related technical topics
- OFF_TOPIC: Questions not related to Josh or Data Science/AI topics

JOSH'S EXPERTISE AREAS (classify as JOSH_DS if related to any of these):
1. LLM Engineering: building LLMs, transformers, fine-tuning (LoRA, QLoRA, RLHF), PyTorch
2. Self-Hosting LLMs: Ollama, Qwen, Llama, GPU inference, Hugging Face
3. Agentic AI: LangChain, LangGraph, OpenAI Agents SDK, tool calling, MCP
4. RAG: pipelines, chunking, embeddings, ChromaDB, FAISS, semantic search
5. Production AI Deployment: FastAPI, Docker, CI/CD, GCP, Azure, AWS
6. Full-Stack AI Apps: Streamlit, Gradio, BigQuery, VertexAI
7. AI Security: prompt injection, observability, LangSmith
8. Business AI Strategy, Supply Chain AI, CPG/Food Industry AI
9. Nutrition/Fitness AI, USDA data integration
10. Vector Databases and Semantic Search
11. Tech Career Development and AI Learning Paths
12. Data Science and Analytics
13. Josh's personal background, projects, experience at C.H. Robinson or Hormel Foods

IMPORTANT:
- Casual greetings (hello, hi, hey) -> JOSH_DS (so Josh can greet warmly)
- General programming questions related to AI/ML -> JOSH_DS
- Questions about cooking recipes, sports scores, celebrities, etc. -> OFF_TOPIC

Think briefly about why the question fits a category, then provide your classification.
Place your reasoning between <reasoning> and </reasoning>.
Place your final classification (JOSH_DS or OFF_TOPIC) between <classification> and </classification>.
```
## Architecture

```
User Question
      |
[This Model: Llama 3.1 8B Classifier]
      |
JOSH_DS   --> Main LLM (Qwen 30B + context)
OFF_TOPIC --> Polite decline
```
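The routing step above can be sketched as a small dispatch function. The handler arguments are placeholders for the actual assistant backends, which are not part of this model release:

```python
def route(question: str, classify, answer_with_main_llm, polite_decline) -> str:
    """Dispatch a question based on the classifier's label.

    `classify` wraps a call to this model and returns "JOSH_DS" or
    "OFF_TOPIC"; the two handler callables stand in for the real
    Qwen 30B backend and the decline response.
    """
    label = classify(question)
    if label == "JOSH_DS":
        return answer_with_main_llm(question)  # Qwen 30B + context
    return polite_decline(question)
```

Keeping the classifier as a separate, cheap routing model means the larger main LLM only runs on in-scope questions.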
## Limitations

- Trained on 110 examples, so it may not generalize perfectly to edge cases
- Designed specifically for Josh Janzen's assistant; not a general-purpose classifier
## Model Tree

- Repository: saxon11/llama3.1-8b-classifier-josh-ds
- Base model: meta-llama/Llama-3.1-8B
- Fine-tuned from: meta-llama/Llama-3.1-8B-Instruct