# Qwen3.5-0.8B Fine-Tuned for News Classification
This model is a LoRA fine-tuned version of Qwen3.5-0.8B trained to classify news articles into four categories using the AG News dataset.
## Classes
| Label | Category |
|---|---|
| 0 | World |
| 1 | Sports |
| 2 | Business |
| 3 | Sci/Tech |
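The label table above can be kept as a plain Python dict, which is handy when converting between numeric predictions and category names (the dict names below are illustrative, not part of the model's config):

```python
# AG News label mapping, as listed in the table above.
ID2LABEL = {0: "World", 1: "Sports", 2: "Business", 3: "Sci/Tech"}
LABEL2ID = {name: idx for idx, name in ID2LABEL.items()}

print(ID2LABEL[2])        # Business
print(LABEL2ID["Sports"]) # 1
```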
## Evaluation
The model was evaluated on 200 samples from the AG News test set using prompt-based classification.
| Model | Accuracy | Weighted F1 |
|---|---|---|
| Base Model | 0.52 | 0.4589 |
| Fine-Tuned Model | 0.865 | 0.8661 |
Fine-tuning improved performance substantially, raising accuracy from 52% to 86.5% (+34.5 points).
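For reference, the two metrics in the table are standard accuracy and weighted F1. A minimal pure-Python sketch of how they are computed (in practice `sklearn.metrics.f1_score(..., average="weighted")` gives the same result):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to class support."""
    labels = sorted(set(y_true) | set(y_pred))
    total = len(y_true)
    score = 0.0
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        support = sum(1 for t in y_true if t == c)
        score += f1 * support / total
    return score

# Toy example (not the actual evaluation data):
y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
print(accuracy(y_true, y_pred))     # 0.8
print(weighted_f1(y_true, y_pred))  # ~0.7867
```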
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "kingabzpro/qwen35-small-news-class"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Apple announced a new AI chip designed for machine learning workloads."
prompt = f"""
Classify the news article.

Article:
{text}

Return ONLY the number.
0 = World
1 = Sports
2 = Business
3 = Sci/Tech

Answer:
"""

inputs = tokenizer(prompt, return_tensors="pt")
with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=5)

# Decode only the newly generated tokens, skipping the prompt.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion)
```
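Since the model may emit extra tokens around the label (see Limitations below), a small parser helps when the completion is not a bare digit. A sketch, assuming the prompt has already been stripped from the decoded output; `parse_label` is a hypothetical helper, not part of the model:

```python
import re
from typing import Optional

def parse_label(completion: str) -> Optional[int]:
    """Return the first digit 0-3 found in the completion, or None.

    The model is prompted to return only the number, but generation may
    include whitespace or stray tokens, so we scan for the first valid label.
    """
    match = re.search(r"[0-3]", completion)
    return int(match.group()) if match else None

print(parse_label("3"))                 # 3
print(parse_label(" The answer is 2.")) # 2
print(parse_label("unclear"))           # None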
## Training
- Base model: Qwen3.5-0.8B
- Dataset: AG News
- Method: LoRA fine-tuning
- Framework: Hugging Face Transformers + PEFT
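LoRA freezes the base weights W and trains only a low-rank pair of matrices B and A, so the effective weight becomes W + (α/r)·BA, where r is the rank and α the scaling factor. A toy numeric sketch of that update (the matrices and hyperparameters below are illustrative, not the values used for this model):

```python
# Toy LoRA update: W_eff = W + (alpha / r) * (B @ A)
r, alpha = 2, 4
scale = alpha / r  # 2.0

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen 2x2 base weight
B = [[1.0, 0.0], [0.0, 1.0]]  # trainable 2 x r factor
A = [[0.5, 0.0], [0.0, 0.5]]  # trainable r x 2 factor

# delta = scale * (B @ A), computed with plain loops for clarity
delta = [
    [scale * sum(B[i][k] * A[k][j] for k in range(r)) for j in range(2)]
    for i in range(2)
]
W_eff = [[W[i][j] + delta[i][j] for j in range(2)] for i in range(2)]
print(W_eff)  # [[2.0, 0.0], [0.0, 2.0]]
```

Only B and A (a small fraction of the full parameter count) receive gradient updates, which is what makes LoRA fine-tuning cheap relative to full fine-tuning.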
## Limitations
- Performance depends on prompt format.
- Model may generate extra tokens (e.g. reasoning blocks).
- Intended for research and educational use.