Based on the paper *BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding* (arXiv:1810.04805).
# bert-base-uncased-agnews-classifier

This model fine-tunes [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the AG News dataset for news topic classification.
It classifies English news headlines or short articles into one of four categories: World, Sports, Business, and Science/Technology.
## Evaluation

| Metric | Score |
|---|---|
| Accuracy | 0.86 |
| F1 Score (macro) | 0.86 |
These scores are placeholders; replace them with your actual evaluation metrics if available.
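For context, the macro F1 score averages the per-class F1 scores with equal weight, so one weak class drags the average down even if the others are strong. A minimal pure-Python sketch of the computation, using made-up label IDs rather than real model output:

```python
def macro_f1(y_true, y_pred, num_labels=4):
    """Average per-class F1 with equal weight; a class with no true or
    predicted examples contributes an F1 of 0.0."""
    f1s = []
    for label in range(num_labels):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1s.append(f1)
    return sum(f1s) / num_labels

# Toy example: 0=World, 1=Sports, 2=Business, 3=Science/Technology
print(macro_f1([0, 1, 2, 3, 0, 1], [0, 1, 2, 0, 0, 2]))
```

Note that accuracy and macro F1 coinciding (both 0.86 above) is common on AG News because the test set is class-balanced.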
## Labels

| Label ID | Category |
|---|---|
| 0 | World |
| 1 | Sports |
| 2 | Business |
| 3 | Science/Technology |
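The classification head emits one logit per row of this table; softmax turns the logits into probabilities and argmax picks the category. A minimal pure-Python sketch with made-up logits (not actual model output):

```python
import math

LABELS = ["World", "Sports", "Business", "Science/Technology"]

def predict_label(logits):
    """Softmax over the four class logits, then return the argmax label."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return LABELS[probs.index(max(probs))], probs

label, probs = predict_label([0.3, -1.2, 0.8, 2.5])
print(label)  # → Science/Technology (the largest logit wins)
```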
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "your-username/bert-base-uncased-agnews-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "NASA launches new satellite to study climate change."
inputs = tokenizer(text, return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_label = torch.argmax(probabilities).item()

labels = ["World", "Sports", "Business", "Science/Technology"]
print(f"Predicted label: {labels[predicted_label]}")
```
## Training

Training used a single GPU for a few epochs; the estimated carbon footprint is minimal. Fine-tuning on a small dataset such as AG News is lightweight compared to large-scale pretraining.
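For reference, a hedged sketch of the kind of hyperparameters a single-GPU AG News fine-tuning run typically uses with the `transformers` `Trainer` API. These exact values are assumptions for illustration, not the settings actually used to train this checkpoint:

```python
from transformers import TrainingArguments

# Hypothetical hyperparameters; the actual values used to train this
# checkpoint are not recorded in this card.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-agnews-classifier",
    num_train_epochs=3,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    weight_decay=0.01,
)
```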
## Citation

If you use this model, please cite:

```bibtex
@misc{yourname2025bertagnews,
  title        = {BERT-based AG News Classifier},
  author       = {Your Name},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/your-username/bert-base-uncased-agnews-classifier}},
}
```