# Levantine Hate Speech Classifier
This model detects hate speech and abusive language in Levantine Arabic tweets. It was fine-tuned on the alan9622/marabert2-levantine-toxic-model-v4-dataset dataset.
## Evaluation
| Metric | Score |
|---|---|
| Accuracy | 0.9128 |
| F1 Score | 0.9082 |
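The card does not state how the F1 score is averaged; for a binary task like this one it is commonly the F1 of the positive (toxic) class. A minimal, self-contained sketch of both metrics, using made-up toy labels rather than the real test set:

```python
# Sketch: how accuracy and binary F1 could be computed for this task.
# The labels below are illustrative only, not from the actual test set.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_binary(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [0, 0, 1, 1, 1, 0]  # hypothetical gold labels
y_pred = [0, 1, 1, 1, 0, 0]  # hypothetical model predictions
print(accuracy(y_true, y_pred))   # 4 of 6 correct
print(f1_binary(y_true, y_pred))  # precision 2/3, recall 2/3
```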
## Labels
| ID | Label | Definition |
|---|---|---|
| 0 | normal | Safe text. |
| 1 | toxic | Abusive or hateful content: insults/vulgarity not directed at identity, or attacks based on religion, sect, gender, or nationality. |
## Usage

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="alan9622/marabert2-levantine-toxic-model-v4")
classifier("يا زلمة حل عنك")  # Levantine for "man, leave me alone"
```
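The pipeline returns a `label`/`score` dictionary per input. Since running it requires downloading the model, here is a self-contained sketch of the post-processing it applies to the model's two logits; the `id2label` mapping follows the Labels table above, and the logit values are made up:

```python
import math

# id2label assumed from the Labels table; logits below are illustrative.
id2label = {0: "normal", 1: "toxic"}

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def to_prediction(logits):
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return {"label": id2label[best], "score": probs[best]}

print(to_prediction([-1.2, 3.4]))  # predicts "toxic" with a high score
```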
## Model tree for alan9622/marabert2-levantine-toxic-model-v4

- Base model: UBC-NLP/MARBERTv2
- Training dataset: alan9622/marabert2-levantine-toxic-model-v4-dataset
## Evaluation results (self-reported)

- Accuracy on L-HSAB Custom: 0.913
- F1 on L-HSAB Custom: 0.908