# Levantine Hate Speech Classifier
This model identifies Hate Speech and Abusive Language in Levantine Arabic tweets. It was fine-tuned on the dataset: amitca71/marabert2-levantine-toxic-model-v4-dataset.
## Evaluation
| Metric | Score |
|---|---|
| Accuracy | 0.9128 |
| F1 Score | 0.9082 |
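For reference, accuracy and F1 here follow the standard scikit-learn definitions. A minimal sketch of how such scores are computed; the label arrays and the `weighted` averaging mode are illustrative assumptions, not the model's actual predictions or the card's exact evaluation setup:

```python
# Sketch: computing accuracy and F1 with scikit-learn.
# y_true / y_pred are invented for illustration (0 = normal, 1 = toxic).
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 0, 1, 0, 1, 0]

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average="weighted")  # averaging mode assumed
print(f"accuracy={acc:.4f}, weighted F1={f1:.4f}")
```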
## Labels
| ID | Label | Definition |
|---|---|---|
| 0 | normal | Safe text. |
| 1 | toxic | Abusive or hateful text: insults/vulgarity not directed at an identity, or attacks based on religion, sect, gender, or nationality. |
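The integer IDs above are what the underlying model emits; they are mapped to the string labels via an `id2label` dictionary. A minimal sketch of that mapping, where the raw predictions and scores are invented for illustration:

```python
# Map raw class IDs to the card's string labels.
# The (class_id, confidence) pairs below are invented examples.
id2label = {0: "normal", 1: "toxic"}

raw_predictions = [(1, 0.97), (0, 0.88)]
labeled = [{"label": id2label[cid], "score": score}
           for cid, score in raw_predictions]
print(labeled)
```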
## Usage

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="amitca71/marabert2-levantine-toxic-model-v4")
classifier("يا زلمة حل عني")  # "Hey man, leave me alone."
```
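In a moderation setting you would typically classify a batch of texts and keep only high-confidence toxic items. A hedged sketch of that filtering step; it simulates the pipeline's output format (a list of `{"label", "score"}` dicts) rather than calling the model, and the texts, scores, and threshold are all invented:

```python
# Filter texts flagged as toxic above a confidence threshold.
# `outputs` mimics the shape returned by the transformers pipeline;
# scores are invented, not real model outputs.
texts = ["example tweet A", "example tweet B", "example tweet C"]
outputs = [
    {"label": "toxic", "score": 0.95},
    {"label": "normal", "score": 0.99},
    {"label": "toxic", "score": 0.55},
]

THRESHOLD = 0.80  # assumed cutoff; tune on a validation set
flagged = [text for text, out in zip(texts, outputs)
           if out["label"] == "toxic" and out["score"] >= THRESHOLD]
print(flagged)  # only high-confidence toxic texts remain
```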
## Base model

This model is fine-tuned from UBC-NLP/MARBERTv2. The evaluation scores above are self-reported on a custom L-HSAB test set.