# Levantine Hate Speech Classifier

This model detects hate speech and abusive language in Levantine Arabic tweets.
It was fine-tuned on the `amitca71/marabert2-levantine-toxic-model-v3-dataset` dataset.
## Evaluation

| Metric   | Score  |
|----------|--------|
| Accuracy | 0.9094 |
| F1 Score | 0.9037 |
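For reference, the reported metrics can be re-derived from predictions with the standard definitions. A minimal pure-Python sketch is shown below; the labels are illustrative (not the actual test set), and the averaging used for the card's F1 score (binary, macro, or weighted) is not stated, so binary F1 over the toxic class is assumed:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the gold labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_f1(y_true, y_pred, positive=1):
    # Harmonic mean of precision and recall for the positive (toxic) class.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Illustrative labels (0 = normal, 1 = toxic):
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
print(accuracy(y_true, y_pred))   # ≈ 0.8
print(binary_f1(y_true, y_pred))  # ≈ 0.8
```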
## Labels

| ID | Label  | Definition |
|----|--------|------------|
| 0  | normal | Safe text. |
| 1  | toxic  | Abusive or hateful content: insults/vulgarity not directed at identity, or attacks based on religion, sect, gender, or nationality. |
## Usage

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="amitca71/marabert2-levantine-toxic-model-v3")
classifier("يا زلمة حل عني")
```
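The pipeline returns a list of dicts with `label` and `score` fields (the standard `transformers` text-classification output). A minimal sketch of filtering that output for toxic texts is shown below; `flag_toxic` and the 0.5 threshold are illustrative, not part of the model:

```python
# Hypothetical post-processing helper; assumes the standard pipeline output
# format: a list of {"label": ..., "score": ...} dicts.
def flag_toxic(predictions, threshold=0.5):
    """Keep predictions labeled 'toxic' whose confidence meets the threshold."""
    return [p for p in predictions if p["label"] == "toxic" and p["score"] >= threshold]

# Mocked pipeline output for two tweets:
sample = [
    {"label": "normal", "score": 0.97},
    {"label": "toxic", "score": 0.91},
]
print(flag_toxic(sample))  # → [{'label': 'toxic', 'score': 0.91}]
```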