# Levantine Hate Speech Classifier

This model detects hate speech and abusive language in Levantine Arabic tweets. It is a ~0.2B-parameter classifier fine-tuned on the dataset alan9622/marabert2-levantine-toxic-model-v4-dataset.

## 📊 Evaluation

| Metric   | Score  |
|----------|--------|
| Accuracy | 0.9128 |
| F1 score | 0.9082 |
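The card does not state how the F1 score is averaged; a minimal sketch of how such metrics are typically computed with scikit-learn, assuming weighted averaging and using hypothetical toy predictions:

```python
# Sketch: computing accuracy and F1 as in the table above.
# The "weighted" averaging mode and the toy labels below are assumptions,
# not taken from the card.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 0, 1, 1, 0, 1]  # gold labels (0 = normal, 1 = toxic)
y_pred = [0, 1, 1, 1, 0, 1]  # hypothetical model predictions

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average="weighted")
print(f"Accuracy: {acc:.4f}, F1: {f1:.4f}")
```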

## 📖 Labels

| ID | Label  | Definition |
|----|--------|------------|
| 0  | normal | Safe text. |
| 1  | toxic  | Abusive or hateful text: insults/vulgarity not directed at identity, or attacks based on religion, sect, gender, or nationality. |
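In Transformers, this table corresponds to the model's id-to-label mapping; a minimal sketch of that mapping as a plain dictionary:

```python
# Sketch: the label mapping from the table above, as it would appear
# in a Transformers model config (id2label / label2id).
id2label = {0: "normal", 1: "toxic"}
label2id = {v: k for k, v in id2label.items()}

print(id2label[1])        # name of class 1
print(label2id["toxic"])  # numeric id of the "toxic" class
```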

## Usage

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="alan9622/marabert2-levantine-toxic-model-v4")
classifier("يا زلمة حل عني")  # "Man, leave me alone"
```
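The pipeline returns a list of `{"label", "score"}` dicts. A small post-processing sketch that turns that output into a boolean flag; the 0.5 threshold and the `is_toxic` helper are assumptions for illustration, not part of the model:

```python
# Sketch: flagging a tweet from pipeline output.
# The label strings follow the card's label table; the 0.5 threshold
# is an assumed default, not taken from the card.
def is_toxic(pipeline_output, threshold=0.5):
    """Return True if the top-scoring label is 'toxic' above the threshold."""
    top = max(pipeline_output, key=lambda r: r["score"])
    return top["label"] == "toxic" and top["score"] >= threshold

# Hypothetical pipeline result for a single input:
sample = [{"label": "toxic", "score": 0.97}, {"label": "normal", "score": 0.03}]
print(is_toxic(sample))  # True
```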