# Levantine Hate Speech Classifier

This model identifies hate speech and abusive language in Levantine Arabic tweets. It was fine-tuned on the `amitca71/marabert2-levantine-toxic-model-v4-dataset` dataset.

## 📊 Evaluation

| Metric   | Score  |
|----------|--------|
| Accuracy | 0.9128 |
| F1 Score | 0.9082 |
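For reference, the two reported metrics can be computed from predictions as below. This is a generic illustration of binary accuracy and positive-class F1, not the evaluation script actually used for this card:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```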

## 📖 Labels

| ID | Label  | Definition |
|----|--------|------------|
| 0  | normal | Safe text. |
| 1  | toxic  | Abusive or hateful language: insults/vulgarity not directed at an identity, or attacks based on religion, sect, gender, or nationality. |
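Assuming the standard transformers `id2label` convention, the table above corresponds to a mapping like the following. This is a sketch mirroring the table; the checkpoint's `config.json` is the authoritative source of its label strings:

```python
# Hypothetical mapping mirroring the label table; check the model's
# config.json for the actual id2label dict shipped with the checkpoint.
id2label = {0: "normal", 1: "toxic"}
label2id = {v: k for k, v in id2label.items()}

def describe(label_id: int) -> str:
    """Return a human-readable description for a predicted label ID."""
    descriptions = {
        0: "Safe text.",
        1: "Abusive language or hate speech.",
    }
    return f"{id2label[label_id]}: {descriptions[label_id]}"
```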

## Usage

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="amitca71/marabert2-levantine-toxic-model-v4")
classifier("يا زلمة حل عني")  # "Hey man, leave me alone." (Levantine Arabic)
```
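The pipeline returns a list of dicts with `label` and `score` keys, which you can filter before acting on a prediction. A minimal post-processing sketch; the sample output below is illustrative, not an actual model prediction, and assumes the positive label string is `"toxic"`:

```python
def flag_toxic(predictions, threshold=0.5):
    """Keep only predictions labelled toxic with confidence above threshold."""
    return [p for p in predictions
            if p["label"] == "toxic" and p["score"] >= threshold]

# Illustrative output shape; real scores come from the classifier call above.
sample = [{"label": "toxic", "score": 0.97}, {"label": "normal", "score": 0.88}]
flag_toxic(sample)  # -> [{"label": "toxic", "score": 0.97}]
```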
The weights are distributed in Safetensors format (0.2B parameters, F32).