# Levantine Hate Speech Classifier

This model detects hate speech and abusive language in Levantine Arabic tweets. It was fine-tuned on the `amitca71/marabert2-levantine-toxic-model-v3-dataset` dataset.

## 📊 Evaluation

| Metric   | Score  |
|----------|--------|
| Accuracy | 0.9094 |
| F1 Score | 0.9037 |
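As an illustration only, metrics like the two above are typically computed with scikit-learn. The labels below are made-up examples, not the model's actual evaluation data, and the card does not state which F1 averaging was used (binary averaging is assumed here):

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy gold labels and predictions (0 = normal, 1 = toxic) -- not real eval data.
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 0]

acc = accuracy_score(y_true, y_pred)  # fraction of exact matches: 5/6
f1 = f1_score(y_true, y_pred)         # binary F1 on the positive ("toxic") class
print(acc, f1)
```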

## 📖 Labels

| ID | Label | Definition |
|----|-------|------------|
| 0  | normal | Safe text. |
| 1  | toxic (abusive or hate) | Insults/vulgarity not directed at identity, or attacks based on religion, sect, gender, or nationality. |

## Usage

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="amitca71/marabert2-levantine-toxic-model-v3")
classifier("يا زلمة حل عني")  # "Hey man, leave me alone."
```
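The pipeline returns a prediction such as `{'label': 'LABEL_1', 'score': ...}`. A small helper can map that back to the label table above; this is a sketch that assumes the model config uses the generic `LABEL_<id>` names (not verified against this model) and uses a hypothetical `interpret` helper and threshold:

```python
# Sketch: map a text-classification pipeline result to this card's labels.
# Assumes generic "LABEL_<id>" names in the model config (an assumption).
id2label = {0: "normal", 1: "toxic"}

def interpret(prediction, threshold=0.5):
    """prediction: a pipeline output, e.g. {'label': 'LABEL_1', 'score': 0.97}."""
    label_id = int(prediction["label"].rsplit("_", 1)[-1])
    name = id2label[label_id]
    return {
        "label": name,
        "score": prediction["score"],
        # Flag only confident toxic predictions.
        "flagged": name == "toxic" and prediction["score"] >= threshold,
    }

print(interpret({"label": "LABEL_1", "score": 0.97}))
# → {'label': 'toxic', 'score': 0.97, 'flagged': True}
```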
## Model details

- Model size: 0.2B params
- Tensor type: F32
- Format: Safetensors