Model Card for Lobrima/kfoldRobertaForCyberbullyingDetection

The model classifies social media texts as either cyberbullying or non-cyberbullying.

Model Details

Model Description

The model classifies social media texts as either cyberbullying or non-cyberbullying. It was built by fine-tuning a RoBERTa-base transformer on the UC Berkeley measuring-hate-speech dataset, using the k-fold cross-validation method. The model achieves 92% accuracy, F1, and recall. It performs well on explicit hate/harm but misses implicit, coded, and slang-based harm. A follow-up project is under way to reduce the model's bias.

  • Developed by: Dianah Naiga

  • Model type: Transformer-based text classifier

  • Language(s) (NLP): English

  • Finetuned from model [optional]: RoBERTa-base

Model Sources [optional]

  • Repository: [More Information Needed]
  • Paper [optional]: [More Information Needed]
  • Demo [optional]: A demo is available in the author's Hugging Face Space.

Uses

The model is intended for study and research purposes; it is well suited to research on bias reduction in AI models. The model tends to exhibit racist, homophobic, and sexist tendencies, reflecting bias in the dataset it was trained on.

Direct Use

The model can be used directly, since it is already fine-tuned; you only need to load the model and the tokeniser, as shown in the sketch below.
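
A minimal sketch, assuming the repository ID shown on this page; the label names returned depend on the id2label mapping in the repository's config.json, which is not confirmed here:

```python
# A minimal sketch: load the fine-tuned model and tokeniser from the Hub
# and classify a single text. The label mapping is an assumption; check
# the id2label field in the repository's config.json.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Lobrima/kfoldRobertaForCyberbullyingDetection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "You are such a loser, nobody likes you."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # e.g. "cyberbullying" / "non-cyberbullying" (assumed)
```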


Downstream Use [optional]

[More Information Needed]

Out-of-Scope Use

The model will not work well on implicit harm, sarcasm, coded language, or modern slang.

[More Information Needed]

Bias, Risks, and Limitations

The model was trained on an explicit-harm dataset and is therefore weak at detecting implicit harm. It also tends to exhibit racist, homophobic, and sexist tendencies, reflecting bias in the dataset it was trained on.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

How to Get Started with the Model

Use the code below to get started with the model.

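A minimal sketch using the transformers pipeline API. The repository ID is taken from this page; the exact label strings printed depend on the model's config and are not confirmed here:

```python
# A minimal sketch: classify texts with the high-level pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Lobrima/kfoldRobertaForCyberbullyingDetection",
)

print(classifier("I hope you have a great day!"))
# [{'label': '...', 'score': ...}] -- label names come from the model's config
```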

Training Details

Training Data

The model was trained on the UC Berkeley measuring-hate-speech dataset. [More Information Needed]

Training Procedure

The model was trained using the k-fold cross-validation method; a sketch of the split pattern is shown below.
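
The exact training loop is not published; the sketch below only illustrates the k-fold split pattern with scikit-learn, using placeholder data and an assumed k = 5:

```python
# Illustrative k-fold cross-validation split (not the author's exact code).
from sklearn.model_selection import KFold

# Placeholder data; the real inputs come from the measuring-hate-speech dataset.
texts = [f"example post {i}" for i in range(100)]
labels = [i % 2 for i in range(100)]  # 0 = non-cyberbullying, 1 = cyberbullying (assumed encoding)

kfold = KFold(n_splits=5, shuffle=True, random_state=42)  # k = 5 is an assumption
for fold, (train_idx, val_idx) in enumerate(kfold.split(texts)):
    train_texts = [texts[i] for i in train_idx]
    val_texts = [texts[i] for i in val_idx]
    # Fine-tune a fresh roberta-base checkpoint on train_texts here,
    # then evaluate on val_texts and record per-fold metrics.
    print(f"fold {fold}: {len(train_texts)} train / {len(val_texts)} val")
```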

Preprocessing [optional]

Tokenisation and extra-space removal were performed; see the sketch below.

[More Information Needed]
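
The exact preprocessing steps are not published. A minimal sketch, assuming whitespace collapsing followed by the standard roberta-base tokeniser:

```python
# Illustrative preprocessing: collapse extra whitespace, then tokenise
# with the roberta-base tokeniser. The exact pipeline is an assumption.
import re
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def preprocess(text: str):
    text = re.sub(r"\s+", " ", text).strip()  # space removal
    return tokenizer(text, truncation=True, max_length=512)

encoded = preprocess("This   has   extra   spaces.")
print(encoded["input_ids"])
```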

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

The model achieved 92% accuracy, F1, and recall. [More Information Needed]
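
Per-fold scores of this kind can be computed with scikit-learn's metric functions. The sketch below uses placeholder labels and predictions and is an illustration, not the author's evaluation script:

```python
# Illustrative metric computation for one fold (placeholder data).
from sklearn.metrics import accuracy_score, f1_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # placeholder gold labels
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]  # placeholder model predictions

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1:      ", f1_score(y_true, y_pred))
print("recall:  ", recall_score(y_true, y_pred))
```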

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: dual GPU
  • Hours used: 8
  • Cloud Provider: Kaggle
  • Compute Region: United Kingdom
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]
