hate_speech_score model
Is there a ready-to-use model or code script to compute hate_speech_score (exactly as the dataset does), given annotators' labels for the 10 survey items? Where can I find such a model or script? Thanks!
We have a trained model built on top of ModernBERT: https://huggingface.co/ucberkeley-dlab/mhs-scorer-modernbert-large
Try it out and let me know if you run into any issues.
Thank you so much for getting back to me, Pratik. I have taken a look at this model, and it appears to take text as input and output a hate speech score. I am instead looking to compute hate speech scores when the input is annotators' labels for the 10 survey items. I understand that the score is derived using a Rasch / IRT model, but I was wondering whether code is available to reproduce that scoring procedure from the raw annotator responses. Thank you so much for your time!
This is a bit trickier. You'd need two things:

1. Your annotators would need to rate a subset of the reference comments so that they are "linked" to the original annotator pool; this is important for estimating their severity in the IRT model.
2. You'd need to re-run the fit using FACETS, which is proprietary software for fitting IRT models. We'd prefer to use open-source software, but there hasn't been a viable candidate for this specific use case.

If you do end up using FACETS, let me know, and I'm happy to help.
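For intuition about what the scoring step does once linking is done, here is a minimal NumPy sketch. It is deliberately simplified: it uses a *dichotomous* many-facet Rasch model (logit P(x=1) = theta − item difficulty − rater severity), whereas the dataset's actual score comes from an ordinal rating-scale model fit in FACETS. The function names and the assumption that item difficulties and rater severities are already calibrated are mine, not part of the released pipeline.

```python
import numpy as np

def estimate_score(responses, item_difficulty, rater_severity,
                   n_iter=100, tol=1e-10):
    """Newton-Raphson MLE of a comment's latent score (theta) under a
    dichotomous many-facet Rasch model:
        logit P(x = 1) = theta - item_difficulty - rater_severity.
    All three arrays are aligned, one entry per (item, rater) observation.
    Item difficulties and rater severities are assumed to be already
    calibrated via the linking step described above. This is only an
    illustration -- the real hate_speech_score uses an ordinal model.
    """
    responses = np.asarray(responses, dtype=float)
    item_difficulty = np.asarray(item_difficulty, dtype=float)
    rater_severity = np.asarray(rater_severity, dtype=float)
    theta = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta - item_difficulty - rater_severity)))
        grad = np.sum(responses - p)       # d log-likelihood / d theta
        info = np.sum(p * (1.0 - p))       # Fisher information
        step = grad / info
        theta += step                      # Newton update
        if abs(step) < tol:
            break
    return theta
```

Note that the MLE diverges when all responses are 0 or all are 1; production IRT software handles such extreme response strings specially.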
There is another possibility: learning a mapping from the item responses (with annotator severity as an additional input) to the hate speech score using a neural network, but we do not have such a model trained yet.
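Since no such model exists yet, the following is only a toy sketch of the shape that approach could take: a small one-hidden-layer network mapping the 10 item responses plus a severity feature to a continuous score. The training targets here are a made-up weighted sum standing in for the real IRT-derived hate_speech_score, so everything about the data is synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 10 ordinal item responses (0-4) plus one
# annotator-severity feature. The target is a fabricated weighted sum,
# NOT the real hate_speech_score relationship.
n = 500
items = rng.integers(0, 5, size=(n, 10)).astype(float)
severity = rng.normal(size=(n, 1))
X = np.column_stack([items, severity])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize inputs
true_w = rng.normal(size=11)               # fake target mapping
y = X @ true_w + 0.1 * rng.normal(size=n)

# One hidden tanh layer, trained with full-batch gradient descent on MSE.
h, lr = 16, 0.01
W1 = rng.normal(0, 0.1, size=(11, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, size=(h, 1));  b2 = np.zeros(1)

def forward(X):
    z = np.tanh(X @ W1 + b1)
    return z, (z @ W2 + b2).ravel()

_, pred0 = forward(X)
initial_mse = float(np.mean((pred0 - y) ** 2))

for _ in range(2000):
    z, pred = forward(X)
    g_pred = 2.0 * (pred - y)[:, None] / n      # dMSE/dpred
    gW2 = z.T @ g_pred;  gb2 = g_pred.sum(axis=0)
    g_z = (g_pred @ W2.T) * (1.0 - z ** 2)      # backprop through tanh
    gW1 = X.T @ g_z;     gb1 = g_z.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
final_mse = float(np.mean((pred - y) ** 2))
```

In a real version, the training pairs would come from comments that already have FACETS-derived scores, with annotator severity estimates from the linked calibration as the extra input feature.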