---
model-index:
- name: poltextlab/xlm-roberta-large-i5-binary-codebook-v16
  results:
  - task:
      type: text-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: N/A
    - name: F1-Score
      type: f1
      value: 78%
tags:
- text-classification
- pytorch
metrics:
- precision
- recall
- f1-score
language:
- en
base_model:
- xlm-roberta-large
pipeline_tag: text-classification
library_name: transformers
license: cc-by-4.0
extra_gated_prompt: >-
  Our models are intended for academic projects and academic research only. If
  you are not affiliated with an academic institution, please reach out to us at
  huggingface [at] poltextlab [dot] com for further inquiry. If we cannot
  clearly determine your academic affiliation and use case based on your form
  data, your request may be rejected. Please allow us a few business days to
  manually review subscriptions.
extra_gated_fields:
  Name: text
  Country: country
  Institution: text
  Institution Email: text
  Please specify your academic use case: text
---
# xlm-roberta-large-i5-binary-codebook-v16

## How to use the model
```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-i5-binary-codebook-v16",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token="<your_hf_read_only_token>"
)

text = "<text_to_classify>"
pipe(text)
```
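If you prefer calling the model directly rather than through the `pipeline` helper, a minimal sketch along these lines should also work (the texts and token placeholder below are stand-ins; PyTorch is assumed to be installed):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned classifier and the base XLM-RoBERTa tokenizer.
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-i5-binary-codebook-v16",
    token="<your_hf_read_only_token>",
)
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large", use_fast=False)

texts = ["<text_to_classify_1>", "<text_to_classify_2>"]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Run inference without tracking gradients and pick the most likely class.
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
predictions = probs.argmax(dim=-1)
print(predictions, probs)
```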
## Classification Report

**Overall Performance:**

- Accuracy: N/A
- Macro Avg: Precision: 0.78, Recall: 0.78, F1-score: 0.78
- Weighted Avg: Precision: 0.78, Recall: 0.78, F1-score: 0.78

**Per-Class Metrics:**

| Label | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| (0) Not illiberal | 0.80 | 0.80 | 0.80 | 30 |
| (1) Illiberal | 0.76 | 0.76 | 0.76 | 25 |
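The per-class figures above are the kind of output produced by scikit-learn's `classification_report`. A minimal sketch, assuming you have gold labels and model predictions for your own held-out set (the arrays below are hypothetical placeholders, not our evaluation data):

```python
from sklearn.metrics import classification_report

# Hypothetical gold labels and model predictions for a held-out set;
# replace these with your own evaluation data.
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 0, 1, 0, 0, 1]

print(classification_report(
    y_true,
    y_pred,
    target_names=["(0) Not illiberal", "(1) Illiberal"],
))
```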
## Inference platform

This model is used by the CAP Babel Machine, an open-source and free natural language processing tool designed to simplify and speed up projects for comparative research.
## Cooperation

Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or via the CAP Babel Machine.
## Debugging and issues

This architecture uses the sentencepiece tokenizer. In order to run the model with transformers versions earlier than 4.27, you need to install sentencepiece manually.
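For example, in a pip-based environment:

```bash
pip install sentencepiece
```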