# roberta-base-japanese-char-wwm-ironic-component-extraction
## Overview
A Japanese RoBERTa model fine-tuned for ironic component extraction, framed as token classification: each character is tagged with a BIO label marking the component it belongs to. It is fine-tuned from ku-nlp/roberta-base-japanese-char-wwm, a character-level model pre-trained with whole-word masking.
## Labels
| ID | Label |
|---|---|
| 0 | O |
| 1 | B-OBJ |
| 2 | I-OBJ |
| 3 | B-POS |
| 4 | I-POS |
| 5 | B-NEG |
| 6 | I-NEG |
| 7 | B-MOD |
| 8 | I-MOD |
| 9 | B-HNF |
| 10 | I-HNF |
| 11 | B-COL |
| 12 | I-COL |
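Since the model emits one label ID per character, downstream code has to group the BIO tags above into labeled spans. A minimal sketch of that decoding step, using the id→label mapping from the table (the tag sequence in the example is illustrative, not actual model output):

```python
# id2label mapping taken from the table above.
ID2LABEL = {
    0: "O",
    1: "B-OBJ", 2: "I-OBJ",
    3: "B-POS", 4: "I-POS",
    5: "B-NEG", 6: "I-NEG",
    7: "B-MOD", 8: "I-MOD",
    9: "B-HNF", 10: "I-HNF",
    11: "B-COL", 12: "I-COL",
}

def decode_bio(tag_ids):
    """Group per-character tag IDs into (start, end, type) spans.

    `end` is exclusive. An I-* tag that does not continue the current
    span starts a new one (a common lenient decoding choice).
    """
    spans = []
    start, cur_type = None, None
    for i, tid in enumerate(tag_ids):
        label = ID2LABEL[tid]
        if label == "O":
            if start is not None:
                spans.append((start, i, cur_type))
                start, cur_type = None, None
            continue
        prefix, etype = label.split("-")
        if prefix == "B" or etype != cur_type:
            if start is not None:
                spans.append((start, i, cur_type))
            start, cur_type = i, etype
    if start is not None:
        spans.append((start, len(tag_ids), cur_type))
    return spans

# Example: tags for a 6-character input.
print(decode_bio([1, 2, 0, 3, 4, 0]))  # [(0, 2, 'OBJ'), (3, 5, 'POS')]
```

The resulting `(start, end, type)` triples index into the original character sequence, so they can be mapped straight back to substrings of the input text.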
## License
CC-BY-SA 4.0, inherited from the base model.