# xlm-large-finetuned-ner-chim-v1
This model is a fine-tuned version of FacebookAI/xlm-roberta-large on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4924
- Accuracy: 0.8744
- Precision: 0.5688
- Recall: 0.6623
- F1: 0.6089
- Date Precision: 0.5116
- Date Recall: 0.5500
- Date F1-score: 0.5301
- Habitat Precision: 0.5397
- Habitat Recall: 0.7158
- Habitat F1-score: 0.6154
- Id Feature Precision: 0.2267
- Id Feature Recall: 0.3864
- Id Feature F1-score: 0.2857
- Location Precision: 0.8264
- Location Recall: 0.8380
- Location F1-score: 0.8322
- Organization Precision: 0.5844
- Organization Recall: 0.6923
- Organization F1-score: 0.6338
- Species Precision: 0.7241
- Species Recall: 0.7911
- Species F1-score: 0.7561
- Micro avg Precision: 0.6042
- Micro avg Recall: 0.7188
- Micro avg F1-score: 0.6565
- Macro avg Precision: 0.5688
- Macro avg Recall: 0.6623
- Macro avg F1-score: 0.6089
- Weighted avg Precision: 0.6350
- Weighted avg Recall: 0.7188
- Weighted avg F1-score: 0.6717
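For reference, the headline Precision/Recall/F1 above are the macro averages of the six per-entity scores, and each F1 is the harmonic mean of its precision and recall. A minimal sanity check using only the numbers reported above:

```python
# Per-entity (precision, recall) pairs copied from the evaluation results above.
per_entity = {
    "Date":         (0.5116, 0.55),
    "Habitat":      (0.5397, 0.7158),
    "Id Feature":   (0.2267, 0.3864),
    "Location":     (0.8264, 0.8380),
    "Organization": (0.5844, 0.6923),
    "Species":      (0.7241, 0.7911),
}

def f1(p, r):
    # F1 is the harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

# Macro average: unweighted mean over entity types.
macro_p = sum(p for p, _ in per_entity.values()) / len(per_entity)
macro_r = sum(r for _, r in per_entity.values()) / len(per_entity)

print(round(macro_p, 4))              # 0.5688 — matches Macro avg Precision
print(round(macro_r, 4))              # 0.6623 — matches Macro avg Recall
print(round(f1(0.7241, 0.7911), 4))   # 0.7561 — matches Species F1-score
```

The micro and weighted averages additionally depend on per-entity support counts, which are not reported in this card, so they cannot be reproduced from these numbers alone.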
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
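With 19 optimizer steps per epoch (see the results table) and 10 epochs, training runs for 190 steps in total. A sketch of the linear learning-rate schedule implied by these settings — note that warmup is not reported in this card, so zero warmup steps is an assumption:

```python
def linear_lr(step, base_lr=2e-5, total_steps=190, warmup_steps=0):
    # Linear decay from base_lr to 0 over total_steps.
    # warmup_steps=0 is an assumption; the card does not report warmup settings.
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))    # 2e-05 at the start of training
print(linear_lr(95))   # 1e-05 halfway through (end of epoch 5)
print(linear_lr(190))  # 0.0 at the final step
```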
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Date Precision | Date Recall | Date F1-score | Habitat Precision | Habitat Recall | Habitat F1-score | Id Feature Precision | Id Feature Recall | Id Feature F1-score | Location Precision | Location Recall | Location F1-score | Organization Precision | Organization Recall | Organization F1-score | Species Precision | Species Recall | Species F1-score | Micro avg Precision | Micro avg Recall | Micro avg F1-score | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 19 | 1.0316 | 0.7125 | 0.0929 | 0.1250 | 0.1045 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0202 | 0.0568 | 0.0298 | 0.0833 | 0.0563 | 0.0672 | 0.0 | 0.0 | 0.0 | 0.4537 | 0.6370 | 0.5299 | 0.2497 | 0.2756 | 0.2620 | 0.0929 | 0.1250 | 0.1045 | 0.2023 | 0.2756 | 0.2312 |
| No log | 2.0 | 38 | 0.6220 | 0.8236 | 0.3499 | 0.4191 | 0.3781 | 0.2258 | 0.175 | 0.1972 | 0.5495 | 0.6421 | 0.5922 | 0.1299 | 0.2273 | 0.1653 | 0.4922 | 0.6690 | 0.5672 | 0.1 | 0.0923 | 0.096 | 0.6017 | 0.7089 | 0.6509 | 0.4434 | 0.5485 | 0.4904 | 0.3499 | 0.4191 | 0.3781 | 0.4498 | 0.5485 | 0.4924 |
| No log | 3.0 | 57 | 0.4711 | 0.8651 | 0.5132 | 0.6122 | 0.5552 | 0.4524 | 0.475 | 0.4634 | 0.5606 | 0.7789 | 0.6520 | 0.1883 | 0.3295 | 0.2397 | 0.6951 | 0.8028 | 0.7451 | 0.5231 | 0.5231 | 0.5231 | 0.6598 | 0.7637 | 0.7079 | 0.5508 | 0.6828 | 0.6098 | 0.5132 | 0.6122 | 0.5552 | 0.5724 | 0.6828 | 0.6206 |
| No log | 4.0 | 76 | 0.4253 | 0.8796 | 0.5476 | 0.6170 | 0.5778 | 0.4390 | 0.45 | 0.4444 | 0.6195 | 0.7368 | 0.6731 | 0.2374 | 0.375 | 0.2907 | 0.7229 | 0.8451 | 0.7792 | 0.5410 | 0.5077 | 0.5238 | 0.7256 | 0.7877 | 0.7553 | 0.6022 | 0.6981 | 0.6466 | 0.5476 | 0.6170 | 0.5778 | 0.6191 | 0.6981 | 0.6545 |
| No log | 5.0 | 95 | 0.4567 | 0.8723 | 0.5562 | 0.6714 | 0.6045 | 0.5 | 0.675 | 0.5745 | 0.5 | 0.7053 | 0.5852 | 0.2282 | 0.3864 | 0.2869 | 0.8194 | 0.8944 | 0.8552 | 0.6190 | 0.6 | 0.6094 | 0.6707 | 0.7671 | 0.7157 | 0.5827 | 0.7175 | 0.6431 | 0.5562 | 0.6714 | 0.6045 | 0.6094 | 0.7175 | 0.6563 |
| No log | 6.0 | 114 | 0.4334 | 0.8779 | 0.5864 | 0.6673 | 0.6214 | 0.5 | 0.525 | 0.5122 | 0.6018 | 0.7158 | 0.6538 | 0.2434 | 0.4205 | 0.3083 | 0.8188 | 0.8592 | 0.8385 | 0.6522 | 0.6923 | 0.6716 | 0.7021 | 0.7911 | 0.7440 | 0.6136 | 0.7258 | 0.6650 | 0.5864 | 0.6673 | 0.6214 | 0.6403 | 0.7258 | 0.6782 |
| No log | 7.0 | 133 | 0.4722 | 0.8699 | 0.5698 | 0.6672 | 0.6121 | 0.4894 | 0.575 | 0.5287 | 0.5161 | 0.6737 | 0.5845 | 0.2292 | 0.375 | 0.2845 | 0.8392 | 0.8451 | 0.8421 | 0.6282 | 0.7538 | 0.6853 | 0.7170 | 0.7808 | 0.7475 | 0.6054 | 0.7161 | 0.6561 | 0.5698 | 0.6672 | 0.6121 | 0.6345 | 0.7161 | 0.6705 |
| No log | 8.0 | 152 | 0.4774 | 0.8748 | 0.5678 | 0.6668 | 0.6105 | 0.5 | 0.575 | 0.5349 | 0.5403 | 0.7053 | 0.6119 | 0.2171 | 0.375 | 0.2750 | 0.8264 | 0.8380 | 0.8322 | 0.6184 | 0.7231 | 0.6667 | 0.7046 | 0.7842 | 0.7423 | 0.5975 | 0.7175 | 0.6520 | 0.5678 | 0.6668 | 0.6105 | 0.6284 | 0.7175 | 0.6676 |
| No log | 9.0 | 171 | 0.4906 | 0.8728 | 0.5653 | 0.6721 | 0.6109 | 0.5 | 0.575 | 0.5349 | 0.5349 | 0.7263 | 0.6161 | 0.2303 | 0.3977 | 0.2917 | 0.8207 | 0.8380 | 0.8293 | 0.5897 | 0.7077 | 0.6434 | 0.7165 | 0.7877 | 0.7504 | 0.5993 | 0.7230 | 0.6554 | 0.5653 | 0.6721 | 0.6109 | 0.6304 | 0.7230 | 0.6708 |
| No log | 10.0 | 190 | 0.4924 | 0.8744 | 0.5688 | 0.6623 | 0.6089 | 0.5116 | 0.55 | 0.5301 | 0.5397 | 0.7158 | 0.6154 | 0.2267 | 0.3864 | 0.2857 | 0.8264 | 0.8380 | 0.8322 | 0.5844 | 0.6923 | 0.6338 | 0.7241 | 0.7911 | 0.7561 | 0.6042 | 0.7188 | 0.6565 | 0.5688 | 0.6623 | 0.6089 | 0.6350 | 0.7188 | 0.6717 |
### Framework versions
- Transformers 4.51.3
- PyTorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1