# xlm-base-CONTINUE-finetuned-ner-chim-v1
This model is a fine-tuned version of quanxuantruong/xlm-base-finetuned-ner-chim-v1 on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.7630
- Accuracy: 0.8230
- Precision: 0.3548
- Recall: 0.3874
- F1: 0.3628
- Date Precision: 0.1852
- Date Recall: 0.1250
- Date F1-score: 0.1493
- Habitat Precision: 0.2424
- Habitat Recall: 0.1684
- Habitat F1-score: 0.1988
- Id Feature Precision: 0.3478
- Id Feature Recall: 0.2727
- Id Feature F1-score: 0.3057
- Location Precision: 0.6286
- Location Recall: 0.9296
- Location F1-score: 0.7500
- Organization Precision: 0.0
- Organization Recall: 0.0
- Organization F1-score: 0.0
- Species Precision: 0.7246
- Species Recall: 0.8288
- Species F1-score: 0.7732
- Micro avg Precision: 0.5935
- Micro avg Recall: 0.5803
- Micro avg F1-score: 0.5868
- Macro avg Precision: 0.3548
- Macro avg Recall: 0.3874
- Macro avg F1-score: 0.3628
- Weighted avg Precision: 0.5012
- Weighted avg Recall: 0.5803
- Weighted avg F1-score: 0.5319
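
The card does not include a usage snippet. A minimal, hypothetical sketch of loading this checkpoint for token classification with the `transformers` pipeline (the Vietnamese example sentence is illustrative and not taken from the training data; the entity labels follow the per-class metrics above):

```python
from transformers import pipeline

# Hypothetical usage sketch: load the fine-tuned checkpoint as an NER pipeline.
ner = pipeline(
    "token-classification",
    model="quanxuantruong/xlm-base-CONTINUE-finetuned-ner-chim-v1",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

# Illustrative Vietnamese sentence ("A sparrow was observed in Hanoi in March").
entities = ner("Con chim sẻ được quan sát tại Hà Nội vào tháng 3.")
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```

Given the per-class results, predictions for LOCATION and SPECIES should be the most reliable; ORGANIZATION scored 0.0 on this evaluation set and should not be trusted.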
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Date Precision | Date Recall | Date F1-score | Habitat Precision | Habitat Recall | Habitat F1-score | Id Feature Precision | Id Feature Recall | Id Feature F1-score | Location Precision | Location Recall | Location F1-score | Organization Precision | Organization Recall | Organization F1-score | Species Precision | Species Recall | Species F1-score | Micro avg Precision | Micro avg Recall | Micro avg F1-score | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 7 | 1.2386 | 0.6705 | 0.0263 | 0.0157 | 0.0196 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1481 | 0.0909 | 0.1127 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0094 | 0.0034 | 0.0050 | 0.0520 | 0.0125 | 0.0201 | 0.0263 | 0.0157 | 0.0196 | 0.0219 | 0.0125 | 0.0158 |
| No log | 2.0 | 14 | 1.0307 | 0.7273 | 0.1127 | 0.1145 | 0.1122 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2857 | 0.2273 | 0.2532 | 0.3220 | 0.4014 | 0.3574 | 0.0 | 0.0 | 0.0 | 0.0683 | 0.0582 | 0.0628 | 0.1895 | 0.1302 | 0.1544 | 0.1127 | 0.1145 | 0.1122 | 0.1258 | 0.1302 | 0.1266 |
| No log | 3.0 | 21 | 0.8690 | 0.7781 | 0.2275 | 0.2553 | 0.2375 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2875 | 0.2614 | 0.2738 | 0.6181 | 0.8662 | 0.7214 | 0.0 | 0.0 | 0.0 | 0.4591 | 0.4041 | 0.4299 | 0.4740 | 0.3657 | 0.4128 | 0.2275 | 0.2553 | 0.2375 | 0.3423 | 0.3657 | 0.3491 |
| No log | 4.0 | 28 | 0.7881 | 0.8158 | 0.2994 | 0.3409 | 0.3132 | 0.0435 | 0.025 | 0.0317 | 0.0909 | 0.0526 | 0.0667 | 0.3239 | 0.2614 | 0.2893 | 0.6161 | 0.9155 | 0.7365 | 0.0 | 0.0 | 0.0 | 0.7219 | 0.7911 | 0.7549 | 0.5735 | 0.5402 | 0.5563 | 0.2994 | 0.3409 | 0.3132 | 0.4670 | 0.5402 | 0.4960 |
| No log | 5.0 | 35 | 0.7630 | 0.8230 | 0.3548 | 0.3874 | 0.3628 | 0.1852 | 0.125 | 0.1493 | 0.2424 | 0.1684 | 0.1988 | 0.3478 | 0.2727 | 0.3057 | 0.6286 | 0.9296 | 0.75 | 0.0 | 0.0 | 0.0 | 0.7246 | 0.8288 | 0.7732 | 0.5935 | 0.5803 | 0.5868 | 0.3548 | 0.3874 | 0.3628 | 0.5012 | 0.5803 | 0.5319 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
### Base model

- FacebookAI/xlm-roberta-base