xlm-base-finetuned-ner-chim-v1

This model is a fine-tuned version of FacebookAI/xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4451
  • Accuracy: 0.8726
  • Precision: 0.5728
  • Recall: 0.6473
  • F1: 0.6063
  • Date Precision: 0.5135
  • Date Recall: 0.4750
  • Date F1-score: 0.4935
  • Habitat Precision: 0.5620
  • Habitat Recall: 0.7158
  • Habitat F1-score: 0.6296
  • Id Feature Precision: 0.2202
  • Id Feature Recall: 0.2727
  • Id Feature F1-score: 0.2437
  • Location Precision: 0.7530
  • Location Recall: 0.8803
  • Location F1-score: 0.8117
  • Organization Precision: 0.6471
  • Organization Recall: 0.6769
  • Organization F1-score: 0.6617
  • Species Precision: 0.7412
  • Species Recall: 0.8630
  • Species F1-score: 0.7975
  • Micro avg Precision: 0.6326
  • Micro avg Recall: 0.7368
  • Micro avg F1-score: 0.6807
  • Macro avg Precision: 0.5728
  • Macro avg Recall: 0.6473
  • Macro avg F1-score: 0.6063
  • Weighted avg Precision: 0.6353
  • Weighted avg Recall: 0.7368
  • Weighted avg F1-score: 0.6816
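The per-entity metrics above correspond to a token-classification (NER) head with Date, Habitat, Id Feature, Location, Organization, and Species entity types. Below is a minimal inference sketch using the transformers pipeline; it assumes the checkpoint is published on the Hub as quanxuantruong/xlm-base-finetuned-ner-chim-v1, and the example sentence is purely illustrative.

```python
from transformers import pipeline

# Sketch only: assumes the fine-tuned checkpoint is available under this repo id
# and was trained with BIO-style labels for the entity types listed above.
ner = pipeline(
    "token-classification",
    model="quanxuantruong/xlm-base-finetuned-ner-chim-v1",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = "A barn owl was recorded in a rice field near the Red River delta in March 2021."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 4))
```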

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (mirrored in the configuration sketch after this list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 10
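As referenced above, here is a sketch of an equivalent TrainingArguments configuration. The output directory name is a placeholder, and any setting not listed in the hyperparameters above is left at its Transformers default.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder and
# unlisted settings keep their Transformers defaults.
training_args = TrainingArguments(
    output_dir="xlm-base-finetuned-ner-chim-v1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",          # AdamW with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```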

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Date Precision | Date Recall | Date F1-score | Habitat Precision | Habitat Recall | Habitat F1-score | Id Feature Precision | Id Feature Recall | Id Feature F1-score | Location Precision | Location Recall | Location F1-score | Organization Precision | Organization Recall | Organization F1-score | Species Precision | Species Recall | Species F1-score | Micro avg Precision | Micro avg Recall | Micro avg F1-score | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 19 | 1.3285 | 0.6067 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 38 | 0.8474 | 0.7597 | 0.2115 | 0.2502 | 0.2281 | 0.0 | 0.0 | 0.0 | 0.1860 | 0.1684 | 0.1768 | 0.1429 | 0.1477 | 0.1453 | 0.3439 | 0.3803 | 0.3612 | 0.0 | 0.0 | 0.0 | 0.5964 | 0.8048 | 0.6851 | 0.4368 | 0.4404 | 0.4386 | 0.2115 | 0.2502 | 0.2281 | 0.3508 | 0.4404 | 0.3891 |
| No log | 3.0 | 57 | 0.6548 | 0.8122 | 0.3355 | 0.4133 | 0.3702 | 0.0 | 0.0 | 0.0 | 0.5221 | 0.6211 | 0.5673 | 0.125 | 0.1477 | 0.1354 | 0.5956 | 0.7676 | 0.6708 | 0.1 | 0.1077 | 0.1037 | 0.6703 | 0.8356 | 0.7439 | 0.5076 | 0.5983 | 0.5493 | 0.3355 | 0.4133 | 0.3702 | 0.4812 | 0.5983 | 0.5333 |
| No log | 4.0 | 76 | 0.5435 | 0.8410 | 0.4820 | 0.5110 | 0.4879 | 0.5 | 0.275 | 0.3548 | 0.5607 | 0.6316 | 0.5941 | 0.2079 | 0.2386 | 0.2222 | 0.6813 | 0.8732 | 0.7654 | 0.2188 | 0.2154 | 0.2171 | 0.7232 | 0.8322 | 0.7739 | 0.5825 | 0.6551 | 0.6167 | 0.4820 | 0.5110 | 0.4879 | 0.5730 | 0.6551 | 0.6080 |
| No log | 5.0 | 95 | 0.4966 | 0.8569 | 0.5571 | 0.5952 | 0.5709 | 0.6207 | 0.45 | 0.5217 | 0.5727 | 0.6632 | 0.6146 | 0.2135 | 0.2159 | 0.2147 | 0.7062 | 0.8803 | 0.7837 | 0.5152 | 0.5231 | 0.5191 | 0.7143 | 0.8390 | 0.7717 | 0.6192 | 0.6981 | 0.6562 | 0.5571 | 0.5952 | 0.5709 | 0.6099 | 0.6981 | 0.6489 |
| No log | 6.0 | 114 | 0.4664 | 0.8632 | 0.5565 | 0.6074 | 0.5767 | 0.5806 | 0.45 | 0.5070 | 0.5739 | 0.6947 | 0.6286 | 0.1875 | 0.2386 | 0.21 | 0.7022 | 0.8803 | 0.7813 | 0.5645 | 0.5385 | 0.5512 | 0.7300 | 0.8425 | 0.7822 | 0.6120 | 0.7078 | 0.6564 | 0.5565 | 0.6074 | 0.5767 | 0.6147 | 0.7078 | 0.6560 |
| No log | 7.0 | 133 | 0.4499 | 0.8696 | 0.5695 | 0.6272 | 0.5947 | 0.5 | 0.425 | 0.4595 | 0.5798 | 0.7263 | 0.6449 | 0.2277 | 0.2614 | 0.2434 | 0.7619 | 0.9014 | 0.8258 | 0.6094 | 0.6 | 0.6047 | 0.7381 | 0.8493 | 0.7898 | 0.6375 | 0.7258 | 0.6788 | 0.5695 | 0.6272 | 0.5947 | 0.6350 | 0.7258 | 0.6762 |
| No log | 8.0 | 152 | 0.4453 | 0.8688 | 0.5753 | 0.6279 | 0.5978 | 0.5455 | 0.45 | 0.4932 | 0.5862 | 0.7158 | 0.6445 | 0.2130 | 0.2614 | 0.2347 | 0.7560 | 0.8944 | 0.8194 | 0.6094 | 0.6 | 0.6047 | 0.7417 | 0.8459 | 0.7904 | 0.6350 | 0.7230 | 0.6762 | 0.5753 | 0.6279 | 0.5978 | 0.6368 | 0.7230 | 0.6760 |
| No log | 9.0 | 171 | 0.4459 | 0.8731 | 0.5746 | 0.6313 | 0.5996 | 0.5143 | 0.45 | 0.48 | 0.5776 | 0.7053 | 0.6351 | 0.2190 | 0.2614 | 0.2383 | 0.7619 | 0.9014 | 0.8258 | 0.6190 | 0.6 | 0.6094 | 0.7560 | 0.8699 | 0.8089 | 0.6428 | 0.7327 | 0.6848 | 0.5746 | 0.6313 | 0.5996 | 0.6425 | 0.7327 | 0.6836 |
| No log | 10.0 | 190 | 0.4451 | 0.8726 | 0.5728 | 0.6473 | 0.6063 | 0.5135 | 0.475 | 0.4935 | 0.5620 | 0.7158 | 0.6296 | 0.2202 | 0.2727 | 0.2437 | 0.7530 | 0.8803 | 0.8117 | 0.6471 | 0.6769 | 0.6617 | 0.7412 | 0.8630 | 0.7975 | 0.6326 | 0.7368 | 0.6807 | 0.5728 | 0.6473 | 0.6063 | 0.6353 | 0.7368 | 0.6816 |
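
The per-entity rows and the micro/macro/weighted averages in the tables above follow the layout produced by seqeval's classification_report over BIO-tagged sequences. A small illustrative sketch follows; the tag sequences are made up and are not taken from the evaluation data.

```python
from seqeval.metrics import classification_report

# Made-up BIO sequences purely to illustrate how such a report is produced.
y_true = [["B-Species", "I-Species", "O", "B-Location"], ["O", "B-Date", "I-Date"]]
y_pred = [["B-Species", "I-Species", "O", "B-Location"], ["O", "B-Date", "O"]]

# Prints per-entity precision/recall/F1 plus micro, macro, and weighted averages.
print(classification_report(y_true, y_pred, digits=4))
```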

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1