BERTimbau finetuning - Portuguese Idioms
Part of a collection: fine-tuning BERTimbau for token classification of Portuguese idioms, with K-fold cross-validation (5 items).
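The collection description mentions K-fold cross-validation. A minimal, framework-free sketch of how dataset indices could be split into folds (using k=5 here is an assumption based on the collection's 5 items; the actual fold count is not stated in the card):

```python
import random

def kfold_splits(n_examples, k=5, seed=42):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation.

    k=5 and seed=42 are illustrative assumptions, not values from the card.
    """
    indices = list(range(n_examples))
    random.Random(seed).shuffle(indices)
    # Round-robin assignment after shuffling gives k near-equal folds.
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        val_idx = folds[i]
        train_idx = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train_idx, val_idx

# Each example appears in exactly one validation fold across the k splits.
splits = list(kfold_splits(100, k=5))
```

Each of the k splits would then drive one fine-tuning run, producing the k models in the collection.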
This model is a fine-tuned version of neuralmind/bert-base-portuguese-cased on a dataset of Portuguese idioms (the dataset is not named in the card). Its evaluation-set results per epoch are shown in the table below.
The training hyperparameters are not recorded in this card. The following results were obtained during training:
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| 0.3848 | 1.0 | 23 | 0.1953 | 0.0 | 0.0 | 0.0 | 0.9517 |
| 0.1468 | 2.0 | 46 | 0.1107 | 0.0 | 0.0 | 0.0 | 0.9530 |
| 0.0644 | 3.0 | 69 | 0.0828 | 0.4 | 0.3077 | 0.3478 | 0.9710 |
| 0.025 | 4.0 | 92 | 0.0588 | 0.6 | 0.6923 | 0.6429 | 0.9834 |
| 0.0133 | 5.0 | 115 | 0.0710 | 0.6154 | 0.6154 | 0.6154 | 0.9807 |
| 0.005 | 6.0 | 138 | 0.0665 | 0.6429 | 0.6923 | 0.6667 | 0.9820 |
| 0.0031 | 7.0 | 161 | 0.0747 | 0.6429 | 0.6923 | 0.6667 | 0.9820 |
| 0.0024 | 8.0 | 184 | 0.0821 | 0.6154 | 0.6154 | 0.6154 | 0.9793 |
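As a sanity check, the F1 column in the table above is the harmonic mean of the precision and recall columns. A short verification against the epoch 3 and epoch 4 rows:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (0.0 when both are zero)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Epoch 4 row: precision 0.6, recall 0.6923 -> reported F1 0.6429.
print(round(f1_score(0.6, 0.6923), 4))  # 0.6429

# Epoch 3 row: precision 0.4, recall 0.3077 -> reported F1 0.3478.
print(round(f1_score(0.4, 0.3077), 4))  # 0.3478
```

This also explains the 0.0 F1 in epochs 1–2: with zero precision and recall on the idiom class, F1 is zero even though token-level accuracy is high, since most tokens are outside any idiom.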