# roberta-large-nsp-1000000-1e-06-32-stsb-lr2e-05-bs32
This model is a fine-tuned version of [mhr2004/roberta-large-nsp-1000000-1e-06-32](https://huggingface.co/mhr2004/roberta-large-nsp-1000000-1e-06-32) on an unknown dataset (the `stsb` suffix in the model name suggests the STS-B sentence-similarity benchmark). It achieves the following results on the evaluation set:
- Loss: 0.0175
- Pearson: 0.9197
- Spearman: 0.9199
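
The checkpoint can be loaded for pairwise similarity scoring. The sketch below is a minimal example, assuming the model was saved with a standard single-output regression head compatible with `AutoModelForSequenceClassification` (the usual setup for STS-B fine-tuning); the example sentences are hypothetical.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repo id taken from this card's title.
model_id = "mhr2004/roberta-large-nsp-1000000-1e-06-32-stsb-lr2e-05-bs32"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Encode a sentence pair; an STS-B-style head emits a single regression logit.
inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone is playing an instrument.",
    return_tensors="pt",
    truncation=True,
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted similarity: {score:.3f}")
```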
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
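
For reference, the listed hyperparameters map onto `TrainingArguments` roughly as sketched below. The output directory, the per-epoch evaluation schedule, and any early-stopping callback are assumptions not stated in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="stsb-finetune",  # hypothetical path
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",         # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    eval_strategy="epoch",       # assumption: the results table logs one eval per epoch
)
```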
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearman |
|---|---|---|---|---|---|
| 0.0378 | 1.0 | 180 | 0.0204 | 0.8831 | 0.8870 |
| 0.0309 | 2.0 | 360 | 0.0175 | 0.9099 | 0.9075 |
| 0.0272 | 3.0 | 540 | 0.0195 | 0.9053 | 0.9077 |
| 0.0207 | 4.0 | 720 | 0.0184 | 0.9118 | 0.9097 |
| 0.0188 | 5.0 | 900 | 0.0167 | 0.9094 | 0.9114 |
| 0.0168 | 6.0 | 1080 | 0.0205 | 0.9157 | 0.9175 |
| 0.0134 | 7.0 | 1260 | 0.0141 | 0.9192 | 0.9184 |
| 0.0127 | 8.0 | 1440 | 0.0249 | 0.9148 | 0.9130 |
| 0.0117 | 9.0 | 1620 | 0.0167 | 0.9206 | 0.9202 |
| 0.011 | 10.0 | 1800 | 0.0146 | 0.9173 | 0.9186 |
| 0.0098 | 11.0 | 1980 | 0.0152 | 0.9176 | 0.9177 |
| 0.0097 | 12.0 | 2160 | 0.0199 | 0.9207 | 0.9211 |
| 0.0085 | 13.0 | 2340 | 0.0147 | 0.9210 | 0.9204 |
| 0.0082 | 14.0 | 2520 | 0.0158 | 0.9210 | 0.9200 |
| 0.0078 | 15.0 | 2700 | 0.0175 | 0.9197 | 0.9199 |
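
Note that `num_epochs` was set to 20 but the log ends at epoch 15, whose metrics match the headline results above, so training appears to have stopped early (the card does not say why). The Pearson and Spearman columns can be reproduced with a `compute_metrics` function along the following lines; this is a sketch, not the authors' code.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def compute_metrics(eval_pred):
    # Squeeze the single regression logit per example before correlating.
    predictions, labels = eval_pred
    preds = np.squeeze(predictions)
    return {
        "pearson": pearsonr(preds, labels)[0],
        "spearman": spearmanr(preds, labels)[0],
    }
```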
### Framework versions
- Transformers 4.49.0
- PyTorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1