# DanSumT5-base-finetuned-test_6887-finetuned-test_1006
This model is a fine-tuned version of emilstabil/DanSumT5-base-finetuned-test_6887 on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics):
- Loss: 2.3820
- Rouge1: 32.4141
- Rouge2: 8.6351
- Rougel: 18.809
- Rougelsum: 29.928
- Gen Len: 126.58
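A minimal inference sketch in Python, assuming the checkpoint is published on the Hugging Face Hub under the repo id inferred from this card's title; the generation settings are illustrative (chosen to roughly match the ~126-token average summary length above), not taken from the card:

```python
# Minimal inference sketch; the repo id is inferred from the card title and
# the generation settings are illustrative assumptions, not from the card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "emilstabil/DanSumT5-base-finetuned-test_6887-finetuned-test_1006"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "..."  # a Danish news article to summarize (placeholder)
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```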
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
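The list above maps directly onto `Seq2SeqTrainingArguments`; a configuration sketch, where the `output_dir` is a hypothetical placeholder not taken from this card:

```python
# Sketch of the hyperparameters above as Seq2SeqTrainingArguments;
# output_dir is a hypothetical placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="DanSumT5-base-finetuned-test_1006",  # hypothetical
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=15,
)
```

Note that the total train batch size of 8 is not set directly; it falls out of `per_device_train_batch_size * gradient_accumulation_steps`.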
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 100 | 2.4957 | 31.4378 | 7.7648 | 17.9498 | 28.7898 | 126.46 |
| No log | 2.0 | 200 | 2.4718 | 31.3437 | 7.8788 | 17.9692 | 29.1104 | 126.8 |
| No log | 3.0 | 300 | 2.4465 | 31.6265 | 7.994 | 18.0599 | 28.8198 | 126.42 |
| No log | 4.0 | 400 | 2.4250 | 31.6974 | 8.0066 | 18.3127 | 29.2615 | 126.65 |
| 2.4645 | 5.0 | 500 | 2.4195 | 31.8113 | 7.9783 | 18.2827 | 29.1518 | 126.93 |
| 2.4645 | 6.0 | 600 | 2.4089 | 31.803 | 8.3958 | 18.5282 | 29.352 | 125.56 |
| 2.4645 | 7.0 | 700 | 2.4022 | 32.1102 | 8.3678 | 18.6127 | 29.5849 | 126.29 |
| 2.4645 | 8.0 | 800 | 2.3916 | 31.4499 | 7.9365 | 18.3485 | 29.047 | 126.95 |
| 2.4645 | 9.0 | 900 | 2.3893 | 32.5308 | 8.4278 | 18.4242 | 29.8826 | 126.75 |
| 2.2217 | 10.0 | 1000 | 2.3895 | 31.8799 | 7.924 | 18.3784 | 29.2475 | 126.55 |
| 2.2217 | 11.0 | 1100 | 2.3864 | 31.5731 | 8.1294 | 18.6812 | 29.3378 | 126.01 |
| 2.2217 | 12.0 | 1200 | 2.3843 | 32.1814 | 8.5896 | 18.9555 | 29.8291 | 126.04 |
| 2.2217 | 13.0 | 1300 | 2.3807 | 32.1 | 8.5707 | 18.7538 | 29.7206 | 126.29 |
| 2.2217 | 14.0 | 1400 | 2.3809 | 32.3983 | 8.7199 | 18.9381 | 30.0217 | 126.58 |
| 2.1181 | 15.0 | 1500 | 2.3820 | 32.4141 | 8.6351 | 18.809 | 29.928 | 126.58 |
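The card does not state how the ROUGE columns were produced; a plausible sketch using the `evaluate` package (an assumption, since it is not among the framework versions listed below):

```python
# Hedged sketch of ROUGE scoring with the evaluate package (assumed tooling).
# compute() returns fractions in [0, 1]; the table above reports values x100.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["genereret resumé ..."]  # model outputs (placeholders)
references = ["reference-resumé ..."]   # gold summaries (placeholders)
print(rouge.compute(predictions=predictions, references=references))
# result keys: rouge1, rouge2, rougeL, rougeLsum
```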
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0
- Datasets 2.12.0
- Tokenizers 0.13.3
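A quick way to check a local environment against these pins (a sketch; installed patch versions may differ in practice):

```python
# Print installed versions to compare against the pins listed above.
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # expected: 4.32.1
print(torch.__version__)         # expected: 2.1.0
print(datasets.__version__)      # expected: 2.12.0
print(tokenizers.__version__)    # expected: 0.13.3
```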