syncopation-transformer-metric-disentagled

This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0888

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 5
  • eval_batch_size: 8
  • seed: 42
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 50
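The linear schedule above decays the learning rate from 5e-05 toward zero over the full run. A minimal sketch of that decay, assuming no warmup and using the step counts from the results table below (4350 optimizer steps per epoch × 50 epochs = 217500 total steps); the helper name is hypothetical, not part of the original training code:

```python
def linear_lr(step, base_lr=5e-05, total_steps=217500):
    """Linearly decay from base_lr at step 0 to 0 at total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))        # 5e-05 at the start of training
print(linear_lr(108750))   # half the base rate at the midpoint
print(linear_lr(217500))   # 0.0 at the end
```

This matches the behavior of the Transformers linear scheduler when the warmup step count is zero.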

Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.2002        | 1.0   | 4350   | 0.1469          |
| 0.0081        | 2.0   | 8700   | 0.1376          |
| 0.1583        | 3.0   | 13050  | 0.1229          |
| 0.13          | 4.0   | 17400  | 0.1156          |
| 0.1405        | 5.0   | 21750  | 0.1053          |
| 0.0125        | 6.0   | 26100  | 0.1015          |
| 0.0147        | 7.0   | 30450  | 0.0969          |
| 0.0023        | 8.0   | 34800  | 0.0973          |
| 0.0914        | 9.0   | 39150  | 0.0944          |
| 0.0287        | 10.0  | 43500  | 0.0958          |
| 0.0355        | 11.0  | 47850  | 0.0943          |
| 0.029         | 12.0  | 52200  | 0.0972          |
| 0.0927        | 13.0  | 56550  | 0.0910          |
| 0.0487        | 14.0  | 60900  | 0.0958          |
| 0.5498        | 15.0  | 65250  | 0.0974          |
| 0.0859        | 16.0  | 69600  | 0.0990          |
| 0.0321        | 17.0  | 73950  | 0.0944          |
| 0.0314        | 18.0  | 78300  | 0.0932          |
| 0.0017        | 19.0  | 82650  | 0.0946          |
| 0.0043        | 20.0  | 87000  | 0.0919          |
| 0.0617        | 21.0  | 91350  | 0.0942          |
| 0.0026        | 22.0  | 95700  | 0.0912          |
| 0.1839        | 23.0  | 100050 | 0.0906          |
| 0.0059        | 24.0  | 104400 | 0.0903          |
| 0.0469        | 25.0  | 108750 | 0.0964          |
| 0.0632        | 26.0  | 113100 | 0.0935          |
| 0.3662        | 27.0  | 117450 | 0.0911          |
| 0.0097        | 28.0  | 121800 | 0.0929          |
| 0.0441        | 29.0  | 126150 | 0.0903          |
| 0.0297        | 30.0  | 130500 | 0.0904          |
| 0.0791        | 31.0  | 134850 | 0.0919          |
| 0.0286        | 32.0  | 139200 | 0.0932          |
| 0.4319        | 33.0  | 143550 | 0.0894          |
| 0.1246        | 34.0  | 147900 | 0.0902          |
| 0.0876        | 35.0  | 152250 | 0.0882          |
| 0.0029        | 36.0  | 156600 | 0.0907          |
| 0.0065        | 37.0  | 160950 | 0.0907          |
| 0.027         | 38.0  | 165300 | 0.0905          |
| 0.0006        | 39.0  | 169650 | 0.0895          |
| 0.0024        | 40.0  | 174000 | 0.0881          |
| 0.0646        | 41.0  | 178350 | 0.0886          |
| 0.0041        | 42.0  | 182700 | 0.0886          |
| 0.2318        | 43.0  | 187050 | 0.0890          |
| 0.1015        | 44.0  | 191400 | 0.0889          |
| 0.0168        | 45.0  | 195750 | 0.0880          |
| 0.012         | 46.0  | 200100 | 0.0900          |
| 0.4302        | 47.0  | 204450 | 0.0880          |
| 0.0059        | 48.0  | 208800 | 0.0889          |
| 0.0298        | 49.0  | 213150 | 0.0883          |
| 0.0221        | 50.0  | 217500 | 0.0888          |
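As a quick sanity check on the results above, a short script (not part of the original card) that finds the best validation loss from the per-epoch values in the table:

```python
# Validation losses for epochs 1..50, copied from the results table above.
val_losses = [
    0.1469, 0.1376, 0.1229, 0.1156, 0.1053, 0.1015, 0.0969, 0.0973,
    0.0944, 0.0958, 0.0943, 0.0972, 0.0910, 0.0958, 0.0974, 0.0990,
    0.0944, 0.0932, 0.0946, 0.0919, 0.0942, 0.0912, 0.0906, 0.0903,
    0.0964, 0.0935, 0.0911, 0.0929, 0.0903, 0.0904, 0.0919, 0.0932,
    0.0894, 0.0902, 0.0882, 0.0907, 0.0907, 0.0905, 0.0895, 0.0881,
    0.0886, 0.0886, 0.0890, 0.0889, 0.0880, 0.0900, 0.0880, 0.0889,
    0.0883, 0.0888,
]
best = min(val_losses)
best_epoch = val_losses.index(best) + 1  # epochs are 1-indexed
print(f"best validation loss {best} at epoch {best_epoch}")
```

Note that the best validation loss (0.0880, first reached at epoch 45) is slightly lower than the final-epoch loss of 0.0888 reported at the top of the card, so the last checkpoint is not the best one.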

Framework versions

  • Transformers 4.51.3
  • Pytorch 2.6.0+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1
Model details

  • Format: Safetensors
  • Model size: 2.8M params
  • Tensor type: F32