pedramyamini/distilbert-base-multilingual-cased-finetuned-mobile-banks-cafebazaar2lr-10epochs

This model is a fine-tuned version of distilbert-base-multilingual-cased on an unknown dataset. It achieves the following results at the end of training (validation loss is measured on the evaluation set):

  • Train Loss: 0.2307
  • Validation Loss: 1.2090
  • Epoch: 9
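
The checkpoint name suggests a text-classification fine-tune on Cafe Bazaar mobile-banking app reviews, but the task head and label set are not documented in this card. A minimal loading sketch, assuming a TensorFlow sequence-classification head; the example review text is illustrative and the returned label names should be inspected before use:

```python
from transformers import pipeline

# Assumption: the checkpoint carries a sequence-classification head.
# The label set is not documented here, so check the labels it returns.
classifier = pipeline(
    "text-classification",
    model="pedramyamini/distilbert-base-multilingual-cased-finetuned-mobile-banks-cafebazaar2lr-10epochs",
    framework="tf",  # the card lists TensorFlow 2.8.2
)

# Hypothetical Persian app review ("The app is very good and fast"); replace with your own text.
print(classifier("برنامه خیلی خوب و سریع است"))
```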

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 4e-05, 'decay_steps': 26740, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
  • training_precision: float32
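
A sketch of how this optimizer configuration could be reconstructed in TensorFlow/Keras. The schedule and Adam values are copied from the dictionary above; note that power=1.0 makes the polynomial decay a linear decay from 4e-05 to 0 over 26740 steps:

```python
import tensorflow as tf

# Linear decay (power=1.0) from 4e-05 to 0.0 over 26740 steps, per the config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=4e-05,
    decay_steps=26740,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```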

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7428     | 0.7046          | 0     |
| 0.6810     | 0.6903          | 1     |
| 0.6372     | 0.6907          | 2     |
| 0.5881     | 0.6988          | 3     |
| 0.5246     | 0.7630          | 4     |
| 0.4511     | 0.8687          | 5     |
| 0.3801     | 0.9356          | 6     |
| 0.3200     | 1.0440          | 7     |
| 0.2676     | 1.1470          | 8     |
| 0.2307     | 1.2090          | 9     |

Framework versions

  • Transformers 4.21.3
  • TensorFlow 2.8.2
  • Datasets 2.4.0
  • Tokenizers 0.12.1