# clauseguard-legal-bert

This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.0238
- Micro F1: 0.7604
- Macro F1: 0.7794
- Precision: 0.7374
- Recall: 0.7849
- F1 Limitation of liability: 0.625
- F1 Unilateral termination: 0.8101
- F1 Unilateral change: 0.7143
- F1 Content removal: 0.75
- F1 Contract by using: 0.8
- F1 Choice of law: 0.8889
- F1 Jurisdiction: 0.9412
- F1 Arbitration: 0.7059
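The micro, macro, and per-label F1 scores above are standard multi-label classification metrics. A self-contained sketch of how they are computed with scikit-learn (the labels and values here are toy data for illustration, not this model's outputs):

```python
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score

# Toy multi-label ground truth and predictions: 4 samples, 3 hypothetical labels.
y_true = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [0, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [1, 0, 0],
                   [0, 0, 1],
                   [0, 1, 1]])

# Micro-averaging pools all label decisions before computing F1;
# macro-averaging computes F1 per label, then takes the unweighted mean.
micro_f1 = f1_score(y_true, y_pred, average="micro")
macro_f1 = f1_score(y_true, y_pred, average="macro")
per_label_f1 = f1_score(y_true, y_pred, average=None)  # one score per label
precision = precision_score(y_true, y_pred, average="micro")
recall = recall_score(y_true, y_pred, average="micro")
```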
## Model description
More information needed
## Intended uses & limitations
More information needed
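Pending fuller documentation, here is a minimal inference sketch. It assumes the checkpoint is published on the Hub as `gaurv007/clauseguard-legal-bert`, that the head is multi-label (sigmoid scores, one per clause category), and that a 0.5 decision threshold is reasonable; the actual label names and order are stored in the model config:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "gaurv007/clauseguard-legal-bert"
THRESHOLD = 0.5  # assumed cutoff; tune on a validation set


def threshold_labels(probs, id2label, threshold=THRESHOLD):
    """Return the label names whose sigmoid probability clears the threshold."""
    return [id2label[i] for i, p in enumerate(probs) if p >= threshold]


def predict_clauses(text):
    """Score one contract/ToS sentence against every clause category."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Multi-label: apply sigmoid per label, not softmax across labels.
    probs = torch.sigmoid(logits)[0].tolist()
    return threshold_labels(probs, model.config.id2label)
```

A sentence may trigger several labels at once (e.g. both "Choice of law" and "Jurisdiction"), which is why each label is thresholded independently.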
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
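The hyperparameters above map onto a `transformers.TrainingArguments` roughly as follows. This is a sketch, not the author's training script; in particular, `lr_scheduler_warmup_steps: 0.1` is assumed to mean a 10% warmup ratio, since a fractional step count is not meaningful:

```python
from transformers import TrainingArguments

# Sketch mirroring the card's listed hyperparameters (assumptions noted inline).
args = TrainingArguments(
    output_dir="clauseguard-legal-bert",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,       # assumed meaning of "lr_scheduler_warmup_steps: 0.1"
    num_train_epochs=20,
    fp16=True,              # "Native AMP" mixed precision; requires a CUDA device
)
```

Note that the results table below stops at epoch 10 despite `num_epochs: 20`, which suggests early stopping or a truncated log; the card does not say which.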
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro F1 | Macro F1 | Precision | Recall | F1 Limitation of liability | F1 Unilateral termination | F1 Unilateral change | F1 Content removal | F1 Contract by using | F1 Choice of law | F1 Jurisdiction | F1 Arbitration |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.0879 | 1.0 | 346 | 0.0753 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0390 | 2.0 | 692 | 0.0381 | 0.1864 | 0.0766 | 0.8667 | 0.1044 | 0.4565 | 0.1562 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0313 | 3.0 | 1038 | 0.0273 | 0.7069 | 0.6811 | 0.7328 | 0.6827 | 0.7188 | 0.7788 | 0.6792 | 0.5116 | 0.72 | 0.7586 | 0.8205 | 0.4615 |
| 0.0271 | 4.0 | 1384 | 0.0284 | 0.6819 | 0.6798 | 0.6062 | 0.7791 | 0.6625 | 0.7903 | 0.6301 | 0.6154 | 0.5075 | 0.8387 | 0.8718 | 0.5217 |
| 0.0132 | 5.0 | 1730 | 0.0264 | 0.7059 | 0.7106 | 0.6897 | 0.7229 | 0.6726 | 0.7717 | 0.6076 | 0.6207 | 0.7317 | 0.8 | 0.8718 | 0.6087 |
| 0.0115 | 6.0 | 2076 | 0.0265 | 0.7081 | 0.6938 | 0.7308 | 0.6867 | 0.7107 | 0.7826 | 0.6364 | 0.6429 | 0.7234 | 0.7333 | 0.6897 | 0.6316 |
| 0.0067 | 7.0 | 2422 | 0.0254 | 0.7495 | 0.7520 | 0.7153 | 0.7871 | 0.7183 | 0.7769 | 0.7368 | 0.7097 | 0.7391 | 0.8485 | 0.85 | 0.6364 |
| 0.0037 | 8.0 | 2768 | 0.0285 | 0.7250 | 0.7358 | 0.6633 | 0.7992 | 0.6950 | 0.7376 | 0.7273 | 0.6667 | 0.68 | 0.8485 | 0.8947 | 0.6364 |
| 0.0052 | 9.0 | 3114 | 0.0278 | 0.7337 | 0.7267 | 0.6840 | 0.7912 | 0.7338 | 0.8167 | 0.6234 | 0.6207 | 0.7391 | 0.8485 | 0.8718 | 0.56 |
| 0.0035 | 10.0 | 3460 | 0.0298 | 0.7338 | 0.7314 | 0.6645 | 0.8193 | 0.7143 | 0.8 | 0.6575 | 0.6667 | 0.7083 | 0.8235 | 0.8718 | 0.6087 |
### Framework versions
- Transformers 5.0.0
- Pytorch 2.10.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.2