---
library_name: peft
base_model: microsoft/codebert-base
tags:
- base_model:adapter:microsoft/codebert-base
- lora
- transformers
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: CodeGenDetect-CodeBert_Lora
  results: []
---
|
|
# CodeGenDetect-CodeBert_Lora

This model is a LoRA fine-tune of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0384
- Accuracy: 0.9907
- F1: 0.9907
- Precision: 0.9907
- Recall: 0.9907

## Model description

This is a LoRA adapter for `microsoft/codebert-base` trained with the PEFT library for sequence classification. Judging by the model name, it appears intended to detect machine-generated code; beyond the training details below, no further information was provided.

## Intended uses & limitations

The adapter appears intended for classifying code snippets as human-written or machine-generated; a usage sketch follows. Because the training data is undocumented, its behavior on programming languages, generators, or code styles not represented in that data is unknown.
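
A minimal inference sketch, assuming the adapter is published on the Hub (the repo id below is a placeholder) and that the head is a binary sequence classifier; the label mapping is an assumption and should be checked against the adapter's `id2label` config:

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForSequenceClassification

adapter_id = "your-username/CodeGenDetect-CodeBert_Lora"  # placeholder repo id

# The tokenizer comes from the base model; the adapter is loaded on top of it.
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoPeftModelForSequenceClassification.from_pretrained(adapter_id)
model.eval()

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Which index means "machine-generated" is an assumption; verify via
# model.config.id2label before relying on this mapping.
print(logits.argmax(dim=-1).item())
```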

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: fused AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP

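A hedged `TrainingArguments` sketch matching the settings above, assuming a standard Hugging Face `Trainer` setup; the output directory name and the 4000-step evaluation interval (inferred from the results table below) are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="CodeGenDetect-CodeBert_Lora",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",   # fused AdamW with default betas/epsilon
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,                   # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=4000,             # inferred from the evaluation cadence below
)
```
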
### Training results

| | Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall | |
| |:-------------:|:------:|:-----:|:--------:|:------:|:---------------:|:---------:|:------:| |
| | 0.1381 | 0.128 | 4000 | 0.9586 | 0.9586 | 0.1627 | 0.9599 | 0.9586 | |
| | 0.0821 | 0.256 | 8000 | 0.9761 | 0.9761 | 0.1081 | 0.9761 | 0.9761 | |
| | 0.0667 | 0.384 | 12000 | 0.9786 | 0.9786 | 0.1008 | 0.9787 | 0.9786 | |
| | 0.0754 | 0.512 | 16000 | 0.9820 | 0.9820 | 0.0779 | 0.9821 | 0.9820 | |
| | 0.0776 | 0.64 | 20000 | 0.9846 | 0.9846 | 0.0617 | 0.9847 | 0.9846 | |
| | 0.0643 | 0.768 | 24000 | 0.9831 | 0.9831 | 0.0761 | 0.9832 | 0.9831 | |
| | 0.064 | 0.896 | 28000 | 0.9878 | 0.9878 | 0.0495 | 0.9878 | 0.9878 | |
| | 0.0477 | 1.024 | 32000 | 0.9879 | 0.9879 | 0.0480 | 0.9880 | 0.9879 | |
| | 0.0427 | 1.152 | 36000 | 0.9894 | 0.9894 | 0.0424 | 0.9894 | 0.9894 | |
| | 0.0381 | 1.28 | 40000 | 0.9880 | 0.9880 | 0.0484 | 0.9880 | 0.9880 | |
| | 0.0423 | 1.408 | 44000 | 0.9901 | 0.9901 | 0.0399 | 0.9901 | 0.9901 | |
| | 0.0389 | 1.536 | 48000 | 0.9888 | 0.9888 | 0.0513 | 0.9889 | 0.9888 | |
| | 0.0416 | 1.6640 | 52000 | 0.9908 | 0.9908 | 0.0358 | 0.9908 | 0.9908 | |
| | 0.0374 | 1.792 | 56000 | 0.0370 | 0.9905 | 0.9905 | 0.9905 | 0.9905 | |
| | 0.0441 | 1.92 | 60000 | 0.0355 | 0.9905 | 0.9905 | 0.9905 | 0.9905 | |
| | 0.0358 | 2.048 | 64000 | 0.0384 | 0.9907 | 0.9907 | 0.9907 | 0.9907 | |
| |
| |
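The four reported metrics are identical to four decimal places in most rows, which is consistent with weighted averaging over the classes. A hedged sketch of a `compute_metrics` function that would produce these columns (the averaging mode is an assumption):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "weighted" averaging is assumed; it would explain why F1, precision,
    # and recall track accuracy so closely in the table above.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```
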
### Framework versions

- PEFT 0.18.0
- Transformers 4.57.3
- PyTorch 2.9.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1