Kimata committed on
Commit 88955eb · verified · 1 Parent(s): 4824a9a

Kimata/roberta-rawtext

Files changed (3):
  1. README.md +11 -32
  2. model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -4,11 +4,6 @@ license: mit
 base_model: roberta-base
 tags:
 - generated_from_trainer
-metrics:
-- accuracy
-- precision
-- recall
-- f1
 model-index:
 - name: results
   results: []
@@ -21,12 +16,17 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0528
-- Accuracy: 0.9885
-- Precision: 0.9885
-- Recall: 0.9885
-- F1: 0.9885
-- Roc Auc: 0.9992
+- eval_loss: 0.0172
+- eval_accuracy: 0.9968
+- eval_precision: 0.9968
+- eval_recall: 0.9968
+- eval_f1: 0.9968
+- eval_roc_auc: 0.9999
+- eval_runtime: 273.0945
+- eval_samples_per_second: 133.046
+- eval_steps_per_second: 4.16
+- epoch: 2.8622
+- step: 13000
 
 ## Model description
 
@@ -53,27 +53,6 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: linear
 - num_epochs: 3
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     | Roc Auc |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
-| 0.1227        | 0.2   | 50   | 0.2116          | 0.935    | 0.9392    | 0.935  | 0.9338 | 0.9937  |
-| 0.0744        | 0.4   | 100  | 0.0989          | 0.97     | 0.9705    | 0.97   | 0.9698 | 0.9960  |
-| 0.0715        | 0.6   | 150  | 0.0651          | 0.982    | 0.9820    | 0.982  | 0.9820 | 0.9977  |
-| 0.1218        | 0.8   | 200  | 0.1539          | 0.9555   | 0.9590    | 0.9555 | 0.9559 | 0.9961  |
-| 0.0709        | 1.0   | 250  | 0.0528          | 0.9855   | 0.9855    | 0.9855 | 0.9855 | 0.9989  |
-| 0.0602        | 1.2   | 300  | 0.0986          | 0.978    | 0.9782    | 0.978  | 0.9779 | 0.9986  |
-| 0.034         | 1.4   | 350  | 0.0687          | 0.9835   | 0.9835    | 0.9835 | 0.9835 | 0.9986  |
-| 0.0137        | 1.6   | 400  | 0.0613          | 0.9845   | 0.9845    | 0.9845 | 0.9845 | 0.9989  |
-| 0.047         | 1.8   | 450  | 0.0472          | 0.9895   | 0.9895    | 0.9895 | 0.9895 | 0.9991  |
-| 0.0617        | 2.0   | 500  | 0.0497          | 0.9885   | 0.9885    | 0.9885 | 0.9885 | 0.9991  |
-| 0.0513        | 2.2   | 550  | 0.0534          | 0.987    | 0.9870    | 0.987  | 0.9870 | 0.9992  |
-| 0.0269        | 2.4   | 600  | 0.0467          | 0.9885   | 0.9885    | 0.9885 | 0.9885 | 0.9993  |
-| 0.001         | 2.6   | 650  | 0.0509          | 0.987    | 0.9870    | 0.987  | 0.9870 | 0.9994  |
-| 0.0195        | 2.8   | 700  | 0.0521          | 0.9895   | 0.9895    | 0.9895 | 0.9895 | 0.9992  |
-| 0.0011        | 3.0   | 750  | 0.0528          | 0.9885   | 0.9885    | 0.9885 | 0.9885 | 0.9992  |
-
-
 ### Framework versions
 
 - Transformers 4.53.3
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5adac38bac293102527e72aa0685f2cb1c8a167844d6d4695e5a7d2de65e353a
+oid sha256:054fb5dbd76db6c8ef457eb7cffc378cbb1975ee0ef6ba130cf8399b86249edf
 size 498612824
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4f01bf9e71e28f11c0816409abdc096384493efe1db27cfe973f5bf362886906
+oid sha256:33902044fa38cdde41de8098659106296c2e4ce04b62b087a93d6c5496f89057
 size 5304
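The two binary files change only their `oid` line because they are stored with Git LFS: the repository tracks a small pointer file, where the oid is the SHA-256 of the actual blob and the size is its byte length. A sketch of how such a pointer is built (`lfs_pointer` is a hypothetical helper for illustration, not part of git-lfs):

```python
# Construct a Git LFS pointer file like the ones shown in the diffs above.
# The oid is the SHA-256 hex digest of the blob; size is its byte length.
import hashlib

def lfs_pointer(blob: bytes) -> str:
    oid = hashlib.sha256(blob).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(blob)}\n"
    )

print(lfs_pointer(b"example weights"))
```

Because the pointer records only hash and size, an unchanged `size` with a changed `oid` (as in both diffs here) means the file was replaced by different contents of exactly the same byte length, which is expected when retraining overwrites fixed-shape model weights.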