Aime2k2 committed
Commit a71be41 · verified · 1 Parent(s): 5319050

Finetuned DistilBERT for spam email classification

Files changed (6)
  1. README.md +57 -15
  2. config.json +36 -0
  3. model.safetensors +3 -0
  4. tokenizer.json +0 -0
  5. tokenizer_config.json +15 -0
  6. training_args.bin +3 -0
README.md CHANGED
@@ -1,26 +1,68 @@
  ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: distilbert/distilbert-base-uncased
  tags:
- - ml-intern
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ - precision
+ - recall
+ model-index:
+ - name: spam-email-distilbert
+   results: []
  ---
 
- # Aime2k2/spam-email-distilbert
-
- <!-- ml-intern-provenance -->
- ## Generated by ML Intern
-
- This model repository was generated by [ML Intern](https://github.com/huggingface/ml-intern), an agent for machine learning research and development on the Hugging Face Hub.
-
- - Try ML Intern: https://smolagents-ml-intern.hf.space
- - Source code: https://github.com/huggingface/ml-intern
-
- ## Usage
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_id = 'Aime2k2/spam-email-distilbert'
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModelForCausalLM.from_pretrained(model_id)
- ```
-
- For non-causal architectures, replace `AutoModelForCausalLM` with the appropriate `AutoModel` class.
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # spam-email-distilbert
+
+ This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0500
+ - Accuracy: 0.9932
+ - F1: 0.9883
+ - Precision: 0.9933
+ - Recall: 0.9833
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 2
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
+ | 0.0026        | 1.0   | 1034 | 0.0495          | 0.9894   | 0.9816 | 0.9865    | 0.9767 |
+ | 0.0013        | 2.0   | 2068 | 0.0500          | 0.9932   | 0.9883 | 0.9933    | 0.9833 |
+
+
+ ### Framework versions
+
+ - Transformers 5.8.0
+ - Pytorch 2.11.0+cu130
+ - Datasets 4.8.5
+ - Tokenizers 0.22.2
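
The usage snippet removed above loaded this checkpoint with `AutoModelForCausalLM`, but config.json in this same commit declares `DistilBertForSequenceClassification`, so the sequence-classification auto class is the correct entry point. A minimal inference sketch (the sample email is illustrative; the label strings come from `id2label` in config.json):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Aime2k2/spam-email-distilbert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Illustrative input; any email body is handled the same way.
inputs = tokenizer(
    "Congratulations! You won a free cruise. Reply YES to claim.",
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# id2label maps class index 0 -> "not spam", 1 -> "spam"
print(model.config.id2label[logits.argmax(dim=-1).item()])
```

The same checkpoint also works through `pipeline("text-classification", model=model_id)` if a one-liner is preferred.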
config.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "activation": "gelu",
+   "architectures": [
+     "DistilBertForSequenceClassification"
+   ],
+   "attention_dropout": 0.1,
+   "bos_token_id": null,
+   "dim": 768,
+   "dropout": 0.1,
+   "dtype": "float32",
+   "eos_token_id": null,
+   "hidden_dim": 3072,
+   "id2label": {
+     "0": "not spam",
+     "1": "spam"
+   },
+   "initializer_range": 0.02,
+   "label2id": {
+     "not spam": 0,
+     "spam": 1
+   },
+   "max_position_embeddings": 512,
+   "model_type": "distilbert",
+   "n_heads": 12,
+   "n_layers": 6,
+   "pad_token_id": 0,
+   "problem_type": "single_label_classification",
+   "qa_dropout": 0.1,
+   "seq_classif_dropout": 0.2,
+   "sinusoidal_pos_embds": false,
+   "tie_weights_": true,
+   "tie_word_embeddings": true,
+   "transformers_version": "5.8.0",
+   "use_cache": false,
+   "vocab_size": 30522
+ }
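
The `id2label`/`label2id` pair above is what turns class indices into human-readable labels at inference time. A quick sketch for inspecting these fields without downloading the weights (the hub id is taken from this repository):

```python
from transformers import AutoConfig

# Fetches only config.json, not the model weights.
config = AutoConfig.from_pretrained("Aime2k2/spam-email-distilbert")
print(config.architectures)  # ['DistilBertForSequenceClassification']
print(config.id2label)       # {0: 'not spam', 1: 'spam'}
print(config.problem_type)   # single_label_classification
```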
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc3887e1bc5c3fa365f7006a4b23d6d171ea7f884c0b6d6ef728ca91181f009f
+ size 267832560
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "backend": "tokenizers",
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "is_local": false,
+   "local_files_only": false,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
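
These settings mirror the `distilbert-base-uncased` tokenizer: lowercasing before WordPiece, the standard BERT special tokens, and a 512-token cap matching the model's `max_position_embeddings`. A short sketch of what they imply in practice (the sample strings are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Aime2k2/spam-email-distilbert")

# do_lower_case=true: casing is folded away before WordPiece splitting
print(tokenizer.tokenize("FREE Prize"))  # ['free', 'prize']

# model_max_length=512: longer emails should be truncated to fit the model
enc = tokenizer("You won! " * 300, truncation=True, max_length=512)
print(len(enc["input_ids"]))  # 512
```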
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1daa3f13d06e600ccfaa98693959912a70101d4c3ce286d40f7059a941c113b2
+ size 5265
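
`training_args.bin` is a serialized copy of the `TrainingArguments` the Trainer ran with. A hypothetical reconstruction from the hyperparameters listed in the README above (the `output_dir` is a guess, and the real file carries many more defaults than shown here):

```python
from transformers import TrainingArguments

# Hypothetical: rebuilt from the README's hyperparameter list,
# not deserialized from training_args.bin itself.
args = TrainingArguments(
    output_dir="spam-email-distilbert",  # assumed, not recorded in the README
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```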