KitsuVp committed
Commit ca90ed0 · verified · 1 Parent(s): 1b90fa0

Model save
README.md CHANGED
@@ -1,203 +1,60 @@
  ---
- language: en
- license: apache-2.0
  tags:
- - causal-lm
- - research
- - fp8
- - attention
- - normalization
- - neollm
- datasets:
- - HuggingFaceFW/fineweb-edu
  ---
 
- # NeoLLM
-
- NeoLLM is a **158 M parameter** decoder-only language model trained from scratch on
- [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) in **FP8**
- precision, completing training in approximately **6 hours** on a single NVIDIA RTX 5090.
- It integrates a collection of recently published attention and normalization techniques
- into a single architecture, with the goal of studying how they interact during
- pretraining. The model is under active development, and the current checkpoint represents
- an intermediate training state.
 
- > **Author / contact:** [@Kyokopom](https://x.com/Kyokopom) on X
- > **Repository:** [KitsuVp/NeoLLM](https://huggingface.co/KitsuVp/NeoLLM)
-
- ---
-
- ## Architecture
-
- NeoLLM is a decoder-only transformer with the following configuration:
-
- | Parameter | Value |
- |---|---|
- | Hidden size | 512 |
- | Layers | 12 |
- | Attention heads | 8 |
- | KV heads (GQA) | 2 |
- | Head dim | 64 |
- | Intermediate size | 1536 |
- | Vocabulary | Qwen3 tokenizer (200,005 tokens) |
- | Context length | 512 tokens |
-
- ### Parameter breakdown
-
- | Parameter bucket | Count |
- |---|---|
- | **Total parameters** | 158.16M (158,156,792) |
- | **Embedding parameters** (tied) | 102.40M (102,402,560) |
- | **Non-embedding parameters** | 55.75M (55,754,232) |
- | **Effective trainable parameters** | 158.16M (158,156,792) |
-
- > Weight tying is **enabled**: the input embedding matrix and the language-model head
- > share the same parameters, so the tied head adds no parameters of its own and the
- > non-embedding budget is `total − embed = 55.75M`.
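Not part of the original card: a quick arithmetic check of the breakdown above. The embedding count is exactly vocabulary × hidden size, and with tying the total counts the embedding matrix only once.

```python
# Sanity-checking the parameter breakdown: with tied embeddings the LM head
# reuses the embedding matrix, so total = embedding + non-embedding.
vocab_size, hidden_size = 200_005, 512      # from the architecture table
embed_params = vocab_size * hidden_size     # tied input embedding / LM head
non_embed_params = 55_754_232               # from the breakdown table

print(embed_params)                     # 102402560
print(embed_params + non_embed_params)  # 158156792
```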
-
- ### Integrated techniques
-
- Each layer combines the following mechanisms simultaneously.
-
- **Normalization and residual stream**
-
- - **SeeDNorm** ([arXiv:2510.22777](https://arxiv.org/abs/2510.22777)) — Applied to the Q and K
-   projections. Dynamically rescales the normalization based on the input's own statistics,
-   making the attention geometry more stable across varying input distributions.
- - **PolyNorm** ([arXiv:2602.04902](https://arxiv.org/abs/2602.04902)) — Replaces the standard
-   MLP activation with three branches: linear (x), quadratic (x²), and cubic (x³) — each
-   normalized and combined with learned weights. This allows the MLP to express both linear
-   and non-linear relationships simultaneously.
- - **GPAS** ([arXiv:2506.22049](https://arxiv.org/abs/2506.22049)) — Gradient-Preserving
-   Activation Scaling. Applied to residual connections between sublayers; helps gradients
-   flow more cleanly during training without distorting the residual stream.
- - **LayerNorm Scaling / LNS** ([arXiv:2502.05795](https://arxiv.org/abs/2502.05795)) — Each
-   layer's output is scaled by 1/√ℓ, where ℓ is the layer index. Directly addresses the
-   "Curse of Depth" in Pre-LN transformers.
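The PolyNorm description above can be sketched in code. This is an illustrative reconstruction, not the repository's `modeling_neollm.py` implementation; the equal initial branch weights and the use of RMS normalization per branch are assumptions.

```python
# Illustrative PolyNorm sketch: three branches (x, x**2, x**3), each
# RMS-normalized, combined with learned weights plus a learned bias.
import torch
import torch.nn as nn

class PolyNorm(nn.Module):
    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.weights = nn.Parameter(torch.full((3,), 1 / 3))  # one weight per branch
        self.bias = nn.Parameter(torch.zeros(1))
        self.eps = eps

    def _rms_norm(self, x):
        # Normalize by the root-mean-square over the feature dimension.
        return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)

    def forward(self, x):
        return (self.weights[0] * self._rms_norm(x)
                + self.weights[1] * self._rms_norm(x ** 2)
                + self.weights[2] * self._rms_norm(x ** 3)
                + self.bias)

h = PolyNorm()(torch.randn(2, 8, 512))  # (batch, seq, hidden)
print(h.shape)  # torch.Size([2, 8, 512])
```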
-
- **Attention mechanisms**
-
- - **FAN** ([arXiv:2502.21309](https://arxiv.org/abs/2502.21309)) — Fourier Analysis Networks.
-   A portion of the input projection channels is dedicated to representing periodic patterns
-   (cosine/sine pairs), while the remainder handles standard linear content.
- - **MEA** ([arXiv:2601.19611](https://arxiv.org/abs/2601.19611)) — Explicit Multi-head
-   Attention. Adds small learnable interaction matrices between attention heads for K and V.
- - **LUCID** ([arXiv:2602.10410](https://arxiv.org/abs/2602.10410)) — Applies a learned
-   lower-triangular preconditioner to V before attention, decorrelating value representations
-   across positions.
- - **Affine-Scaled Attention** ([arXiv:2602.23057](https://arxiv.org/abs/2602.23057)) — Adds
-   two learnable per-head scalars (α and β) to the softmax weights:
-   `[α·softmax(QKᵀ) + β]·V`.
- - **XSA** ([arXiv:2603.09078](https://arxiv.org/abs/2603.09078)) — Exclusive Self Attention.
-   After computing attention, removes the component of the output aligned with the token's
-   own value vector.
- - **Directional Routing** ([arXiv:2603.14923](https://arxiv.org/abs/2603.14923)) — Each head
-   learns K=4 directions in the output space; a learned router suppresses the attention output
-   along each direction per input.
- - **Gated Attention** ([arXiv:2505.06708](https://arxiv.org/abs/2505.06708)) — A sigmoid gate
-   is applied to the attention output before the output projection, introducing non-linearity
-   and preventing attention sinks.
- - **Momentum Attention** ([arXiv:2411.03884](https://arxiv.org/abs/2411.03884)) — Modifies Q
-   and K by subtracting a fraction of the previous position's Q and K values (a causal
-   first difference).
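The affine-scaling formula can be made concrete with a toy single-head sketch. This is illustrative only: the real layer learns per-head α and β inside the full attention stack, and causal masking is omitted here.

```python
# Toy single-head Affine-Scaled Attention: the softmax weights A are
# transformed as alpha*A + beta before multiplying V. With alpha=1 and
# beta=0 this reduces to standard scaled dot-product attention.
import torch

def affine_scaled_attention(q, k, v, alpha=1.0, beta=0.0):
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    attn = torch.softmax(scores, dim=-1)
    return (alpha * attn + beta) @ v

torch.manual_seed(0)
q, k, v = (torch.randn(4, 64) for _ in range(3))  # 4 positions, head dim 64
standard = affine_scaled_attention(q, k, v)            # alpha=1, beta=0
scaled = affine_scaled_attention(q, k, v, 0.9, 0.01)   # learned-scalar case
print(standard.shape)  # torch.Size([4, 64])
```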
-
- **MLP**
-
- - **Learnable Multipliers** ([arXiv:2601.04890](https://arxiv.org/abs/2601.04890)) — Adds
-   per-row and per-column learnable scalar parameters to each linear layer.
- - **SimpleGPT** ([arXiv:2602.01212](https://arxiv.org/abs/2602.01212)) — A normalization
-   strategy derived from second-order geometry analysis, applied inside MLP projections to
-   improve optimization stability.
-
- ---
 
- ## Training
-
- | Setting | Value |
- |---|---|
- | Dataset | FineWeb-Edu (sample-10BT) |
- | Tokens seen | ~1.54B (46,875 steps × batch 64 × length 512) |
- | Precision | FP8 native (E4M3 weights/activations, E5M2 gradients) + BF16 fallback |
- | Optimizer | Conda (Column-Normalized Adam) + GPA |
- | Learning rate | 6e-04 with linear warmup (10% of steps) |
- | Weight decay | 0.1 |
- | Training time | ~6h 11m |
- | Hardware | NVIDIA RTX 5090 (single GPU) |
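The token count in the table follows directly from the schedule:

```python
# Token budget implied by the training schedule above.
steps, batch_size, seq_len = 46_875, 64, 512
tokens_seen = steps * batch_size * seq_len
print(tokens_seen)  # 1536000000, i.e. ~1.54B
```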
-
- ### Training curve
-
- | Step | Train Loss | Val Loss |
- |---|---|---|
- | 5,000 | 6.297 | 6.164 |
- | 10,000 | 5.631 | 5.491 |
- | 15,000 | 5.333 | 5.188 |
- | 20,000 | 5.158 | 4.997 |
- | 25,000 | 5.033 | 4.863 |
- | 30,000 | 4.933 | 4.761 |
- | 35,000 | 4.861 | 4.685 |
- | 40,000 | 4.761 | 4.598 |
- | 45,000 | 4.698 | 4.524 |
- | 46,875 | — | 4.507 |
 
- ---
 
- ## Limitations
 
- - **Token budget** — Only ~1.5 B tokens seen, below the estimated optimum; performance on
-   knowledge-intensive tasks will improve with more training.
- - **Gradient spike at step 40k** — A spike reorganized the attention pattern in layer 9 that
-   previously captured long-range token correlations. A checkpoint from ~step 38k is expected
-   to have better aggregate benchmark scores.
- - **PolyNorm exclusivity** — The quadratic branch has become partially redundant with the
-   linear branch; this will be corrected in the next training run.
- - **Base model only** — Not instruction-tuned or aligned; purely a next-token-prediction
-   base model.
-
- ---
-
- ## References
-
- All papers whose techniques are integrated into NeoLLM's architecture:
-
- | Technique | Paper title | arXiv |
- |---|---|---|
- | SeeDNorm | Self-Rescaled Dynamic Normalization | [2510.22777](https://arxiv.org/abs/2510.22777) |
- | MEA | Explicit Multi-head Attention | [2601.19611](https://arxiv.org/abs/2601.19611) |
- | Learnable Multipliers | Freeing the Scale of Language Model Matrix Layers | [2601.04890](https://arxiv.org/abs/2601.04890) |
- | Directional Routing | Directional Routing in Transformers | [2603.14923](https://arxiv.org/abs/2603.14923) |
- | XSA | Exclusive Self Attention | [2603.09078](https://arxiv.org/abs/2603.09078) |
- | Gated Attention | Gated Attention for LLMs | [2505.06708](https://arxiv.org/abs/2505.06708) |
- | Affine-Scaled Attention | Affine-Scaled Attention | [2602.23057](https://arxiv.org/abs/2602.23057) |
- | LNS | The Curse of Depth in LLMs | [2502.05795](https://arxiv.org/abs/2502.05795) |
- | LUCID | Attention with Preconditioned Representations | [2602.10410](https://arxiv.org/abs/2602.10410) |
- | FAN | Fourier Analysis Networks | [2502.21309](https://arxiv.org/abs/2502.21309) |
- | SimpleGPT | SimpleGPT | [2602.01212](https://arxiv.org/abs/2602.01212) |
- | GPAS | Gradient-Preserving Activation Scaling | [2506.22049](https://arxiv.org/abs/2506.22049) |
- | PolyNorm | PolyNorm / PolyCom | [2602.04902](https://arxiv.org/abs/2602.04902) |
- | Momentum Attention | Momentum Attention | [2411.03884](https://arxiv.org/abs/2411.03884) |
- | TWEO (analysis ref.) | Transformers Without Extreme Outliers | [2511.23225](https://arxiv.org/abs/2511.23225) |
 
- ---
 
- ## Citation
 
- ```bibtex
- @misc{neollm2026,
-   title = {NeoLLM: A Research Language Model Integrating Recent Attention and Normalization Techniques},
-   author = {KitsuVp},
-   year = {2026},
-   url = {https://huggingface.co/KitsuVp/NeoLLM}
- }
- ```
 
- ---
 
- ## Author
 
- [@Kyokopom](https://x.com/Kyokopom) on X
 
- ---
 
- ## License
 
- Apache 2.0
  ---
+ library_name: transformers
  tags:
+ - generated_from_trainer
+ model-index:
+ - name: NeoLLM
+   results: []
  ---
 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
 
+ # NeoLLM
 
+ This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 3.6334
 
+ ## Model description
 
+ More information needed
 
+ ## Intended uses & limitations
 
+ More information needed
 
+ ## Training and evaluation data
 
+ More information needed
 
+ ## Training procedure
 
+ ### Training hyperparameters
 
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0006
+ - train_batch_size: 64
+ - eval_batch_size: 64
+ - seed: 42
+ - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 0.1
+ - num_epochs: 1
 
+ ### Training results
 
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:-----:|:---------------:|
+ | 4.0003 | 0.32 | 5000 | 3.9566 |
+ | 3.7787 | 0.64 | 10000 | 3.7377 |
+ | 3.6911 | 0.96 | 15000 | 3.6386 |
+ | 3.6809 | 1.0 | 15625 | 3.6334 |
 
+ ### Framework versions
 
+ - Transformers 5.5.3
+ - Pytorch 2.11.0+cu130
+ - Datasets 4.8.4
+ - Tokenizers 0.22.2
chat_template.jinja CHANGED
@@ -1,54 +1,45 @@
- {%- if tools %}
- {{- '<|im_start|>system\n' }}
- {%- if messages[0]['role'] == 'system' %}
- {{- messages[0]['content'] }}
- {%- else %}
- {{- 'You are a helpful assistant.' }}
- {%- endif %}
- {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
- {%- for tool in tools %}
- {{- "\n" }}
- {{- tool | tojson }}
- {%- endfor %}
- {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
- {%- else %}
- {%- if messages[0]['role'] == 'system' %}
- {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
- {%- else %}
- {{- '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n' }}
- {%- endif %}
- {%- endif %}
- {%- for message in messages %}
- {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
- {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
- {%- elif message.role == "assistant" %}
- {{- '<|im_start|>' + message.role }}
- {%- if message.content %}
- {{- '\n' + message.content }}
- {%- endif %}
- {%- for tool_call in message.tool_calls %}
- {%- if tool_call.function is defined %}
- {%- set tool_call = tool_call.function %}
- {%- endif %}
- {{- '\n<tool_call>\n{"name": "' }}
- {{- tool_call.name }}
- {{- '", "arguments": ' }}
- {{- tool_call.arguments | tojson }}
- {{- '}\n</tool_call>' }}
- {%- endfor %}
- {{- '<|im_end|>\n' }}
- {%- elif message.role == "tool" %}
- {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
- {{- '<|im_start|>user' }}
- {%- endif %}
- {{- '\n<tool_response>\n' }}
- {{- message.content }}
- {{- '\n</tool_response>' }}
- {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
- {{- '<|im_end|>\n' }}
- {%- endif %}
- {%- endif %}
- {%- endfor %}
- {%- if add_generation_prompt %}
- {{- '<|im_start|>assistant\n' }}
- {%- endif %}
+ {{- bos_token -}}
+ {%- set keep_past_thinking = keep_past_thinking | default(false) -%}
+ {%- set ns = namespace(system_prompt="") -%}
+ {%- if messages[0]["role"] == "system" -%}
+ {%- set ns.system_prompt = messages[0]["content"] -%}
+ {%- set messages = messages[1:] -%}
+ {%- endif -%}
+ {%- if tools -%}
+ {%- set ns.system_prompt = ns.system_prompt + ("\n" if ns.system_prompt else "") + "List of tools: [" -%}
+ {%- for tool in tools -%}
+ {%- if tool is not string -%}
+ {%- set tool = tool | tojson -%}
+ {%- endif -%}
+ {%- set ns.system_prompt = ns.system_prompt + tool -%}
+ {%- if not loop.last -%}
+ {%- set ns.system_prompt = ns.system_prompt + ", " -%}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- set ns.system_prompt = ns.system_prompt + "]" -%}
+ {%- endif -%}
+ {%- if ns.system_prompt -%}
+ {{- "<|im_start|>system\n" + ns.system_prompt + "<|im_end|>\n" -}}
+ {%- endif -%}
+ {%- set ns.last_assistant_index = -1 -%}
+ {%- for message in messages -%}
+ {%- if message["role"] == "assistant" -%}
+ {%- set ns.last_assistant_index = loop.index0 -%}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- for message in messages -%}
+ {{- "<|im_start|>" + message["role"] + "\n" -}}
+ {%- set content = message["content"] -%}
+ {%- if content is not string -%}
+ {%- set content = content | tojson -%}
+ {%- endif -%}
+ {%- if message["role"] == "assistant" and not keep_past_thinking and loop.index0 != ns.last_assistant_index -%}
+ {%- if "</think>" in content -%}
+ {%- set content = content.split("</think>")[-1] | trim -%}
+ {%- endif -%}
+ {%- endif -%}
+ {{- content + "<|im_end|>\n" -}}
+ {%- endfor -%}
+ {%- if add_generation_prompt -%}
+ {{- "<|im_start|>assistant\n" -}}
+ {%- endif -%}
config.json CHANGED
@@ -11,12 +11,12 @@
  "AutoModel": "modeling_neollm.NeoLLMModel",
  "AutoModelForCausalLM": "modeling_neollm.NeoLLMForCausalLM"
  },
- "bos_token_id": 200004,
+ "bos_token_id": 1,
  "directional_routing_k": 4,
  "directional_routing_temp": 3.0,
  "dropout_rate": 0.1,
  "dtype": "bfloat16",
- "eos_token_id": 200004,
+ "eos_token_id": 7,
  "fan_ratio": 0.125,
  "fan_ratio_ffn": 0.0625,
  "generator_d_seed": 128,
@@ -45,7 +45,7 @@
  "num_attention_heads": 8,
  "num_hidden_layers": 12,
  "num_key_value_heads": 2,
- "pad_token_id": 200001,
+ "pad_token_id": 0,
  "partial_rotary_factor": 0.25,
  "polynorm_exclusive": false,
  "repo_d_p": 64,
@@ -58,16 +58,16 @@
  },
  "rope_theta": 10000.0,
  "tie_word_embeddings": true,
- "transformers_version": "5.5.0",
+ "transformers_version": "5.5.3",
  "use_affine_scaled_attention": true,
  "use_attn_res": false,
  "use_cache": false,
- "use_directional_routing": true,
+ "use_directional_routing": false,
  "use_hadamard_o_proj": true,
  "use_jtokm": false,
- "use_laurel": true,
- "use_laurel_lr": true,
- "use_laurel_rw": true,
+ "use_laurel": false,
+ "use_laurel_lr": false,
+ "use_laurel_rw": false,
  "use_lucid_attention": true,
  "use_mea_attention": true,
  "use_momentum_attention": true,
@@ -83,6 +83,6 @@
  "versatile_gumbel_temp_start": 5.0,
  "versatile_max_depth": 2,
  "versatile_total_experts": 4,
- "vocab_size": 200005,
+ "vocab_size": 64402,
  "xsa_eps": 1e-06
  }
generation_config.json CHANGED
@@ -1,11 +1,11 @@
  {
  "_from_model_config": true,
- "bos_token_id": 200004,
+ "bos_token_id": 1,
  "eos_token_id": [
- 200004
+ 7
  ],
  "output_attentions": false,
  "output_hidden_states": false,
- "pad_token_id": 200001,
- "transformers_version": "5.5.0"
+ "pad_token_id": 0,
+ "transformers_version": "5.5.3"
  }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4f99fdeed60c04f6ed6ac81513606c7c2a97a2d7d30c6e98f4e22716050066ed
- size 316396360
+ oid sha256:ca5fb5b2d4172761f6f863a1d57b1fab971b8c411b3239bd2d8925fb4bf171ea
+ size 156549256
tokenizer.json CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4ad7e65f213ca8139ec0ef1f3445de27c21a6fecdad89d50b2af23efd6bd7d76
- size 14937008
+ oid sha256:df1d8d5ec5d091b460562ffd545e4a5e91d17d4a0db7ebe733be34ed374377bd
+ size 4733389
tokenizer_config.json CHANGED
@@ -1,12 +1,19 @@
  {
- "add_prefix_space": false,
  "backend": "tokenizers",
- "bos_token": "<|endoftext|>",
+ "bos_token": "<|startoftext|>",
  "clean_up_tokenization_spaces": false,
- "eos_token": "<|endoftext|>",
+ "eos_token": "<|im_end|>",
  "is_local": false,
+ "legacy": false,
+ "model_input_names": [
+ "input_ids",
+ "attention_mask"
+ ],
  "model_max_length": 1000000000000000019884624838656,
- "pad_token": "<|padding|>",
+ "pad_token": "<|pad|>",
+ "sp_model_kwargs": {},
+ "spaces_between_special_tokens": false,
  "tokenizer_class": "TokenizersBackend",
- "unk_token": "<|endoftext|>"
+ "use_default_system_prompt": false,
+ "use_fast": true
  }
training_args.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:26a0e94f7a02ee2ce084ebf0698e80bc5004f06d8b188a3891c9033114516d03
3
  size 5329
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4f58299e731f70094b4ca534665893befd6f679e7b0ef988525dd21ec51db615
3
  size 5329