Model save

Files changed:
- README.md (+40 −183)
- chat_template.jinja (+45 −54)
- config.json (+9 −9)
- generation_config.json (+4 −4)
- model.safetensors (+2 −2)
- tokenizer.json (+2 −2)
- tokenizer_config.json (+12 −5)
- training_args.bin (+1 −1)
README.md
CHANGED
@@ -1,203 +1,60 @@

Removed:

---
license: apache-2.0
tags:
- normalization
- neollm
datasets:
- HuggingFaceFW/fineweb-edu
---
NeoLLM is a **135M-parameter** decoder-only language model trained from scratch on
[FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) in **FP8**
precision, completing training in approximately **6 hours** on a single NVIDIA RTX 5090.
It integrates a collection of recently published attention and normalization techniques
into a single architecture, with the goal of studying how they interact during
pretraining. The model is actively being developed, and the current checkpoint represents
an intermediate training state.

> **Repository:** [KitsuVp/NeoLLM](https://huggingface.co/KitsuVp/NeoLLM)

---
## Architecture

NeoLLM is a decoder-only transformer with the following configuration:

| Parameter | Value |
|---|---|
| Hidden size | 512 |
| Layers | 12 |
| Attention heads | 8 |
| KV heads (GQA) | 2 |
| Head dim | 64 |
| Intermediate size | 1536 |
| Vocabulary | Qwen3 tokenizer (200,005 tokens) |
| Context length | 512 tokens |
### Parameter breakdown

| Parameter bucket | Count |
|---|---|
| **Total parameters** | 158.16M (158,156,792) |
| **Embedding parameters** (tied) | 102.40M (102,402,560) |
| **Non-embedding parameters** | 55.75M (55,754,232) |
| **Effective trainable parameters** | 158.16M (158,156,792) |

> Weight tying is **enabled**: the input embedding matrix and the language-model head
> share the same parameters, so the non-embedding parameter budget is
> `total − embed = 55.75M`.
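The tying arithmetic above can be checked directly; all numbers come from the tables in this card:

```python
# Sanity-check of NeoLLM's parameter bookkeeping (numbers from the card above).
vocab, hidden = 200_005, 512

embed = vocab * hidden     # tied input embedding, shared with the LM head
total = 158_156_792        # reported total parameter count
non_embed = total - embed  # everything except the (tied) embedding

print(embed)      # 102402560 -> matches "Embedding parameters (tied)"
print(non_embed)  # 55754232  -> matches "Non-embedding parameters"
```

Because the head is tied, the embedding matrix is counted once, so total = embed + non-embed.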
### Integrated techniques

Each layer combines the following mechanisms simultaneously.

**Normalization and residual stream**

- **SeeDNorm** ([arXiv:2510.22777](https://arxiv.org/abs/2510.22777)) — Applied to Q and K
  projections. Dynamically rescales the normalization based on the input's own statistics,
  making the attention geometry more stable across varying input distributions.
- **PolyNorm** ([arXiv:2602.04902](https://arxiv.org/abs/2602.04902)) — Replaces the standard
  MLP activation with three branches: linear (x), quadratic (x²), and cubic (x³) — each
  normalized and combined with learned weights. This allows the MLP to express both linear
  and non-linear relationships simultaneously.
- **GPAS** ([arXiv:2506.22049](https://arxiv.org/abs/2506.22049)) — Gradient-Preserving
  Activation Scaling. Applied to residual connections between sublayers; helps gradients
  flow more cleanly during training without distorting the residual stream.
- **LayerNorm Scaling / LNS** ([arXiv:2502.05795](https://arxiv.org/abs/2502.05795)) — Each
  layer's output is scaled by 1/√ℓ, where ℓ is the layer index. Directly addresses the
  "Curse of Depth" in Pre-LN transformers.
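The PolyNorm bullet above can be sketched as follows. The three-branch structure follows the description here; the use of RMS normalization and equal initial weights are illustrative assumptions, not NeoLLM's exact implementation:

```python
import torch

def rms_norm(x, eps=1e-6):
    # RMS normalization over the last dimension.
    return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)

class PolyNorm(torch.nn.Module):
    """Three elementwise branches (x, x^2, x^3), each normalized,
    combined with learned scalar weights."""
    def __init__(self):
        super().__init__()
        self.weights = torch.nn.Parameter(torch.ones(3) / 3)

    def forward(self, x):
        branches = [rms_norm(x), rms_norm(x**2), rms_norm(x**3)]
        return sum(w * b for w, b in zip(self.weights, branches))

y = PolyNorm()(torch.randn(2, 8, 512))
print(y.shape)  # torch.Size([2, 8, 512])
```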
**Attention mechanisms**

- **FAN** ([arXiv:2502.21309](https://arxiv.org/abs/2502.21309)) — Fourier Analysis Networks.
  A portion of the input projection channels is dedicated to representing periodic patterns
  (cosine/sine pairs), while the remainder handles standard linear content.
- **MEA** ([arXiv:2601.19611](https://arxiv.org/abs/2601.19611)) — Explicit Multi-head
  Attention. Adds small learnable interaction matrices between attention heads for K and V.
- **LUCID** ([arXiv:2602.10410](https://arxiv.org/abs/2602.10410)) — Applies a learned
  lower-triangular preconditioner to V before attention, decorrelating value representations
  across positions.
- **Affine-Scaled Attention** ([arXiv:2602.23057](https://arxiv.org/abs/2602.23057)) — Adds
  two learnable per-head scalars (α and β) to the softmax weights:
  `[α·softmax(QKᵀ) + β]·V`.
- **XSA** ([arXiv:2603.09078](https://arxiv.org/abs/2603.09078)) — Exclusive Self Attention.
  After computing attention, removes the component of the output aligned with the token's
  own value vector.
- **Directional Routing** ([arXiv:2603.14923](https://arxiv.org/abs/2603.14923)) — Each head
  learns K=4 directions in the output space; a learned router suppresses the attention output
  along each direction per input.
- **Gated Attention** ([arXiv:2505.06708](https://arxiv.org/abs/2505.06708)) — A sigmoid gate
  is applied to the attention output before the output projection, introducing non-linearity
  and preventing attention sinks.
- **Momentum Attention** ([arXiv:2411.03884](https://arxiv.org/abs/2411.03884)) — Modifies Q
  and K by subtracting a fraction of the previous position's Q and K values (a causal
  first difference).
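The affine-scaled attention formula above can be sketched directly. Tensor shapes and the per-head broadcast are illustrative assumptions; with α = 1 and β = 0 this reduces to standard scaled dot-product attention:

```python
import torch

def affine_scaled_attention(q, k, v, alpha, beta):
    # q, k, v: (batch, heads, seq, head_dim); alpha, beta: (heads,)
    d = q.shape[-1]
    weights = torch.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)
    # Per-head affine rescaling of the softmax weights: a*softmax(QK^T) + b.
    weights = alpha[None, :, None, None] * weights + beta[None, :, None, None]
    return weights @ v

q = k = v = torch.randn(1, 8, 16, 64)
alpha, beta = torch.ones(8), torch.zeros(8)
out = affine_scaled_attention(q, k, v, alpha, beta)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```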
**MLP**

- **Learnable Multipliers** ([arXiv:2601.04890](https://arxiv.org/abs/2601.04890)) — Adds
  per-row and per-column learnable scalar parameters to each linear layer.
- **SimpleGPT** ([arXiv:2602.01212](https://arxiv.org/abs/2602.01212)) — A normalization
  strategy derived from second-order geometry analysis, applied inside MLP projections to
  improve optimization stability.
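The learnable-multipliers idea can be sketched on a single linear layer; the class and parameter names are hypothetical, and the placement of the scales (effectively diag(row) · W · diag(col)) is an assumption based on the description above:

```python
import torch

class MultiplierLinear(torch.nn.Module):
    """Linear layer with one learnable scalar per output row and per input column."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, d_out, bias=False)
        self.row = torch.nn.Parameter(torch.ones(d_out))  # per-output-row scale
        self.col = torch.nn.Parameter(torch.ones(d_in))   # per-input-column scale

    def forward(self, x):
        # Equivalent to x @ (diag(col) @ W.T @ diag(row)).
        return self.row * self.linear(self.col * x)

layer = MultiplierLinear(512, 1536)  # NeoLLM's hidden -> intermediate sizes
out = layer(torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 1536])
```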

---
## Training

| Detail | Value |
|---|---|
| Dataset | FineWeb-Edu (sample-10BT) |
| Tokens seen | ~1.54B (46,875 steps × batch 64 × length 512) |
| Precision | FP8 native (E4M3 weights/activations, E5M2 gradients) + BF16 fallback |
| Optimizer | Conda (Column-Normalized Adam) + GPA |
| Learning rate | 6e-04 with linear warmup (10 % of steps) |
| Weight decay | 0.1 |
| Training time | ~6h 11m |
| Hardware | NVIDIA RTX 5090 (single GPU) |
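The "tokens seen" figure follows directly from steps × batch × sequence length:

```python
# Token-count arithmetic for the training run described above.
steps, batch, seq_len = 46_875, 64, 512
tokens = steps * batch * seq_len
print(f"{tokens:,}")  # 1,536,000,000  (~1.54B)
```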
### Training curve

| Step | Train Loss | Val Loss |
|---|---|---|
| 5,000 | 6.297 | 6.164 |
| 10,000 | 5.631 | 5.491 |
| 15,000 | 5.333 | 5.188 |
| 20,000 | 5.158 | 4.997 |
| 25,000 | 5.033 | 4.863 |
| 30,000 | 4.933 | 4.761 |
| 35,000 | 4.861 | 4.685 |
| 40,000 | 4.761 | 4.598 |
| 45,000 | 4.698 | 4.524 |
| 46,875 | — | 4.507 |
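Since validation loss is average next-token cross-entropy in nats, the final checkpoint's perplexity is exp(loss):

```python
import math

final_val_loss = 4.507            # last row of the training curve above
print(math.exp(final_val_loss))   # ~90.6
```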
## Known issues

- will improve with more training.
- **Gradient spike at step 40k** — Reorganized the attention pattern in layer 9 that
  previously captured long-range token correlations. A checkpoint from ~step 38k is expected
  to have better aggregate benchmark scores.
- **PolyNorm exclusivity** — The quadratic branch has become partially redundant with the
  linear branch. This will be corrected in the next training run.
- **Base model only** — Not instruction-tuned or aligned; purely a next-token-prediction
  base model.
## References

All papers whose techniques are integrated into NeoLLM's architecture:

| Technique | Paper title | arXiv |
|---|---|---|
| SeeDNorm | Self-Rescaled Dynamic Normalization | [2510.22777](https://arxiv.org/abs/2510.22777) |
| MEA | Explicit Multi-head Attention | [2601.19611](https://arxiv.org/abs/2601.19611) |
| Learnable Multipliers | Freeing the Scale of Language Model Matrix Layers | [2601.04890](https://arxiv.org/abs/2601.04890) |
| Directional Routing | Directional Routing in Transformers | [2603.14923](https://arxiv.org/abs/2603.14923) |
| XSA | Exclusive Self Attention | [2603.09078](https://arxiv.org/abs/2603.09078) |
| Gated Attention | Gated Attention for LLMs | [2505.06708](https://arxiv.org/abs/2505.06708) |
| Affine-Scaled Attention | Affine-Scaled Attention | [2602.23057](https://arxiv.org/abs/2602.23057) |
| LNS | The Curse of Depth in LLMs | [2502.05795](https://arxiv.org/abs/2502.05795) |
| LUCID | Attention with Preconditioned Representations | [2602.10410](https://arxiv.org/abs/2602.10410) |
| FAN | Fourier Analysis Networks | [2502.21309](https://arxiv.org/abs/2502.21309) |
| SimpleGPT | SimpleGPT | [2602.01212](https://arxiv.org/abs/2602.01212) |
| GPAS | Gradient-Preserving Activation Scaling | [2506.22049](https://arxiv.org/abs/2506.22049) |
| PolyNorm | PolyNorm / PolyCom | [2602.04902](https://arxiv.org/abs/2602.04902) |
| Momentum Attention | Momentum Attention | [2411.03884](https://arxiv.org/abs/2411.03884) |
| TWEO (analysis ref.) | Transformers Without Extreme Outliers | [2511.23225](https://arxiv.org/abs/2511.23225) |
## Citation

```
@misc{neollm2026,
  title  = {NeoLLM: A Research Language Model Integrating Recent Attention and Normalization Techniques},
  author = {KitsuVp},
  year   = {2026},
  url    = {https://huggingface.co/KitsuVp/NeoLLM}
}
```
Added:

---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: NeoLLM
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# NeoLLM
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6334
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.1
- num_epochs: 1
### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.0003        | 0.32  | 5000  | 3.9566          |
| 3.7787        | 0.64  | 10000 | 3.7377          |
| 3.6911        | 0.96  | 15000 | 3.6386          |
| 3.6809        | 1.0   | 15625 | 3.6334          |
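The step and epoch columns of the results table are internally consistent with the stated batch size of 64:

```python
# Epoch arithmetic implied by the results table above.
steps_per_epoch, batch = 15_625, 64
examples = steps_per_epoch * batch
print(examples)                 # 1000000 training examples per epoch
print(5000 / steps_per_epoch)   # 0.32, matching the epoch column at step 5000
```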
### Framework versions

- Transformers 5.5.3
- Pytorch 2.11.0+cu130
- Datasets 4.8.4
- Tokenizers 0.22.2
chat_template.jinja
CHANGED
@@ -1,54 +1,45 @@

Removed:

…
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
Added:

{{- bos_token -}}
{%- set keep_past_thinking = keep_past_thinking | default(false) -%}
{%- set ns = namespace(system_prompt="") -%}
{%- if messages[0]["role"] == "system" -%}
{%- set ns.system_prompt = messages[0]["content"] -%}
{%- set messages = messages[1:] -%}
{%- endif -%}
{%- if tools -%}
{%- set ns.system_prompt = ns.system_prompt + ("\n" if ns.system_prompt else "") + "List of tools: [" -%}
{%- for tool in tools -%}
{%- if tool is not string -%}
{%- set tool = tool | tojson -%}
{%- endif -%}
{%- set ns.system_prompt = ns.system_prompt + tool -%}
{%- if not loop.last -%}
{%- set ns.system_prompt = ns.system_prompt + ", " -%}
{%- endif -%}
{%- endfor -%}
{%- set ns.system_prompt = ns.system_prompt + "]" -%}
{%- endif -%}
{%- if ns.system_prompt -%}
{{- "<|im_start|>system\n" + ns.system_prompt + "<|im_end|>\n" -}}
{%- endif -%}
{%- set ns.last_assistant_index = -1 -%}
{%- for message in messages -%}
{%- if message["role"] == "assistant" -%}
{%- set ns.last_assistant_index = loop.index0 -%}
{%- endif -%}
{%- endfor -%}
{%- for message in messages -%}
{{- "<|im_start|>" + message["role"] + "\n" -}}
{%- set content = message["content"] -%}
{%- if content is not string -%}
{%- set content = content | tojson -%}
{%- endif -%}
{%- if message["role"] == "assistant" and not keep_past_thinking and loop.index0 != ns.last_assistant_index -%}
{%- if "</think>" in content -%}
{%- set content = content.split("</think>")[-1] | trim -%}
{%- endif -%}
{%- endif -%}
{{- content + "<|im_end|>\n" -}}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{- "<|im_start|>assistant\n" -}}
{%- endif -%}
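To see the ChatML-style transcript shape the new template produces, a trimmed inline copy (no tools handling, no thinking-stripping) can be rendered with plain Jinja2. This is a sketch for illustration, not a substitute for `tokenizer.apply_chat_template`:

```python
from jinja2 import Template

# Trimmed sketch of the chat template: bos token, one <|im_start|>...<|im_end|>
# block per message, optional trailing assistant prompt.
trimmed = (
    "{{- bos_token -}}"
    "{%- for message in messages -%}"
    '{{- "<|im_start|>" + message["role"] + "\\n" + message["content"] + "<|im_end|>\\n" -}}'
    "{%- endfor -%}"
    "{%- if add_generation_prompt -%}{{- '<|im_start|>assistant\\n' -}}{%- endif -%}"
)
out = Template(trimmed).render(
    bos_token="<|startoftext|>",   # bos_token from tokenizer_config.json below
    messages=[{"role": "user", "content": "Hi"}],
    add_generation_prompt=True,
)
print(out)
# <|startoftext|><|im_start|>user
# Hi<|im_end|>
# <|im_start|>assistant
```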
config.json
CHANGED
@@ -11,12 +11,12 @@
   "AutoModel": "modeling_neollm.NeoLLMModel",
   "AutoModelForCausalLM": "modeling_neollm.NeoLLMForCausalLM"
   },
-  "bos_token_id":
+  "bos_token_id": 1,
   "directional_routing_k": 4,
   "directional_routing_temp": 3.0,
   "dropout_rate": 0.1,
   "dtype": "bfloat16",
-  "eos_token_id":
+  "eos_token_id": 7,
   "fan_ratio": 0.125,
   "fan_ratio_ffn": 0.0625,
   "generator_d_seed": 128,
@@ -45,7 +45,7 @@
   "num_attention_heads": 8,
   "num_hidden_layers": 12,
   "num_key_value_heads": 2,
-  "pad_token_id":
+  "pad_token_id": 0,
   "partial_rotary_factor": 0.25,
   "polynorm_exclusive": false,
   "repo_d_p": 64,
@@ -58,16 +58,16 @@
   },
   "rope_theta": 10000.0,
   "tie_word_embeddings": true,
-  "transformers_version": "5.5.
+  "transformers_version": "5.5.3",
   "use_affine_scaled_attention": true,
   "use_attn_res": false,
   "use_cache": false,
-  "use_directional_routing":
+  "use_directional_routing": false,
   "use_hadamard_o_proj": true,
   "use_jtokm": false,
-  "use_laurel":
-  "use_laurel_lr":
-  "use_laurel_rw":
+  "use_laurel": false,
+  "use_laurel_lr": false,
+  "use_laurel_rw": false,
   "use_lucid_attention": true,
   "use_mea_attention": true,
   "use_momentum_attention": true,
@@ -83,6 +83,6 @@
   "versatile_gumbel_temp_start": 5.0,
   "versatile_max_depth": 2,
   "versatile_total_experts": 4,
-  "vocab_size":
+  "vocab_size": 64402,
   "xsa_eps": 1e-06
 }
generation_config.json
CHANGED
@@ -1,11 +1,11 @@
 {
   "_from_model_config": true,
-  "bos_token_id":
+  "bos_token_id": 1,
   "eos_token_id": [
-
+    7
   ],
   "output_attentions": false,
   "output_hidden_states": false,
-  "pad_token_id":
-  "transformers_version": "5.5.
+  "pad_token_id": 0,
+  "transformers_version": "5.5.3"
 }
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ca5fb5b2d4172761f6f863a1d57b1fab971b8c411b3239bd2d8925fb4bf171ea
+size 156549256
tokenizer.json
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:df1d8d5ec5d091b460562ffd545e4a5e91d17d4a0db7ebe733be34ed374377bd
+size 4733389
tokenizer_config.json
CHANGED
@@ -1,12 +1,19 @@
 {
-  "add_prefix_space": false,
   "backend": "tokenizers",
-  "bos_token": "<|
+  "bos_token": "<|startoftext|>",
   "clean_up_tokenization_spaces": false,
-  "eos_token": "<|
+  "eos_token": "<|im_end|>",
   "is_local": false,
+  "legacy": false,
+  "model_input_names": [
+    "input_ids",
+    "attention_mask"
+  ],
   "model_max_length": 1000000000000000019884624838656,
-  "pad_token": "<|
+  "pad_token": "<|pad|>",
+  "sp_model_kwargs": {},
+  "spaces_between_special_tokens": false,
   "tokenizer_class": "TokenizersBackend",
-  "
+  "use_default_system_prompt": false,
+  "use_fast": true
 }
training_args.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:4f58299e731f70094b4ca534665893befd6f679e7b0ef988525dd21ec51db615
 size 5329