Mingke977 committed on
Commit
87b0d64
·
verified ·
1 Parent(s): 38dc87b

Add files using upload-large-folder tool

README.md CHANGED
@@ -1,384 +1,49 @@
1
  ---
2
- language:
3
- - zh
4
- - en
5
- pipeline_tag: text-generation
6
  library_name: transformers
7
- ---
8
- <div align="center">
9
- <picture>
10
- <img src="figures/joyai-logo.png" width="30%" alt="JoyAI-LLM Flash">
11
- </picture>
12
- </div>
13
- <hr>
14
-
15
- <div align="center" style="line-height: 1;">
16
- <a href="https://huggingface.co/jdopensource" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-JD-ffc107?color=ffc107&logoColor=white"/></a>
17
- <a href="https://huggingface.co/jdopensource/JoyAI-LLM-Flash/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/License-Modified_MIT-f5de53?&color=f5de53"/></a>
18
- </div>
19
-
20
- <p align="center">
21
- <b>📰&nbsp;&nbsp;<a href="https://huggingface.co/jdopensource/JoyAI-LLM-Flash/blob/main/JoyAI_Flash_techreport.pdf">Tech Report</a>
22
- </p>
23
-
24
-
25
-
26
- ## 1. Model Introduction
27
-
28
- JoyAI-LLM Flash is a state-of-the-art medium-sized instruct language model with 3 billion activated parameters and 48 billion total parameters. It was pretrained on 20 trillion text tokens using the Muon optimizer, followed by large-scale supervised fine-tuning (SFT), direct preference optimization (DPO), and reinforcement learning (RL) across diverse environments. JoyAI-LLM Flash achieves strong performance on frontier knowledge, reasoning, and coding tasks, as well as agentic capabilities.
29
-
30
- ### Key Features
31
-
32
- - Fibration Policy Optimization: Introduces fiber bundle theory into reinforcement learning, proposing a novel optimization framework, FiberPO. This method is specifically designed to handle the challenges of large-scale and heterogeneous agent training, improving stability and robustness under complex data distributions. [paper link](https://arxiv.org/abs/2603.08239)
33
- Training-Inference Collaboration: applies the Muon optimizer with dense MTP and develops novel optimization techniques to resolve instabilities while scaling up, delivering 1.3× to 1.7× the throughput of the non-MTP version.
34
- - Agentic Intelligence: designed for tool use, reasoning, and autonomous problem-solving.
35
-
36
- ## 2. Model Summary
37
-
38
- | | |
39
- | :-----------------------------------------: | :----------------------: |
40
- | **Architecture** | Mixture-of-Experts (MoE) |
41
- | **Total Parameters** | 48B |
42
- | **Activated Parameters** | 3B |
43
- | **Number of Layers** (Dense layer included) | 40 |
44
- | **Number of Dense Layers** | 1 |
45
- | **Attention Hidden Dimension** | 2048 |
46
- | **MoE Hidden Dimension** (per Expert) | 768 |
47
- | **Number of Attention Heads** | 32 |
48
- | **Number of Experts** | 256 |
49
- | **Selected Experts per Token** | 8 |
50
- | **Number of Shared Experts** | 1 |
51
- | **Vocabulary Size** | 129K |
52
- | **Context Length** | 128K |
53
- | **Attention Mechanism** | MLA |
54
- | **Activation Function** | SwiGLU |
56
-
57
-
58
- ## 3. Evaluation Results
59
-
60
- <table>
61
- <thead>
62
- <tr>
63
- <th align="center">Benchmark</th>
64
- <th align="center"><sup>JoyAI-LLM Flash</sup></th>
65
- <th align="center"><sup>Qwen3-30B-A3B-Instruct-2507</sup></th>
66
- <th align="center"><sup>GLM-4.7-Flash<br>(Non-thinking)</sup></th>
67
- </tr>
68
- </thead>
69
- <tbody>
70
-
71
-
72
- <tr>
73
- <td align="center" colspan=8><strong>Knowledge &amp; Alignment</strong></td>
74
- </tr>
75
- <tr>
76
- <td align="center" style="vertical-align: middle">MMLU</td>
77
- <td align="center" style="vertical-align: middle"><strong>89.50</strong></td>
78
- <td align="center" style="vertical-align: middle">86.87</td>
79
- <td align="center" style="vertical-align: middle">80.53</td>
80
- </tr>
81
- <tr>
82
- <td align="center" style="vertical-align: middle">MMLU-Pro</td>
83
- <td align="center" style="vertical-align: middle"><strong>81.02</strong></td>
84
- <td align="center" style="vertical-align: middle">73.88</td>
85
- <td align="center" style="vertical-align: middle">63.62</td>
86
- </tr>
87
- <tr>
88
- <td align="center" style="vertical-align: middle">CMMLU</td>
89
- <td align="center" style="vertical-align: middle"><strong>87.03</strong></td>
90
- <td align="center" style="vertical-align: middle">85.88</td>
91
- <td align="center" style="vertical-align: middle">75.85</td>
92
- </tr>
93
- <tr>
94
- <td align="center" style="vertical-align: middle">GPQA-Diamond</td>
95
- <td align="center" style="vertical-align: middle"><strong>74.43</strong></td>
96
- <td align="center" style="vertical-align: middle">68.69</td>
97
- <td align="center" style="vertical-align: middle">39.90</td>
98
- </tr>
99
- <tr>
100
- <td align="center" style="vertical-align: middle">SuperGPQA</td>
101
- <td align="center" style="vertical-align: middle"><strong>55.00</strong></td>
102
- <td align="center" style="vertical-align: middle">52.00</td>
103
- <td align="center" style="vertical-align: middle">32.00</td>
104
- </tr>
105
- <tr>
106
- <td align="center" style="vertical-align: middle">LiveBench</td>
107
- <td align="center" style="vertical-align: middle"><strong>72.90</strong></td>
108
- <td align="center" style="vertical-align: middle">59.70</td>
109
- <td align="center" style="vertical-align: middle">43.10</td>
110
- </tr>
111
- <tr>
112
- <td align="center" style="vertical-align: middle">IFEval</td>
113
- <td align="center" style="vertical-align: middle"><strong>86.69</strong></td>
114
- <td align="center" style="vertical-align: middle">83.18</td>
115
- <td align="center" style="vertical-align: middle">82.44</td>
116
- </tr>
117
- <tr>
118
- <td align="center" style="vertical-align: middle">AlignBench</td>
119
- <td align="center" style="vertical-align: middle"><strong>8.24</strong></td>
120
- <td align="center" style="vertical-align: middle">8.07</td>
121
- <td align="center" style="vertical-align: middle">6.85</td>
122
- </tr>
123
- <tr>
124
- <td align="center" style="vertical-align: middle">HellaSwag</td>
125
- <td align="center" style="vertical-align: middle"><strong>91.79</strong></td>
126
- <td align="center" style="vertical-align: middle">89.90</td>
127
- <td align="center" style="vertical-align: middle">60.84</td>
128
- </tr>
129
-
130
- <tr>
131
- <td align="center" colspan=8><strong>Coding</strong></td>
132
- </tr>
133
- <tr>
134
- <td align="center" style="vertical-align: middle">HumanEval</td>
135
- <td align="center" style="vertical-align: middle"><strong>96.34</strong></td>
136
- <td align="center" style="vertical-align: middle">95.12</td>
137
- <td align="center" style="vertical-align: middle">74.39</td>
138
- </tr>
139
- <tr>
140
- <td align="center" style="vertical-align: middle">LiveCodeBench</td>
141
- <td align="center" style="vertical-align: middle"><strong>65.60</strong></td>
142
- <td align="center" style="vertical-align: middle">39.71</td>
143
- <td align="center" style="vertical-align: middle">27.43</td>
144
- </tr>
145
- <tr>
146
- <td align="center" style="vertical-align: middle">SciCode</td>
147
- <td align="center" style="vertical-align: middle"><strong>3.08/22.92</strong></td>
148
- <td align="center" style="vertical-align: middle"><strong>3.08/22.92</strong></td>
149
- <td align="center" style="vertical-align: middle">3.08/15.11</td>
150
- </tr>
151
- <tr>
152
- <td align="center" colspan=8><strong>Mathematics</strong></td>
153
- </tr>
154
- <tr>
155
- <td align="center" style="vertical-align: middle">GSM8K</td>
156
- <td align="center" style="vertical-align: middle"><strong>95.83</strong></td>
157
- <td align="center" style="vertical-align: middle">79.83</td>
158
- <td align="center" style="vertical-align: middle">81.88</td>
159
- </tr>
160
- <tr>
161
- <td align="center" style="vertical-align: middle">AIME2025</td>
162
- <td align="center" style="vertical-align: middle"><strong>65.83</strong></td>
163
- <td align="center" style="vertical-align: middle">62.08</td>
164
- <td align="center" style="vertical-align: middle">24.17</td>
165
- </tr>
166
- <tr>
167
- <td align="center" style="vertical-align: middle">MATH 500</td>
168
- <td align="center" style="vertical-align: middle"><strong>97.10</strong></td>
169
- <td align="center" style="vertical-align: middle">89.80</td>
170
- <td align="center" style="vertical-align: middle">90.90</td>
171
- </tr>
172
-
173
- <tr>
174
- <td align="center" colspan=8><strong>Agentic</strong></td>
175
- </tr>
176
- <tr>
177
- <td align="center" style="vertical-align: middle">SWE-bench Verified</td>
178
- <td align="center" style="vertical-align: middle"><strong>60.60</strong></td>
179
- <td align="center" style="vertical-align: middle">24.44</td>
180
- <td align="center" style="vertical-align: middle">51.60</td>
181
- </tr>
182
- <tr>
183
- <td align="center" style="vertical-align: middle">Tau2-Retail</td>
184
- <td align="center" style="vertical-align: middle"><strong>67.55</strong></td>
185
- <td align="center" style="vertical-align: middle">53.51</td>
186
- <td align="center" style="vertical-align: middle">62.28</td>
187
- </tr>
188
- <tr>
189
- <td align="center" style="vertical-align: middle">Tau2-Airline</td>
190
- <td align="center" style="vertical-align: middle"><strong>54.00</strong></td>
191
- <td align="center" style="vertical-align: middle">32.00</td>
192
- <td align="center" style="vertical-align: middle">52.00</td>
193
- </tr>
194
- <tr>
195
- <td align="center" style="vertical-align: middle">Tau2-Telecom</td>
196
- <td align="center" style="vertical-align: middle">79.83</td>
197
- <td align="center" style="vertical-align: middle">4.39</td>
198
- <td align="center" style="vertical-align: middle"><strong>88.60</strong></td>
199
- </tr>
200
-
201
- <tr>
202
- <td align="center" colspan=8><strong>Long Context</strong></td>
203
- </tr>
204
- <tr>
205
- <td align="center" style="vertical-align: middle">RULER</td>
206
- <td align="center" style="vertical-align: middle"><strong>95.60</strong></td>
207
- <td align="center" style="vertical-align: middle">89.66</td>
208
- <td align="center" style="vertical-align: middle">56.12</td>
209
- </tr>
210
- </tbody>
211
- </table>
212
-
213
-
214
- ## 4. Deployment
215
-
216
- > [!Note]
217
- > You can access the JoyAI-LLM Flash API at https://docs.jdcloud.com/cn/jdaip/chat, where we provide an OpenAI/Anthropic-compatible API.
- > Currently, we recommend running JoyAI-LLM Flash on the following inference engines:
219
-
220
- * vLLM
221
- * SGLang
222
-
223
- The minimum version requirement for `transformers` is `4.57.1`.
224
-
225
- Deployment examples can be found in the [Model Deployment Guide](docs/deploy_guidance.md).
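As a quick pre-flight check, the snippet below compares the installed `transformers` version against the minimum stated above. This is a minimal sketch using plain tuple comparison of dotted release versions (for pre-release version strings, prefer a dedicated parser such as `packaging.version`):

```python
def meets_min_version(installed: str, minimum: str) -> bool:
    """Compare dotted release versions numerically, e.g. '4.57.1'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(minimum)


# The README requires transformers >= 4.57.1.
print(meets_min_version("4.57.3", "4.57.1"))  # True
print(meets_min_version("4.44.2", "4.57.1"))  # False
```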
226
-
227
-
228
-
229
- ## 5. Model Usage
230
-
231
- The demos below show how to call our official API.
232
-
233
- For third-party APIs deployed with vLLM or SGLang, please note that:
234
-
235
- > [!Note]
- > Recommended sampling parameters: `temperature=0.6`, `top_p=1.0`
236
-
237
- ### Chat Completion
238
-
239
- This is a simple chat completion script that shows how to call the JoyAI-LLM Flash API.
240
-
241
- ```python
- from openai import OpenAI
-
- client = OpenAI(base_url="http://IP:PORT/v1", api_key="EMPTY")
-
-
- def simple_chat(client: OpenAI):
-     messages = [
-         {
-             "role": "user",
-             "content": [
-                 {
-                     "type": "text",
-                     "text": "which one is bigger, 9.11 or 9.9? think carefully.",
-                 }
-             ],
-         },
-     ]
-     model_name = client.models.list().data[0].id
-     response = client.chat.completions.create(
-         model=model_name, messages=messages, stream=False, max_tokens=4096
-     )
-     print(f"response: {response.choices[0].message.content}")
-
-
- if __name__ == "__main__":
-     simple_chat(client)
- ```
269
-
270
-
271
- ### Tool call Completion
272
-
273
- This is a simple tool call completion script that shows how to call the JoyAI-LLM Flash API.
274
-
275
- ```python
- import json
-
- from openai import OpenAI
-
- client = OpenAI(base_url="http://IP:PORT/v1", api_key="EMPTY")
-
-
- def my_calculator(expression: str) -> str:
-     # Demo only: eval() must never be used on untrusted input.
-     return str(eval(expression))
-
-
- def rewrite(text: str) -> str:
-     return str(text)
-
-
- def simple_tool_call(client: OpenAI):
-     messages = [
-         {
-             "role": "user",
-             "content": [
-                 {
-                     "type": "text",
-                     "text": "use my functions to compute the results for the equations: 6+1",
-                 },
-             ],
-         },
-     ]
-     tools = [
-         {
-             "type": "function",
-             "function": {
-                 "name": "my_calculator",
-                 "description": "A calculator that can evaluate a mathematical equation and compute its results.",
-                 "parameters": {
-                     "type": "object",
-                     "properties": {
-                         "expression": {
-                             "type": "string",
-                             "description": "The mathematical expression to evaluate.",
-                         },
-                     },
-                     "required": ["expression"],
-                 },
-             },
-         },
-         {
-             "type": "function",
-             "function": {
-                 "name": "rewrite",
-                 "description": "Rewrite a given text for improved clarity",
-                 "parameters": {
-                     "type": "object",
-                     "properties": {
-                         "text": {
-                             "type": "string",
-                             "description": "The input text to rewrite",
-                         }
-                     },
-                     "required": ["text"],
-                 },
-             },
-         },
-     ]
-     model_name = client.models.list().data[0].id
-     response = client.chat.completions.create(
-         model=model_name,
-         messages=messages,
-         temperature=1.0,
-         max_tokens=1024,
-         tools=tools,
-         tool_choice="auto",
-     )
-     tool_calls = response.choices[0].message.tool_calls
-
-     # Execute each requested tool, keeping results aligned with the calls.
-     results = []
-     for tool_call in tool_calls:
-         function_name = tool_call.function.name
-         function_args = json.loads(tool_call.function.arguments)
-         if function_name == "my_calculator":
-             results.append(my_calculator(**function_args))
-         elif function_name == "rewrite":
-             results.append(rewrite(**function_args))
-     messages.append({"role": "assistant", "tool_calls": tool_calls})
-     for tool_call, result in zip(tool_calls, results):
-         messages.append(
-             {
-                 "role": "tool",
-                 "tool_call_id": tool_call.id,
-                 "name": tool_call.function.name,
-                 "content": result,
-             }
-         )
-     response = client.chat.completions.create(
-         model=model_name,
-         messages=messages,
-         temperature=1.0,
-         max_tokens=1024,
-     )
-     print(response.choices[0].message.content)
-
-
- if __name__ == "__main__":
-     simple_tool_call(client)
- ```
379
 
380
  ---
381
-
382
- ## 6. License
383
-
384
- Both the code repository and the model weights are released under the [Modified MIT License](LICENSE).
 
1
  ---
2
+ base_model: []
 
 
 
3
  library_name: transformers
4
+ tags:
5
+ - mergekit
6
+ - merge
7
 
8
  ---
9
+ # c362_step50_ta05
10
+
11
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
12
+
13
+ ## Merge Details
14
+ ### Merge Method
15
+
16
+ This model was merged using the [Linear DARE](https://arxiv.org/abs/2311.03099) merge method, with /root/myCodeLab/host/downloads/models/40Bra as the base model.
17
+
18
+ ### Models Merged
19
+
20
+ The following models were included in the merge:
21
+ * /root/myCodeLab/host/verl/ckpts/40bra_k8s_single_domain/40bra_k8s_16node_sd_c362_20260327_205644_unknown/global_step_50/actor/huggingface
22
+
23
+ ### Configuration
24
+
25
+ The following YAML configuration was used to produce this model:
26
+
27
+ ```yaml
28
+ base_model: /root/myCodeLab/host/downloads/models/40Bra
29
+ dtype: float32
30
+ merge_method: dare_linear
31
+ modules:
32
+ default:
33
+ slices:
34
+ - sources:
35
+ - layer_range: [0, 40]
36
+ model: /root/myCodeLab/host/downloads/models/40Bra
37
+ - layer_range: [0, 40]
38
+ model: /root/myCodeLab/host/verl/ckpts/40bra_k8s_single_domain/40bra_k8s_16node_sd_c362_20260327_205644_unknown/global_step_50/actor/huggingface
39
+ parameters:
40
+ density: 1.0
41
+ weight:
42
+ - filter: .mlp.gate.
43
+ value: 0.0
44
+ - value: 0.5
45
+ - sources:
46
+ - layer_range: [40, 41]
47
+ model: /root/myCodeLab/host/downloads/models/40Bra
48
+ out_dtype: bfloat16
49
+ ```
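For intuition, the `dare_linear` update above can be sketched as follows. This is a minimal illustration in which a parameter tensor stands in as a list of floats; it is not mergekit's implementation. With `density: 1.0` no deltas are dropped, so `weight: 0.5` reduces to a plain average of the base and fine-tuned weights, and the `.mlp.gate.` filter (weight 0.0) simply keeps those tensors at their base values.

```python
import random


def dare_linear(base, tuned, density=1.0, weight=0.5, rng=None):
    """Sketch of a DARE-linear merge of one tensor (lists of floats).

    Deltas (tuned - base) are randomly dropped with probability 1 - density,
    survivors are rescaled by 1/density, then added back scaled by `weight`.
    """
    rng = rng or random.Random(0)
    merged = []
    for b, t in zip(base, tuned):
        delta = t - b
        if rng.random() < density:  # keep (and rescale) this delta
            merged.append(b + weight * delta / density)
        else:  # drop this delta entirely
            merged.append(b)
    return merged


# With density=1.0 and weight=0.5 (as in the config), this is a plain average:
print(dare_linear([0.0, 2.0], [2.0, 4.0]))  # [1.0, 3.0]
```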
chat_template.jinja CHANGED
@@ -100,4 +100,4 @@
100
 
101
  {%- if add_generation_prompt -%}
102
  {{ '<|Assistant|>' }}{{ '<|end_of_thought|>' }}
103
- {%- endif -%}
 
100
 
101
  {%- if add_generation_prompt -%}
102
  {{ '<|Assistant|>' }}{{ '<|end_of_thought|>' }}
103
+ {%- endif -%}
config.json CHANGED
@@ -10,9 +10,11 @@
10
  "AutoModelForCausalLM": "modeling_deepseek.DeepseekV3ForCausalLM"
11
  },
12
  "bos_token_id": 0,
 
13
  "eos_token_id": 1,
14
  "ep_size": 1,
15
  "first_k_dense_replace": 1,
 
16
  "hidden_act": "silu",
17
  "hidden_size": 2048,
18
  "initializer_range": 0.02,
@@ -31,18 +33,21 @@
31
  "num_hidden_layers": 40,
32
  "num_key_value_heads": 32,
33
  "num_nextn_predict_layers": 1,
 
34
  "q_lora_rank": 1536,
 
35
  "qk_nope_head_dim": 128,
36
  "qk_rope_head_dim": 64,
37
  "rms_norm_eps": 1e-06,
 
 
38
  "rope_theta": 32000000,
39
  "routed_scaling_factor": 2.5,
40
  "scoring_func": "sigmoid",
41
  "tie_word_embeddings": false,
42
  "topk_group": 1,
43
  "topk_method": "noaux_tc",
44
- "torch_dtype": "bfloat16",
45
- "transformers_version": "4.44.2",
46
  "use_cache": true,
47
  "v_head_dim": 128,
48
  "vocab_size": 129280
 
10
  "AutoModelForCausalLM": "modeling_deepseek.DeepseekV3ForCausalLM"
11
  },
12
  "bos_token_id": 0,
13
+ "dtype": "bfloat16",
14
  "eos_token_id": 1,
15
  "ep_size": 1,
16
  "first_k_dense_replace": 1,
17
+ "head_dim": 64,
18
  "hidden_act": "silu",
19
  "hidden_size": 2048,
20
  "initializer_range": 0.02,
 
33
  "num_hidden_layers": 40,
34
  "num_key_value_heads": 32,
35
  "num_nextn_predict_layers": 1,
36
+ "pretraining_tp": 1,
37
  "q_lora_rank": 1536,
38
+ "qk_head_dim": 192,
39
  "qk_nope_head_dim": 128,
40
  "qk_rope_head_dim": 64,
41
  "rms_norm_eps": 1e-06,
42
+ "rope_interleave": true,
43
+ "rope_scaling": null,
44
  "rope_theta": 32000000,
45
  "routed_scaling_factor": 2.5,
46
  "scoring_func": "sigmoid",
47
  "tie_word_embeddings": false,
48
  "topk_group": 1,
49
  "topk_method": "noaux_tc",
50
+ "transformers_version": "4.57.3",
 
51
  "use_cache": true,
52
  "v_head_dim": 128,
53
  "vocab_size": 129280
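As a sanity check on the config changes above, the newly added `qk_head_dim` is consistent with the per-head dimensions already present. This is simple arithmetic over fields copied from the diff, not an official validation step:

```python
# Fields taken from the updated config.json above.
qk_nope_head_dim = 128
qk_rope_head_dim = 64
qk_head_dim = 192

# MLA query/key head dim = non-positional part + RoPE part.
assert qk_head_dim == qk_nope_head_dim + qk_rope_head_dim
print(qk_nope_head_dim + qk_rope_head_dim)  # 192
```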
mergekit_config.yml ADDED
@@ -0,0 +1,21 @@
1
+ base_model: /root/myCodeLab/host/downloads/models/40Bra
2
+ dtype: float32
3
+ merge_method: dare_linear
4
+ modules:
5
+ default:
6
+ slices:
7
+ - sources:
8
+ - layer_range: [0, 40]
9
+ model: /root/myCodeLab/host/downloads/models/40Bra
10
+ - layer_range: [0, 40]
11
+ model: /root/myCodeLab/host/verl/ckpts/40bra_k8s_single_domain/40bra_k8s_16node_sd_c362_20260327_205644_unknown/global_step_50/actor/huggingface
12
+ parameters:
13
+ density: 1.0
14
+ weight:
15
+ - filter: .mlp.gate.
16
+ value: 0.0
17
+ - value: 0.5
18
+ - sources:
19
+ - layer_range: [40, 41]
20
+ model: /root/myCodeLab/host/downloads/models/40Bra
21
+ out_dtype: bfloat16
model-1-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:ebd70bcbb0406a3ac84a410bbcb2084663eaff71777ff849b88c94a8f8e573fe
3
  size 140785016
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d4ff3a9595c500b44c308dcb352de70289268867c1cb95398c991b6bda655288
3
  size 140785016
model-10-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:4c7b0f180f1554d3f60a93f174c0d213ad21b5d8eafc5ab203391eea018d985b
3
- size 2479205264
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8096a32b85bba11dc67645fe3041e4a645b8e4f0a24395ff31de9499da1ef298
3
+ size 2479204768
model-11-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:14f08cc780cb9f13942ddac1e573e07ce29e901d3cdc1d841011f2f4d4b15d40
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:880cefe33cd0a1b2a110c31607ead991f7573c8639e6801cf4a31d4daa46600a
3
+ size 2479205552
model-12-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:494e82e9f99e062d72f0966ed459690accdf793853e9734452f6a6a09e24e13b
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:affdadcbfaac3364e4dd7a63274c9076f101375eb23225a01cebd22746e2b184
3
+ size 2479205552
model-13-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:5c2697b5a70d52b52d60a23019b5129c81826ac49b6ec34ddf83020fc1ffa688
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3f07893d0fb8981886ec8b10be24f262361548b634288fdfd4f4051b52326342
3
+ size 2479205552
model-14-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:7d835adf4c7ca8ef751936453f0db435a012194d9c77861575e25b200f6c5bc0
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2b5efe61c515172e2cb5266f912bcec2c3a2c03d51e1c0c25efe868c5f053fca
3
+ size 2479205552
model-15-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:8e16104b8a0d1ec2cad06bf04ace8e15f0cb0224c0990dc752e7371094d63e36
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4c19c4f37e88bd0bd89a5508e70e4fafa6d4181bf025136936448e514a647346
3
+ size 2479205552
model-16-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:e0cb41f206e6cbe78c1ec3ce09de8449cbe606d4ebb77e45c35677454633b073
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7a3360e97b75b40ec35001b28ecd2d78c63d613d2f0edf7dc17f55b55a9679d4
3
+ size 2479205552
model-17-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:8179fa2fd5f3f9450c7820eb28dd63cb830495b02ff246e45791e619af75027e
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5f8f695bb1d77d99ad3065101b88848ebbd266b2f078185b400e35ffc54d62b3
3
+ size 2479205552
model-18-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:fd4b60bfa3641938bd8494c504a32193ef6f9eb42b5b4d4c5284782e8e24ab24
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3ae0bd94998fedda4bc3a39607b18c19591c94f657435ed7fe5b3b547ac2407e
3
+ size 2479205552
model-19-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:1cea9127593cd0f693721ad5c7f8a1c183878238d012f98b8b2eb5933b5e423e
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:02eb943552fa1d1d31264da582a46f60afcf6afeef5e73359ad16e5487bf538b
3
+ size 2479205552
model-2-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:3760de2dd7bd538e71b618298b336cfff46e79aae5d6688cb1fcb8ab7ecdf725
3
- size 2479205264
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:465e3b5a60dc5196b0838dfbc2722d8d37e5d47867aecd8fb9faec0c8abafe68
3
+ size 2479204768
model-20-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:32c9d913ad38fffee7a633c458c79e5f16dd3cf96b31ab0827745bcca40b5f32
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d97a05f96c7c68c5c7484c712297a893096a8288254984b4f89ec3f44737a9c3
3
+ size 2479205552
model-21-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:ef6673a8b0a10657e2d0bcd32929c4458149b5dca2cfcfd47834c4f003a194ce
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca6c2d58a25c901fef688f21b46c1d10efd96300ccff833bdd173da7e5f07f6d
3
+ size 2479205552
model-22-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:8f4c26ce9345eb72793fbe7635080c232d74339645a6a10233cb7c3b7717c960
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:77d715e55c5fdd77903d1221e4a443c044aa0261cdefefa44b8483649ace2410
3
+ size 2479205552
model-23-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:8f8b312280ea0902c49fd27f12711d4cb34266b8b8de1ad28fa4ba51871479d3
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c412898f5bf7edae9012022990e216e93c12173927d997ec517ebf1050f7aa31
3
+ size 2479205552
model-24-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:bae31fdcf14f47b289ea101d25c39bbd3971788ccd3dde4d5012d79135f58ded
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:992bad2935e2ae6f78edb01857f36b8a662aa96f21f37c5c5b0c34af1b363636
3
+ size 2479205552
model-25-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:9159819fce18e9948e2923487d56260dfef56b8f78a0800a1992e1c5e6b2a04c
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:91249980356b8748e863e725939f32f532ea7566e7cf22abbe51cdc40e69a65c
3
+ size 2479205552
model-26-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:6c21c2331132706da9eaea2d48f9c571a37ae0d08f2b854585edc34f7687f837
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:edcd51b69499278d63a20f4a923cf2649bf13ed387dc11e9d7a8e832a3d467a0
3
+ size 2479205552
model-27-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:bbe6f39372236e7fa059a8df1c5bac3241074ff0b5665aefd5987ceceaefd5df
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3e0d6460b021bb4c72d67c948e849880817cd65e5db11a4a35464ef5cf831f47
3
+ size 2479205552
model-28-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:f7ee0362080f92100521a4097e7a4e9bf34cf5d562e79670cae734927fa227fa
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aa4cb2ea2d19e607d00da452ccc43b81230c01cae6676f17623a2fcd42efcc87
3
+ size 2479205552
model-29-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:70cb8b677597afb6041623e15d70be7e39e2790e0904b1eb200282e06df722c2
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d6bc2b82240380b2844ff5a20b34c3ed0ed8bdd8732e18072b2750ee60e97466
3
+ size 2479205552
model-3-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:ff03acde235f71818ca55862cdadc4e0e4c903cc444ddb4ff507d87ce2a485fd
3
- size 2479205264
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c57dec86e995872b1a560a9269c3fec464bb0ace701eecbf169d26109a9a7ec1
3
+ size 2479204768
model-30-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:66dab417ceacffd83d835b7efa3170e181b8cb052cfa1e663ea40e16039c02ca
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2c0d93e33fa2a56e590b7c31aa08fcdaafbf06d7c38820599e5ea3eb6b85f80a
3
+ size 2479205552
model-31-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:a8e9400d587dce7c550ab535db78e2de4199ac4ecb94253ee1c21de43c3bc075
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f3fef3bf7fab89f904c3215fc2e1dfbf6921c1eaaae87111099f1405f8e3c5cf
3
+ size 2479205552
model-32-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:0fefd56e0b80b6e5925299cb3043e9395be1f8890202fd77f72fd80d0b388e43
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:86f69cad4c5713a826d0ae126f024dace6dbd9a05f02a0777d1a91c76ecd47e0
3
+ size 2479205552
model-33-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:92e15e004bb68a34c25e93960ec02e36181991adec25743d33f2de6ae964c2ea
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2b71cd4eed73a24147f38ecf5fd0c10a28e2c43eb1824739d75e3ba20ad14eb6
3
+ size 2479205552
model-34-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:41e3eec85cb83bfbb30271454ad1faf9172f55204a6ec40bd5aef9f6d2281d64
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1bc48c177e7228314326f07ac6f6999e780c3a5df157b1df2625d194781a62a5
3
+ size 2479205552
model-35-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:1ea0b32f02196c668878b3b7a6e8fe588483527a048e143ae8cceee0110a5140
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b61c4624b49d88254f798c5ebca2bb19e083613ac7fc301a3ec6e01eaaeb709b
3
+ size 2479205552
model-36-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:012a873cc36f35ba111b27f256ccd7e3b0d13e4ef5900b36b2c5c322fb68e805
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:830787087384987c57ab0d3a8c0f3f913396915c02a64fc4799a5c13ac8eb2c2
3
+ size 2479205552
model-37-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:91fb83607d81219803a6dbd9a2004719c3d49989287c8450aed34124f7e44574
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:141767d0b815eda3264cb783082b607bbfa0dfcb86ceaff424f5a54a567574d3
3
+ size 2479205552
model-38-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:fc42535f05b94b2060dba56f2c497a73bee85ccf53cc53ca415dd0bca6a5ff01
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a58d307ba5dfc6422e26ae3bfe87b6ad1324a68c25e3fc1237f1b74c67ba551c
3
+ size 2479205552
model-39-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:e0a60f258dee34202ba221ae076ee0a44ecca6366c6b7ca903ed339f6a33d391
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ea0e3899c69f7c4a1d17dc7333da311c72462258fef37515d562e646bad018c4
3
+ size 2479205552
model-4-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:1581b89e67c704979da46f27258d091af31312cd623a6836b3565bf3e27fef2b
3
- size 2479205264
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c7255b501667852f85e93c263a17829eac1fdb22e6ffd786a348b0e843d66aa7
3
+ size 2479204768
model-40-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:cbccb4f7be82a2ee4bd5d6fea515b1ae337308199f2140a5908d502d97c86ee9
3
- size 2479206048
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d2cedb85f49ce19e3e1e6576614399e85a093643b1d906f0c6cae46e185c0702
3
+ size 2479205552
model-5-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:eec85b4df90822d3cbeceb32710b02dcef607e43dc388713637276345892cab0
3
- size 2479205264
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:17c9d61c739221cf01880c787238f2edd9a4cbad5de3172c92788bb4c29f07ee
3
+ size 2479204768
model-6-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:e812075888d32153bdcf01722fb0140bfd50165056b4e9d11823d10f668b5cef
3
- size 2479205264
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b1448a397d403279f66d069270407e2a1d0831fe1ab311f7ec00c93cb4368d9b
3
+ size 2479204768
model-7-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:3a1beb221b1d03b21eb4338a053ececdd193ebe558b19499d65a8eef9a0c4016
3
- size 2479205264
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b6b385d20630cf0b241630984e08777dfbb5913c5e089f0537390eaa6afee880
3
+ size 2479204768
model-8-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:fe3eb4fc70a3a8fa61fbf51b05b834c952e87f1ea5cebce16b47e7e088c4ee72
3
- size 2479205264
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ee65a49d6ae2adafbc90c4ce3defeffa4124db76c8dd37ef8ed29acbf305f688
3
+ size 2479204768
model-9-of-40.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:e89d4e726a1b80ac5000be27c1faab5e1624ce425bdbf2425f288a7a50e03405
3
- size 2479205264
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7e27db86d323e90f81d526086ae89249f8a67545e1bbc1bc332d8da7d2735e8d
3
+ size 2479204768
model-non-layer.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:358810bbba6e784b3be7462f7398fed2ba159a0edcec4f586aa72e877f4187ca
3
  size 1059066184
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b992b0a3e3ab06a9490d364bf942df3bcf69874dcb9f940e935f674672f09cd
3
  size 1059066184