KrisSimon committed on
Commit 012bd30 · verified · 1 Parent(s): f47a066

Upload ARO Coder 4-bit (dpo)
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
.source_model ADDED
@@ -0,0 +1 @@
/Users/kris/Projects/ARO/ARO-Train/Train/models/dpo/fused
README.md ADDED
@@ -0,0 +1,118 @@
---
language: en
license: mit
tags:
- aro
- code-generation
- dsl
- mlx
- 4-bit
- lora
- fine-tuned
base_model: mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit
pipeline_tag: text-generation
library_name: mlx
---

# ARO Coder

A fine-tuned code generation model specialised in the **ARO** (Action Result Object) programming language.

ARO is a domain-specific language where every statement follows the pattern:
`Verb the <Result> preposition [the] <Object>`.

| | |
|---|---|
| **Base model** | [mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen3-Coder-30B-A3B-Instruct-4bit) |
| **Quantization** | 4-bit (MLX) |
| **Language** | ARO |
| **Training samples** | 861 |
| **Syntax pass rate** | 47% |
| **Source label** | dpo |

## Links

- **Website**: [arolang.github.io/aro](https://arolang.github.io/aro/)
- **GitHub**: [github.com/arolang/aro](https://github.com/arolang/aro)
- **Documentation**: [Wiki](https://github.com/arolang/aro/wiki)
- **Language Guide (PDF)**: [Download](https://github.com/arolang/aro/releases/latest/download/ARO-Language-Guide.pdf)
- **Discussions**: [GitHub Discussions](https://github.com/arolang/aro/discussions)

## Quick Start

### MLX (Apple Silicon)

```python
from mlx_lm import load, generate

model, tokenizer = load("ARO-Lang/aro-coder-4bit")

messages = [
    {"role": "system", "content": "You are an expert ARO programmer."},
    {"role": "user", "content": "Write an ARO feature set that retrieves a user by ID and returns an OK response."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
print(response)
```

### MLX Server (OpenAI-compatible API)

```bash
python -m mlx_lm.server --model ARO-Lang/aro-coder-4bit --port 8080

curl http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "aro-coder", "messages": [{"role": "user", "content": "Write hello world in ARO"}]}'
```

### Ollama

```bash
ollama run aro-coder
```

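The `aro-coder` name above assumes the model has already been registered with Ollama. A minimal, hypothetical `Modelfile` for doing so might look like the following; the GGUF filename is an assumption (Ollama consumes GGUF weights, not the MLX safetensors in this repository, so a conversion step is implied), and the parameter values simply mirror `generation_config.json`:

```
# Hypothetical Modelfile — the GGUF path is illustrative, not part of this repo
FROM ./aro-coder-q4.gguf
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER top_k 20
PARAMETER repeat_penalty 1.05
SYSTEM You are an expert ARO programmer.
```

Registered with `ollama create aro-coder -f Modelfile`, after which `ollama run aro-coder` works as shown.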
## Example Output

**Prompt:** *Write an ARO Application-Start that starts an HTTP server.*

```aro
(Application-Start: My API) {
    Log "Starting server..." to the <console>.
    Start the <http-server> with <contract>.
    Keepalive the <application> for the <events>.
    Return an <OK: status> for the <startup>.
}
```

## What is ARO?

ARO is a DSL for expressing business features as Action-Result-Object statements.
Every program is a directory of `.aro` files with event-driven feature sets:

```aro
(getUser: User API) {
    Extract the <id> from the <pathParameters: id>.
    Retrieve the <user> from the <user-repository> where id = <id>.
    Return an <OK: status> with <user>.
}
```

Key features:
- **Contract-first HTTP** — routes defined in `openapi.yaml`, feature sets match `operationId`
- **Event-driven** — feature sets triggered by events, not direct calls
- **Immutable bindings** — every transformation produces a new name
- **Happy-path only** — no error handling code; the runtime manages errors

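To illustrate the contract-first rule, here is a hypothetical `openapi.yaml` fragment whose `operationId` matches the `getUser` feature set above; the path and schema are invented for this example and are not taken from the ARO repository:

```yaml
# Illustrative only — the route and schema are assumptions
paths:
  /users/{id}:
    get:
      operationId: getUser        # must match the feature set name
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested user
```

The runtime routes an incoming `GET /users/{id}` to whichever feature set is named `getUser`, so no routing code appears in the `.aro` files themselves.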
## Training

This model was trained with the ARO training pipeline:

1. **Corpus collection** — 861 samples from Examples, Book, Wiki, Proposals, and real-world ARO applications
2. **Supervised fine-tuning** — LoRA on all code generation, debugging, Q&A, and explanation tasks
3. **DPO preference training** — using `aro check` validation to build chosen/rejected pairs
4. **Iterative self-improvement** — multiple rounds of generate-validate-retrain

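The chosen/rejected construction in step 3 can be sketched as follows. This is a minimal sketch, not the pipeline's actual code: `is_valid` stands in for an `aro check` invocation, and pairing every valid sample with every invalid one is an assumed strategy.

```python
from itertools import product
from typing import Callable, List, Tuple


def build_preference_pairs(
    candidates: List[str],
    is_valid: Callable[[str], bool],
) -> List[Tuple[str, str]]:
    """Split candidate generations for one prompt by a validity check
    (e.g. `aro check`), then pair each valid program (chosen) with each
    invalid one (rejected) to form DPO preference pairs."""
    valid = [c for c in candidates if is_valid(c)]
    invalid = [c for c in candidates if not is_valid(c)]
    return [(chosen, rejected) for chosen, rejected in product(valid, invalid)]
```

In the real pipeline the predicate would shell out to the `aro check` CLI and pairs would be serialized alongside the shared prompt.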
## License

This model and the ARO language are open source under the [MIT License](https://github.com/arolang/aro/blob/main/LICENSE).
chat_template.jinja ADDED
@@ -0,0 +1,131 @@
{% macro render_item_list(item_list, tag_name='required') %}
{%- if item_list is defined and item_list is iterable and item_list | length > 0 %}
{%- if tag_name %}{{- '\n<' ~ tag_name ~ '>' -}}{% endif %}
{{- '[' }}
{%- for item in item_list -%}
{%- if loop.index > 1 %}{{- ", "}}{% endif -%}
{%- if item is string -%}
{{ "`" ~ item ~ "`" }}
{%- else -%}
{{ item }}
{%- endif -%}
{%- endfor -%}
{{- ']' }}
{%- if tag_name %}{{- '</' ~ tag_name ~ '>' -}}{% endif %}
{%- endif %}
{% endmacro %}

{%- if messages[0]["role"] == "system" %}
{%- set system_message = messages[0]["content"] %}
{%- set loop_messages = messages[1:] %}
{%- else %}
{%- set loop_messages = messages %}
{%- endif %}

{%- if not tools is defined %}
{%- set tools = [] %}
{%- endif %}

{%- if system_message is defined %}
{{- "<|im_start|>system\n" + system_message }}
{%- else %}
{%- if tools is iterable and tools | length > 0 %}
{{- "<|im_start|>system\nYou are Qwen, a helpful AI assistant that can interact with a computer to solve tasks." }}
{%- endif %}
{%- endif %}
{%- if tools is iterable and tools | length > 0 %}
{{- "\n\nYou have access to the following functions:\n\n" }}
{{- "<tools>" }}
{%- for tool in tools %}
{%- if tool.function is defined %}
{%- set tool = tool.function %}
{%- endif %}
{{- "\n<function>\n<name>" ~ tool.name ~ "</name>" }}
{{- '\n<description>' ~ (tool.description | trim) ~ '</description>' }}
{{- '\n<parameters>' }}
{%- for param_name, param_fields in tool.parameters.properties|items %}
{{- '\n<parameter>' }}
{{- '\n<name>' ~ param_name ~ '</name>' }}
{%- if param_fields.type is defined %}
{{- '\n<type>' ~ (param_fields.type | string) ~ '</type>' }}
{%- endif %}
{%- if param_fields.description is defined %}
{{- '\n<description>' ~ (param_fields.description | trim) ~ '</description>' }}
{%- endif %}
{{- render_item_list(param_fields.enum, 'enum') }}
{%- set handled_keys = ['type', 'description', 'enum', 'required'] %}
{%- for json_key in param_fields.keys() | reject("in", handled_keys) %}
{%- set normed_json_key = json_key | replace("-", "_") | replace(" ", "_") | replace("$", "") %}
{%- if param_fields[json_key] is mapping %}
{{- '\n<' ~ normed_json_key ~ '>' ~ (param_fields[json_key] | tojson | safe) ~ '</' ~ normed_json_key ~ '>' }}
{%- else %}
{{- '\n<' ~ normed_json_key ~ '>' ~ (param_fields[json_key] | string) ~ '</' ~ normed_json_key ~ '>' }}
{%- endif %}
{%- endfor %}
{{- render_item_list(param_fields.required, 'required') }}
{{- '\n</parameter>' }}
{%- endfor %}
{{- render_item_list(tool.parameters.required, 'required') }}
{{- '\n</parameters>' }}
{%- if tool.return is defined %}
{%- if tool.return is mapping %}
{{- '\n<return>' ~ (tool.return | tojson | safe) ~ '</return>' }}
{%- else %}
{{- '\n<return>' ~ (tool.return | string) ~ '</return>' }}
{%- endif %}
{%- endif %}
{{- '\n</function>' }}
{%- endfor %}
{{- "\n</tools>" }}
{{- '\n\nIf you choose to call a function ONLY reply in the following format with NO suffix:\n\n<tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>\nvalue_1\n</parameter>\n<parameter=example_parameter_2>\nThis is the value for the second parameter\nthat can span\nmultiple lines\n</parameter>\n</function>\n</tool_call>\n\n<IMPORTANT>\nReminder:\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\n- Required parameters MUST be specified\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\n</IMPORTANT>' }}
{%- endif %}
{%- if system_message is defined %}
{{- '<|im_end|>\n' }}
{%- else %}
{%- if tools is iterable and tools | length > 0 %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in loop_messages %}
{%- if message.role == "assistant" and message.tool_calls is defined and message.tool_calls is iterable and message.tool_calls | length > 0 %}
{{- '<|im_start|>' + message.role }}
{%- if message.content is defined and message.content is string and message.content | trim | length > 0 %}
{{- '\n' + message.content | trim + '\n' }}
{%- endif %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
{%- if tool_call.arguments is defined %}
{%- for args_name, args_value in tool_call.arguments|items %}
{{- '<parameter=' + args_name + '>\n' }}
{%- set args_value = args_value if args_value is string else args_value | string %}
{{- args_value }}
{{- '\n</parameter>\n' }}
{%- endfor %}
{%- endif %}
{{- '</function>\n</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "user" or message.role == "system" or message.role == "assistant" %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "tool" %}
{%- if loop.previtem and loop.previtem.role != "tool" %}
{{- '<|im_start|>user\n' }}
{%- endif %}
{{- '<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>\n' }}
{%- if not loop.last and loop.nextitem.role != "tool" %}
{{- '<|im_end|>\n' }}
{%- elif loop.last %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>\n' }}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
config.json ADDED
@@ -0,0 +1,434 @@
{
  "architectures": [
    "Qwen3MoeForCausalLM"
  ],
  "attention_dropout": 0.0,
  "decoder_sparse_step": 1,
  "eos_token_id": [
    151645,
    151643
  ],
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 5472,
  "max_position_embeddings": 262144,
  "max_window_layers": 28,
  "mlp_only_layers": [],
  "model_type": "qwen3_moe",
  "moe_intermediate_size": 768,
  "norm_topk_prob": true,
  "num_attention_heads": 32,
  "num_experts": 128,
  "num_experts_per_tok": 8,
  "num_hidden_layers": 48,
  "num_key_value_heads": 4,
  "output_router_logits": false,
  "qkv_bias": false,
  "quantization": {
    "group_size": 64,
    "bits": 4,
    "model.layers.0.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.1.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.2.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.3.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.4.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.5.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.6.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.7.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.8.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.9.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.10.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.11.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.12.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.13.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.14.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.15.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.16.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.17.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.18.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.19.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.20.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.21.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.22.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.23.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.24.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.25.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.26.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.27.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.28.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.29.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.30.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.31.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.32.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.33.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.34.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.35.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.36.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.37.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.38.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.39.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.40.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.41.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.42.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.43.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.44.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.45.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.46.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.47.mlp.gate": { "group_size": 64, "bits": 8 }
  },
  "quantization_config": {
    "group_size": 64,
    "bits": 4,
    "model.layers.0.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.1.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.2.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.3.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.4.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.5.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.6.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.7.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.8.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.9.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.10.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.11.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.12.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.13.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.14.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.15.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.16.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.17.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.18.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.19.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.20.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.21.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.22.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.23.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.24.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.25.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.26.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.27.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.28.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.29.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.30.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.31.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.32.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.33.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.34.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.35.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.36.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.37.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.38.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.39.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.40.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.41.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.42.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.43.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.44.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.45.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.46.mlp.gate": { "group_size": 64, "bits": 8 },
    "model.layers.47.mlp.gate": { "group_size": 64, "bits": 8 }
  },
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 10000000,
  "router_aux_loss_coef": 0.0,
  "shared_expert_intermediate_size": 0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.52.3",
  "use_cache": true,
  "use_qk_norm": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
generation_config.json ADDED
@@ -0,0 +1,12 @@
{
  "pad_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "repetition_penalty": 1.05,
  "temperature": 0.7,
  "top_p": 0.8,
  "top_k": 20
}
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25c6d0a62e02224605774ce2e9dc9a8c95910cde33557b938e55a368da1b5212
size 5321473414
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7007ce495b063155207c3e8b4038771d0d751655274cb6b76c42b8d9c0624412
size 5366644780
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7aac9d433daaefb64706bfdc80f7632c300af2ff53a1ae6429fbf7804119b42d
size 5276887419
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:836ca454dbcc7675ebc97e1fe0ef3422a02858a899db5ac11813c5cc5d2c006e
size 1216066381
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
qwen3coder_tool_parser.py ADDED
@@ -0,0 +1,675 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # SPDX-License-Identifier: Apache-2.0
2
+
3
+ import json
4
+ import re
5
+ import uuid
6
+ from collections.abc import Sequence
7
+ from typing import Union, Optional, Any, List, Dict
8
+ from enum import Enum
9
+
10
+ from vllm.entrypoints.openai.protocol import (
11
+ ChatCompletionRequest,
12
+ ChatCompletionToolsParam,
13
+ DeltaMessage,
14
+ DeltaToolCall,
15
+ DeltaFunctionCall,
16
+ ExtractedToolCallInformation,
17
+ FunctionCall,
18
+ ToolCall,
19
+ )
20
+ from vllm.entrypoints.openai.tool_parsers.abstract_tool_parser import (
21
+ ToolParser,
22
+ ToolParserManager,
23
+ )
24
+ from vllm.logger import init_logger
25
+ from vllm.transformers_utils.tokenizer import AnyTokenizer
26
+
27
+ logger = init_logger(__name__)
28
+
29
+
30
+ @ToolParserManager.register_module("qwen3_xml")
31
+ class Qwen3XMLToolParser(ToolParser):
32
+ def __init__(self, tokenizer: AnyTokenizer):
33
+ super().__init__(tokenizer)
34
+
35
+ self.current_tool_name_sent: bool = False
36
+ self.prev_tool_call_arr: list[dict] = []
37
+ self.current_tool_id: int = -1
38
+ self.streamed_args_for_tool: list[str] = []
39
+
40
+ # Sentinel tokens for streaming mode
41
+ self.tool_call_start_token: str = "<tool_call>"
42
+ self.tool_call_end_token: str = "</tool_call>"
43
+ self.tool_call_prefix: str = "<function="
44
+ self.function_end_token: str = "</function>"
45
+ self.parameter_prefix: str = "<parameter="
46
+ self.parameter_end_token: str = "</parameter>"
47
+ self.is_tool_call_started: bool = False
48
+ self.failed_count: int = 0
49
+
50
+ # Enhanced streaming state - reset for each new message
51
+ self._reset_streaming_state()
52
+
53
+ # Regex patterns
54
+ self.tool_call_complete_regex = re.compile(
55
+ r"<tool_call>(.*?)</tool_call>", re.DOTALL
56
+ )
57
+ self.tool_call_regex = re.compile(
58
+ r"<tool_call>(.*?)</tool_call>|<tool_call>(.*?)$", re.DOTALL
59
+ )
60
+ self.tool_call_function_regex = re.compile(
61
+ r"<function=(.*?)</function>|<function=(.*)$", re.DOTALL
62
+ )
63
+ self.tool_call_parameter_regex = re.compile(
64
+ r"<parameter=(.*?)</parameter>|<parameter=(.*?)$", re.DOTALL
65
+ )
66
+
67
+ if not self.model_tokenizer:
68
+ raise ValueError(
69
+ "The model tokenizer must be passed to the ToolParser "
70
+ "constructor during construction."
71
+ )
72
+
73
+ self.tool_call_start_token_id = self.vocab.get(self.tool_call_start_token)
74
+ self.tool_call_end_token_id = self.vocab.get(self.tool_call_end_token)
75
+
76
+ if self.tool_call_start_token_id is None or self.tool_call_end_token_id is None:
77
+ raise RuntimeError(
78
+ "Qwen3 XML Tool parser could not locate tool call start/end "
79
+ "tokens in the tokenizer!"
80
+ )
81
+
82
+ logger.info(f"vLLM Successfully import tool parser {self.__class__.__name__} !")
83
+
84
+ def _generate_tool_call_id(self) -> str:
85
+ """Generate a unique tool call ID."""
86
+ return f"call_{uuid.uuid4().hex[:24]}"
87
+
88
+ def _reset_streaming_state(self):
89
+ """Reset all streaming state."""
90
+ self.current_tool_index = 0
91
+ self.is_tool_call_started = False
92
+ self.header_sent = False
93
+ self.current_tool_id = None
94
+ self.current_function_name = None
95
+ self.current_param_name = None
96
+ self.current_param_value = ""
97
+ self.param_count = 0
98
+ self.in_param = False
99
+ self.in_function = False
100
+ self.accumulated_text = ""
101
+ self.json_started = False
102
+ self.json_closed = False
103
+
104
+ def _parse_xml_function_call(
105
+ self, function_call_str: str, tools: Optional[list[ChatCompletionToolsParam]]
106
+ ) -> Optional[ToolCall]:
107
+ def get_arguments_config(func_name: str) -> dict:
108
+ if tools is None:
109
+ return {}
110
+ for config in tools:
111
+ if not hasattr(config, "type") or not (
112
+ hasattr(config, "function") and hasattr(config.function, "name")
113
+ ):
114
+ continue
115
+ if config.type == "function" and config.function.name == func_name:
116
+ if not hasattr(config.function, "parameters"):
117
+ return {}
118
+ params = config.function.parameters
119
+ if isinstance(params, dict) and "properties" in params:
120
+ return params["properties"]
121
+ elif isinstance(params, dict):
122
+ return params
123
+ else:
124
+ return {}
125
+ logger.warning(f"Tool '{func_name}' is not defined in the tools list.")
126
+ return {}
127
+
128
+ def convert_param_value(
129
+ param_value: str, param_name: str, param_config: dict, func_name: str
130
+ ) -> Any:
131
+ # Handle null value for any type
132
+ if param_value.lower() == "null":
133
+ return None
134
+
135
+ if param_name not in param_config:
136
+ if param_config != {}:
137
+ logger.warning(
138
+ f"Parsed parameter '{param_name}' is not defined in the tool "
139
+ f"parameters for tool '{func_name}', directly returning the string value."
140
+ )
141
+ return param_value
142
+
143
+ if (
144
+ isinstance(param_config[param_name], dict)
145
+ and "type" in param_config[param_name]
146
+ ):
147
+ param_type = str(param_config[param_name]["type"]).strip().lower()
148
+ else:
149
+ param_type = "string"
150
+ if param_type in ["string", "str", "text", "varchar", "char", "enum"]:
151
+ return param_value
152
+ elif (
153
+ param_type.startswith("int")
154
+ or param_type.startswith("uint")
155
+ or param_type.startswith("long")
156
+ or param_type.startswith("short")
157
+ or param_type.startswith("unsigned")
158
+ ):
159
+ try:
160
+ param_value = int(param_value)
161
+ except:
162
+ logger.warning(
163
+ f"Parsed value '{param_value}' of parameter '{param_name}' is not an integer in tool "
164
+ f"'{func_name}', degenerating to string."
165
+ )
166
+ return param_value
167
+ elif param_type.startswith("num") or param_type.startswith("float"):
168
+ try:
169
+ float_param_value = float(param_value)
170
+ param_value = float_param_value if float_param_value - int(float_param_value) != 0 else int(float_param_value)
171
+ except:
172
+ logger.warning(
173
+ f"Parsed value '{param_value}' of parameter '{param_name}' is not a float in tool "
174
+ f"'{func_name}', degenerating to string."
175
+ )
176
+ return param_value
177
+ elif param_type in ["boolean", "bool", "binary"]:
178
+ param_value = param_value.lower()
179
+ if param_value not in ["true", "false"]:
180
+ logger.warning(
181
+ f"Parsed value '{param_value}' of parameter '{param_name}' is not a boolean (`true` of `false`) in tool '{func_name}', degenerating to false."
182
+ )
183
+ return param_value == "true"
184
+ else:
185
+ if param_type == "object" or param_type.startswith("dict"):
186
+ try:
187
+ param_value = json.loads(param_value)
188
+ return param_value
189
+ except:
190
+ logger.warning(
191
+ f"Parsed value '{param_value}' of parameter '{param_name}' is not a valid JSON object in tool "
192
+ f"'{func_name}', will try other methods to parse it."
193
+ )
194
+ try:
195
+ param_value = eval(param_value)
196
+ except:
197
+ logger.warning(
198
+ f"Parsed value '{param_value}' of parameter '{param_name}' cannot be converted via Python `eval()` in tool '{func_name}', degenerating to string."
199
+ )
200
+ return param_value
201
+
+         # Extract function name
+         end_index = function_call_str.index(">")
+         function_name = function_call_str[:end_index]
+         param_config = get_arguments_config(function_name)
+         parameters = function_call_str[end_index + 1 :]
+         param_dict = {}
+         for match in self.tool_call_parameter_regex.findall(parameters):
+             match_text = match[0] if match[0] else match[1]
+             idx = match_text.index(">")
+             param_name = match_text[:idx]
+             param_value = str(match_text[idx + 1 :])
+             # Remove prefix and trailing \n
+             if param_value.startswith("\n"):
+                 param_value = param_value[1:]
+             if param_value.endswith("\n"):
+                 param_value = param_value[:-1]
+
+             param_dict[param_name] = convert_param_value(
+                 param_value, param_name, param_config, function_name
+             )
+         return ToolCall(
+             type="function",
+             function=FunctionCall(
+                 name=function_name, arguments=json.dumps(param_dict, ensure_ascii=False)
+             ),
+         )
+
+     def _get_function_calls(self, model_output: str) -> List[str]:
+         # Find all tool calls
+         matched_ranges = self.tool_call_regex.findall(model_output)
+         raw_tool_calls = [
+             match[0] if match[0] else match[1] for match in matched_ranges
+         ]
+
+         # Back-off strategy if no tool_call tags found
+         if len(raw_tool_calls) == 0:
+             raw_tool_calls = [model_output]
+
+         raw_function_calls = []
+         for tool_call in raw_tool_calls:
+             raw_function_calls.extend(self.tool_call_function_regex.findall(tool_call))
+
+         function_calls = [
+             match[0] if match[0] else match[1] for match in raw_function_calls
+         ]
+         return function_calls
+
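For orientation, the markup these regexes dissect looks roughly like the sketch below. The tag names (`<tool_call>`, `<function=`, `<parameter=`) mirror the parser's token constants, but the regexes here are simplified stand-ins for illustration, not the parser's own compiled patterns:

```python
import re

# Simplified stand-ins for the parser's compiled patterns (assumption:
# the real patterns are built from configurable token strings).
tool_call_re = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)
function_re = re.compile(r"<function=(.*?)</function>", re.DOTALL)
parameter_re = re.compile(r"<parameter=(.*?)</parameter>", re.DOTALL)

sample = (
    "Let me check the weather.\n"
    "<tool_call>\n"
    "<function=get_weather>\n"
    "<parameter=city>\nBerlin\n</parameter>\n"
    "</function>\n"
    "</tool_call>"
)

# Pull out the function body, then split "name>rest" on the first '>'
func = function_re.findall(tool_call_re.findall(sample)[0])[0]
name, body = func.split(">", 1)
params = {
    p.split(">", 1)[0]: p.split(">", 1)[1].strip("\n")
    for p in parameter_re.findall(body)
}
# name == "get_weather", params == {"city": "Berlin"}
```

The `split(">", 1)` step is the same move the real code makes with `match_text.index(">")`: everything before the first `>` is the name, everything after is the value with its surrounding newlines trimmed.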
+     def extract_tool_calls(
+         self,
+         model_output: str,
+         request: ChatCompletionRequest,
+     ) -> ExtractedToolCallInformation:
+         # Quick check to avoid unnecessary processing
+         if self.tool_call_prefix not in model_output:
+             return ExtractedToolCallInformation(
+                 tools_called=False, tool_calls=[], content=model_output
+             )
+
+         try:
+             function_calls = self._get_function_calls(model_output)
+             if len(function_calls) == 0:
+                 return ExtractedToolCallInformation(
+                     tools_called=False, tool_calls=[], content=model_output
+                 )
+
+             tool_calls = [
+                 self._parse_xml_function_call(function_call_str, request.tools)
+                 for function_call_str in function_calls
+             ]
+
+             # Populate prev_tool_call_arr for serving layer to set finish_reason
+             self.prev_tool_call_arr.clear()  # Clear previous calls
+             for tool_call in tool_calls:
+                 if tool_call:
+                     self.prev_tool_call_arr.append(
+                         {
+                             "name": tool_call.function.name,
+                             "arguments": tool_call.function.arguments,
+                         }
+                     )
+
+             # Extract content before tool calls
+             content_index = model_output.find(self.tool_call_start_token)
+             content_index = (
+                 content_index
+                 if content_index >= 0
+                 else model_output.find(self.tool_call_prefix)
+             )
+             content = model_output[:content_index]  # .rstrip()
+
+             return ExtractedToolCallInformation(
+                 tools_called=(len(tool_calls) > 0),
+                 tool_calls=tool_calls,
+                 content=content if content else None,
+             )
+
+         except Exception:
+             logger.exception("Error in extracting tool call from response.")
+             return ExtractedToolCallInformation(
+                 tools_called=False, tool_calls=[], content=model_output
+             )
+
+     def extract_tool_calls_streaming(
+         self,
+         previous_text: str,
+         current_text: str,
+         delta_text: str,
+         previous_token_ids: Sequence[int],
+         current_token_ids: Sequence[int],
+         delta_token_ids: Sequence[int],
+         request: ChatCompletionRequest,
+     ) -> Union[DeltaMessage, None]:
+         # If no delta text, return None unless it's an EOS token after tool calls
+         if not delta_text:
+             # Check if this is an EOS token after all tool calls are complete.
+             # We check for tool calls in the text even if is_tool_call_started is
+             # False, because it might have been reset after processing all tools.
+             if delta_token_ids and self.tool_call_end_token_id not in delta_token_ids:
+                 # Count complete tool calls
+                 complete_calls = len(
+                     self.tool_call_complete_regex.findall(current_text)
+                 )
+
+                 # If we have completed tool calls and populated prev_tool_call_arr
+                 if complete_calls > 0 and len(self.prev_tool_call_arr) > 0:
+                     # Check if all tool calls are closed
+                     open_calls = current_text.count(
+                         self.tool_call_start_token
+                     ) - current_text.count(self.tool_call_end_token)
+                     if open_calls == 0:
+                         # Return empty delta message to allow finish_reason processing
+                         return DeltaMessage(content="")
+                 elif not self.is_tool_call_started and current_text:
+                     # This is a regular content response that's now complete
+                     return DeltaMessage(content="")
+             return None
+
+         # Check if this is the first call (reset state if needed)
+         if not previous_text:
+             self._reset_streaming_state()
+
+         # Update accumulated text
+         self.accumulated_text = current_text
+
+         # Check if we need to advance to next tool
+         if self.json_closed and not self.in_function:
+             # Check if this tool call has ended
+             tool_ends = current_text.count(self.tool_call_end_token)
+             if tool_ends > self.current_tool_index:
+                 # This tool has ended, advance to next
+                 self.current_tool_index += 1
+                 self.header_sent = False
+                 self.param_count = 0
+                 self.json_started = False
+                 self.json_closed = False
+
+                 # Check if there are more tool calls
+                 tool_starts = current_text.count(self.tool_call_start_token)
+                 if self.current_tool_index >= tool_starts:
+                     # No more tool calls
+                     self.is_tool_call_started = False
+                 # Continue processing next tool
+                 return None
+
+         # Handle normal content before tool calls
+         if not self.is_tool_call_started:
+             # Check if tool call is starting
+             if (
+                 self.tool_call_start_token_id in delta_token_ids
+                 or self.tool_call_start_token in delta_text
+             ):
+                 self.is_tool_call_started = True
+                 # Return any content before the tool call
+                 if self.tool_call_start_token in delta_text:
+                     content_before = delta_text[
+                         : delta_text.index(self.tool_call_start_token)
+                     ]
+                     if content_before:
+                         return DeltaMessage(content=content_before)
+                 return None
+             else:
+                 # Check if we're between tool calls - skip whitespace
+                 if current_text.rstrip().endswith(self.tool_call_end_token):
+                     # We just ended a tool call, skip whitespace
+                     if delta_text.strip() == "":
+                         return None
+                 # Normal content, no tool call
+                 return DeltaMessage(content=delta_text)
+
+         # Check if we're between tool calls (waiting for next one)
+         # Count tool calls we've seen vs processed
+         tool_starts_count = current_text.count(self.tool_call_start_token)
+         if self.current_tool_index >= tool_starts_count:
+             # We're past all tool calls, shouldn't be here
+             return None
+
+         # We're in a tool call, find the current tool call portion
+         # Need to find the correct tool call based on current_tool_index
+         tool_starts = []
+         idx = 0
+         while True:
+             idx = current_text.find(self.tool_call_start_token, idx)
+             if idx == -1:
+                 break
+             tool_starts.append(idx)
+             idx += len(self.tool_call_start_token)
+
+         if self.current_tool_index >= len(tool_starts):
+             # No more tool calls to process yet
+             return None
+
+         tool_start_idx = tool_starts[self.current_tool_index]
+         # Find where this tool call ends (or current position if not ended yet)
+         tool_end_idx = current_text.find(self.tool_call_end_token, tool_start_idx)
+         if tool_end_idx == -1:
+             tool_text = current_text[tool_start_idx:]
+         else:
+             tool_text = current_text[
+                 tool_start_idx : tool_end_idx + len(self.tool_call_end_token)
+             ]
+
+         # Looking for function header
+         if not self.header_sent:
+             if self.tool_call_prefix in tool_text:
+                 func_start = tool_text.find(self.tool_call_prefix) + len(
+                     self.tool_call_prefix
+                 )
+                 func_end = tool_text.find(">", func_start)
+
+                 if func_end != -1:
+                     # Found complete function name
+                     self.current_function_name = tool_text[func_start:func_end]
+                     self.current_tool_id = self._generate_tool_call_id()
+                     self.header_sent = True
+                     self.in_function = True
+
+                     # IMPORTANT: Add to prev_tool_call_arr immediately when we
+                     # detect a tool call. This ensures finish_reason="tool_calls"
+                     # even if parsing isn't complete.
+                     already_added = any(
+                         tool.get("name") == self.current_function_name
+                         for tool in self.prev_tool_call_arr
+                     )
+                     if not already_added:
+                         self.prev_tool_call_arr.append(
+                             {
+                                 "name": self.current_function_name,
+                                 "arguments": "{}",  # Placeholder, will be updated later
+                             }
+                         )
+
+                     # Send header with function info
+                     return DeltaMessage(
+                         tool_calls=[
+                             DeltaToolCall(
+                                 index=self.current_tool_index,
+                                 id=self.current_tool_id,
+                                 function=DeltaFunctionCall(
+                                     name=self.current_function_name, arguments=""
+                                 ),
+                                 type="function",
+                             )
+                         ]
+                     )
+             return None
+
+         # We've sent header, now handle function body
+         if self.in_function:
+             # Send opening brace if not sent yet
+             if not self.json_started and self.parameter_prefix not in delta_text:
+                 self.json_started = True
+                 return DeltaMessage(
+                     tool_calls=[
+                         DeltaToolCall(
+                             index=self.current_tool_index,
+                             function=DeltaFunctionCall(arguments="{"),
+                         )
+                     ]
+                 )
+
+             # Make sure json_started is set if we're processing parameters
+             if not self.json_started:
+                 self.json_started = True
+
+             # Check for function end in accumulated text
+             if not self.json_closed and self.function_end_token in tool_text:
+                 # Close JSON
+                 self.json_closed = True
+
+                 # Extract the complete tool call to update prev_tool_call_arr
+                 # with the final arguments. Find the function content first.
+                 func_start = tool_text.find(self.tool_call_prefix) + len(
+                     self.tool_call_prefix
+                 )
+                 func_content_end = tool_text.find(self.function_end_token, func_start)
+                 if func_content_end != -1:
+                     func_content = tool_text[func_start:func_content_end]
+                     # Parse to get the complete arguments
+                     try:
+                         parsed_tool = self._parse_xml_function_call(
+                             func_content, request.tools if request else None
+                         )
+                         if parsed_tool:
+                             # Update existing entry in prev_tool_call_arr with
+                             # complete arguments
+                             for i, tool in enumerate(self.prev_tool_call_arr):
+                                 if tool.get("name") == parsed_tool.function.name:
+                                     self.prev_tool_call_arr[i]["arguments"] = (
+                                         parsed_tool.function.arguments
+                                     )
+                                     break
+                     except Exception:
+                         pass  # Ignore parsing errors during streaming
+
+                 result = DeltaMessage(
+                     tool_calls=[
+                         DeltaToolCall(
+                             index=self.current_tool_index,
+                             function=DeltaFunctionCall(arguments="}"),
+                         )
+                     ]
+                 )
+
+                 # Reset state for next tool
+                 self.in_function = False
+                 self.json_closed = True
+
+                 return result
+
+             # Look for parameters
+             # Count how many complete parameters we have processed
+             complete_params = tool_text.count(self.parameter_end_token)
+
+             # Check if we should start a new parameter
+             if not self.in_param and self.param_count < complete_params:
+                 # Find the unprocessed parameter
+                 # Count parameter starts
+                 param_starts = []
+                 idx = 0
+                 while True:
+                     idx = tool_text.find(self.parameter_prefix, idx)
+                     if idx == -1:
+                         break
+                     param_starts.append(idx)
+                     idx += len(self.parameter_prefix)
+
+                 if len(param_starts) > self.param_count:
+                     # Process the next parameter
+                     param_idx = param_starts[self.param_count]
+                     param_start = param_idx + len(self.parameter_prefix)
+                     remaining = tool_text[param_start:]
+
+                     if ">" in remaining:
+                         # We have the complete parameter name
+                         name_end = remaining.find(">")
+                         self.current_param_name = remaining[:name_end]
+
+                         # Find the parameter value
+                         value_start = param_start + name_end + 1
+                         value_text = tool_text[value_start:]
+                         if value_text.startswith("\n"):
+                             value_text = value_text[1:]
+
+                         # Find where this parameter ends
+                         param_end_idx = value_text.find(self.parameter_end_token)
+                         if param_end_idx != -1:
+                             # Complete parameter found
+                             param_value = value_text[:param_end_idx]
+                             if param_value.endswith("\n"):
+                                 param_value = param_value[:-1]
+
+                             # Build complete JSON fragment for this parameter
+                             if self.param_count == 0:
+                                 json_fragment = (
+                                     '"'
+                                     + self.current_param_name
+                                     + '": "'
+                                     + json.dumps(param_value)[1:-1]
+                                     + '"'
+                                 )
+                             else:
+                                 json_fragment = (
+                                     ', "'
+                                     + self.current_param_name
+                                     + '": "'
+                                     + json.dumps(param_value)[1:-1]
+                                     + '"'
+                                 )
+
+                             self.param_count += 1
+
+                             return DeltaMessage(
+                                 tool_calls=[
+                                     DeltaToolCall(
+                                         index=self.current_tool_index,
+                                         function=DeltaFunctionCall(
+                                             arguments=json_fragment
+                                         ),
+                                     )
+                                 ]
+                             )
+
+             # Continue parameter value
+             if self.in_param:
+                 if self.parameter_end_token in delta_text:
+                     # End of parameter
+                     end_idx = delta_text.find(self.parameter_end_token)
+                     value_chunk = delta_text[:end_idx]
+
+                     # Skip past > if at start
+                     if not self.current_param_value and ">" in value_chunk:
+                         gt_idx = value_chunk.find(">")
+                         value_chunk = value_chunk[gt_idx + 1 :]
+
+                     if not self.current_param_value and value_chunk.startswith("\n"):
+                         value_chunk = value_chunk[1:]
+
+                     # Calculate incremental JSON
+                     full_value = self.current_param_value + value_chunk
+                     prev_escaped = (
+                         json.dumps(self.current_param_value)[1:-1]
+                         if self.current_param_value
+                         else ""
+                     )
+                     full_escaped = json.dumps(full_value)[1:-1]
+                     delta_escaped = full_escaped[len(prev_escaped) :]
+
+                     self.in_param = False
+                     self.current_param_value = ""
+
+                     return DeltaMessage(
+                         tool_calls=[
+                             DeltaToolCall(
+                                 index=self.current_tool_index,
+                                 function=DeltaFunctionCall(
+                                     arguments=delta_escaped + '"'
+                                 ),
+                             )
+                         ]
+                     )
+                 else:
+                     # Continue accumulating value
+                     value_chunk = delta_text
+
+                     # Handle first chunk after param name
+                     if not self.current_param_value and ">" in value_chunk:
+                         gt_idx = value_chunk.find(">")
+                         value_chunk = value_chunk[gt_idx + 1 :]
+
+                     if not self.current_param_value and value_chunk.startswith("\n"):
+                         value_chunk = value_chunk[1:]
+
+                     if value_chunk:
+                         # Stream the escaped delta
+                         prev_escaped = (
+                             json.dumps(self.current_param_value)[1:-1]
+                             if self.current_param_value
+                             else ""
+                         )
+                         self.current_param_value += value_chunk
+                         full_escaped = json.dumps(self.current_param_value)[1:-1]
+                         delta_escaped = full_escaped[len(prev_escaped) :]
+
+                         if delta_escaped:
+                             return DeltaMessage(
+                                 tool_calls=[
+                                     DeltaToolCall(
+                                         index=self.current_tool_index,
+                                         function=DeltaFunctionCall(
+                                             arguments=delta_escaped
+                                         ),
+                                     )
+                                 ]
+                             )
+
+         return None
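The streaming branches above lean on one small trick worth isolating: to emit JSON-escaped argument text incrementally, the parser escapes the full accumulated value with `json.dumps(...)[1:-1]` and sends only the suffix that is new since the previous delta. A standalone sketch of that idea, with a hypothetical `escaped_delta` helper name:

```python
import json


def escaped_delta(accumulated: str, chunk: str) -> str:
    """Return the JSON-escaped text that is new after appending `chunk`.

    Escaping the whole value each time (rather than the chunk alone) keeps
    escape sequences correct even when a quote, backslash, or newline
    arrives in its own delta.
    """
    prev_escaped = json.dumps(accumulated)[1:-1] if accumulated else ""
    full_escaped = json.dumps(accumulated + chunk)[1:-1]
    return full_escaped[len(prev_escaped):]


# Streaming 'say "hi"\n' in three chunks yields valid escaped pieces
value = ""
pieces = []
for chunk in ['say "', 'hi"', "\n"]:
    pieces.append(escaped_delta(value, chunk))
    value += chunk
```

Because `json.dumps` escapes strings character by character, the concatenation of the emitted pieces is always exactly the escaped form of the full value, so the client can paste deltas together into a valid JSON string literal.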
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be75606093db2094d7cd20f3c2f385c212750648bd6ea4fb2bf507a6a4c55506
+ size 11422650
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "add_prefix_space": false,
+   "backend": "tokenizers",
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "is_local": true,
+   "model_max_length": 1048576,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "tool_parser_type": "qwen3_coder",
+   "unk_token": null
+ }