jma-informatique and tc-mb committed
Commit 1995f98 · 0 parents

Duplicate from openbmb/MiniCPM-V-4.6

Co-authored-by: Tianchi Cai <tc-mb@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,445 @@
---
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- minicpm-v
- multimodal
---

A Pocket-Sized MLLM for Ultra-Efficient Image and Video Understanding on Your Phone

[GitHub](https://github.com/OpenBMB/MiniCPM-o) | [CookBook](https://github.com/OpenSQZ/MiniCPM-V-CookBook) | [Demo](https://huggingface.co/spaces/openbmb/MiniCPM-V-4.6-Demo) |
[Feishu (Lark)](https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/feishu_qrcode.png)

## MiniCPM-V 4.6

**MiniCPM-V 4.6** is our most edge-deployment-friendly model to date. Built on SigLIP2-400M and the Qwen3.5-0.8B LLM, it inherits the strong single-image, multi-image, and video understanding capabilities of the MiniCPM-V family while significantly improving computational efficiency, and it introduces mixed 4x/16x visual token compression. Notable features of MiniCPM-V 4.6 include:

- 🔥 **Leading Foundation Capability.**
MiniCPM-V 4.6 scores 13 on the Artificial Analysis Intelligence Index benchmark, outperforming Qwen3.5-0.8B's score of 10 at 19x lower token cost and Qwen3.5-0.8B-Thinking's score of 11 at 43x lower token cost. It also surpasses the larger Ministral 3 3B (score of 11).

- 💪 **Strong Multimodal Capability.**
MiniCPM-V 4.6 outperforms Qwen3.5-0.8B on most vision-language understanding tasks, and reaches Qwen3.5 2B-level capability on many benchmarks, including OpenCompass, RefCOCO, HallusionBench, MUIRBench, and OCRBench.
- 🚀 **Ultra-Efficient Architecture.**
Based on the latest technique in [LLaVA-UHD v4](https://github.com/THUMAI-Lab/LLaVA-UHD-v4), MiniCPM-V 4.6 cuts visual encoding FLOPs by more than 50%. This makes it more efficient than even smaller models, reaching ~1.5x the token throughput of Qwen3.5-0.8B.
It also supports mixed 4x/16x visual token compression rates, allowing flexible switching between accuracy and speed.
- 📱 **Broad Mobile Platform Coverage.**
MiniCPM-V 4.6 can be deployed across all three mainstream mobile platforms — iOS, Android, and HarmonyOS. With all edge adaptation code open-sourced, developers can reproduce the on-device experience in [just a few steps](#deploy-minicpm-v-46-on-ios-android-and-harmonyos-platforms).
- 🛠️ **Developer Friendly.**
MiniCPM-V 4.6 is adapted to [inference frameworks](#inference-and-training) such as vLLM, SGLang, llama.cpp, and Ollama, and supports [fine-tuning ecosystems](#inference-and-training) such as SWIFT and LLaMA-Factory, so developers can quickly customize the model for new domains and tasks on consumer-grade GPUs. We provide multiple quantized variants across GGUF, BNB, AWQ, and GPTQ formats.


### Evaluation <!-- omit in toc -->

**Overall Performance (Instruct)**

<p align="center">
  <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/instruct.png" width="90%"></img>
</p>


<details>
<summary>Click to view MiniCPM-V 4.6-Thinking performance.</summary>


<p align="center">
  <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/thinking.png" width="90%"></img>
</p>


</details>


<details>
<summary>Click to view MiniCPM-V 4.6 inference efficiency results.</summary>


**High-Concurrency Throughput**

<p align="center">
  <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/throughput.png" width="60%"></img>
</p>

**Single-Request TTFT (ms)**

<p align="center">
  <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/ttft.png" width="60%"></img>
</p>


</details>


### Examples <!-- omit in toc -->

#### Overall

<div align="center">
  <a href="https://www.youtube.com/watch?v=Ch5UG1FoysM"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/video_play.png" width="70%"></a>
</div>

MiniCPM-V 4.6 can be deployed across three mainstream mobile platforms — **iOS, Android, and HarmonyOS**. The clips below are raw, unedited screen recordings on real devices.

<table align="center">
  <tr>
    <td align="center"><b>iPhone</b><br><sub>iPhone 17 Pro Max</sub></td>
    <td align="center"><b>Android</b><br><sub>Redmi K70</sub></td>
    <td align="center"><b>HarmonyOS</b><br><sub>HUAWEI nova 14</sub></td>
  </tr>
  <tr>
    <td align="center"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/v46_iphone_en_handwriting.gif" width="100%"/></td>
    <td align="center"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/v46_android_en_refraction.gif" width="100%"/></td>
    <td align="center"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/v46_harmonyos_en_ticket.gif" width="100%"/></td>
  </tr>
</table>


### Usages

#### Inference with Transformers <!-- omit in toc -->
##### Installation <!-- omit in toc -->

```bash
pip install "transformers[torch]>=5.7.0" torchvision torchcodec
```

> **Note on CUDA compatibility:** `torchcodec` (used for video decoding) may have compatibility issues with certain CUDA versions. For example, `torch>=2.11` bundles CUDA 13.1 by default, while environments with CUDA 12.x may encounter errors such as `RuntimeError: Could not load libtorchcodec`. Two workarounds:
>
> 1. **Replace `torchcodec` with `PyAV`** — supports both image and video inference without CUDA version constraints:
> ```bash
> pip install "transformers[torch]>=5.7.0" torchvision av
> ```
> 2. **Pin the CUDA version** when installing torch to match your environment (e.g. CUDA 12.8):
> ```bash
> pip install "transformers>=5.7.0" torchvision torchcodec --index-url https://download.pytorch.org/whl/cu128
> ```

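If you are unsure which CUDA build your installed torch wheel bundles, a quick diagnostic like the one below (our own sketch, not part of the official setup steps) can save a reinstall cycle:

```python
import torch

# CUDA version this torch wheel was built against,
# e.g. "12.8" (None for CPU-only builds).
print(torch.version.cuda)

# Check that torchcodec imports cleanly in this environment.
try:
    import torchcodec  # noqa: F401
    print("torchcodec loaded OK")
except Exception as err:
    print("torchcodec failed to load:", err)
```
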
##### Load Model <!-- omit in toc -->

```python
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "openbmb/MiniCPM-V-4.6"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Flash Attention 2 is recommended for better acceleration and memory saving,
# especially in multi-image and video scenarios (requires `import torch` for
# the explicit dtype below):
# model = AutoModelForImageTextToText.from_pretrained(
#     model_id,
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )
```

##### Image Inference <!-- omit in toc -->

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"},
            {"type": "text", "text": "What causes this phenomenon?"},
        ],
    }
]

downsample_mode = "16x"  # use downsample_mode="4x" for finer detail

inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_dict=True, return_tensors="pt",
    downsample_mode=downsample_mode,
    max_slice_nums=36,
).to(model.device)

generated_ids = model.generate(**inputs, downsample_mode=downsample_mode, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```
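
The `url` entries above fetch remote media. Recent Transformers releases also accept a `path` key in chat-template content items for local files; here is a minimal sketch, where `./demo.png` is a hypothetical local image:

```python
# Same pipeline as above, but reading a local file instead of a URL.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "path": "./demo.png"},  # hypothetical local file
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# apply_chat_template -> generate -> batch_decode then proceed exactly as above.
```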

##### Video Inference <!-- omit in toc -->

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/football.mp4"},
            {"type": "text", "text": "Describe this video in detail. Follow the timeline and focus on on-screen text, interface changes, main actions, and scene changes."},
        ],
    }
]

downsample_mode = "16x"  # use downsample_mode="4x" for finer detail

inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_dict=True, return_tensors="pt",
    downsample_mode=downsample_mode,
    max_num_frames=128,
    stack_frames=1,
    max_slice_nums=1,
    use_image_id=False,
).to(model.device)

generated_ids = model.generate(**inputs, downsample_mode=downsample_mode, max_new_tokens=2048)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```
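
The bundled chat template also exposes an `enable_thinking` flag (the vLLM example below disables it explicitly). If your Transformers version forwards extra keyword arguments from `apply_chat_template` into the template, as it does for Qwen3-style templates, a reasoning trace can plausibly be requested like this (a sketch under that assumption):

```python
# enable_thinking=True asks the template to open a <think> block for the
# assistant's reply; verify support in your transformers version.
inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_dict=True, return_tensors="pt",
    downsample_mode=downsample_mode,
    enable_thinking=True,
).to(model.device)
```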

##### Advanced Parameters <!-- omit in toc -->

You can customize image/video processing by passing additional parameters to `apply_chat_template`:

| Parameter | Default | Applies to | Description |
|-----------|---------|------------|-------------|
| `downsample_mode` | `"16x"` | Image & Video | Visual token downsampling. `"16x"` merges tokens for efficiency; `"4x"` keeps 4× more tokens for finer detail. Must also be passed to `generate()`. |
| `max_slice_nums` | `9` | Image & Video | Maximum number of slices when splitting a high-resolution image. Higher values preserve more detail for large images. Recommended: `36` for image, `1` for video. |
| `max_num_frames` | `128` | Video only | Maximum number of main frames sampled from the video. |
| `stack_frames` | `1` | Video only | Total sample points per second. `1` = main frame only (no stacking). `N` (N>1) = 1 main frame + N−1 sub-frames per second; the sub-frames are composited into a grid image and interleaved with main frames. Recommended: `3` or `5`. |
| `use_image_id` | `True` | Image & Video | Whether to prepend `<image_id>N</image_id>` tags before each image/frame placeholder. Recommended: `True` for image, `False` for video. |

> **Note:** `downsample_mode` must be passed to **both** `apply_chat_template` (for correct placeholder count) and `generate` (for the vision encoder). All other parameters only need to be passed to `apply_chat_template`.
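
As an illustration of how these knobs combine, here is a sketch of a higher-fidelity video configuration; the specific values are illustrative choices, not tuned recommendations:

```python
downsample_mode = "4x"  # keep 4x more visual tokens for finer detail

inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_dict=True, return_tensors="pt",
    downsample_mode=downsample_mode,  # must match the value passed to generate()
    max_num_frames=64,    # fewer main frames to offset the larger token budget
    stack_frames=3,       # 1 main frame + 2 sub-frames per second
    max_slice_nums=1,     # recommended setting for video
    use_image_id=False,   # recommended setting for video
).to(model.device)

generated_ids = model.generate(**inputs, downsample_mode=downsample_mode, max_new_tokens=2048)
```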

##### Serving with `transformers serve` <!-- omit in toc -->

Hugging Face Transformers includes a lightweight OpenAI-compatible server for quick testing and moderate-load deployment.

```bash
pip install "transformers[serving]>=5.7.0"
```

Start the server:

```bash
transformers serve openbmb/MiniCPM-V-4.6 --port 8000 --host 0.0.0.0 --continuous-batching
```

Send a request:

```bash
curl -s http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "openbmb/MiniCPM-V-4.6",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"}},
        {"type": "text", "text": "What causes this phenomenon?"}
      ]
    }]
  }'
```
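
Because the endpoint is OpenAI-compatible, the official `openai` Python client can talk to it as well. A minimal sketch (assumes `pip install openai`; the `api_key` value is a placeholder, since the local server does not check it):

```python
from openai import OpenAI

# Point the client at the local `transformers serve` endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="openbmb/MiniCPM-V-4.6",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"}},
            {"type": "text", "text": "What causes this phenomenon?"},
        ],
    }],
)
print(response.choices[0].message.content)
```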

#### Handling Escaped Newlines in Model Outputs <!-- omit in toc -->

In some cases, the model may output escaped newline characters (`\n`) as string literals instead of actual newlines. To render such text correctly, especially in UI layers, you can use the following utility function. It replaces literal `\n` with real newlines while protecting spans where `\n` has semantic meaning.

**Utility Function:**

```python
import re

_PATTERN = re.compile(
    r'(```[\s\S]*?```'        # fenced code blocks
    r'|`[^`]+`'               # inline code
    r'|\$\$[\s\S]*?\$\$'      # display math
    r'|\$[^$]+\$'             # inline math
    r'|\\\([\s\S]*?\\\)'      # \(...\)
    r'|\\\[[\s\S]*?\\\]'      # \[...\]
    r')'
    r'|(?<!\\)(?:\\r\\n|\\[nr])'
)

def normalize_response_text(text: str) -> str:
    """
    Lightweight post-processing: converts literal '\\n' to actual newlines,
    while protecting code blocks, inline code, and LaTeX math.
    """
    if not isinstance(text, str) or "\\" not in text:
        return text
    return _PATTERN.sub(lambda m: m.group(1) or '\n', text)
```
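
For example, with an illustrative raw string (note that the escape inside the math span is protected):

```python
raw = "Consider the field:\\n$\\nabla \\cdot E = 0$\\nwhich holds in vacuum."
print(normalize_response_text(raw))
# Output:
# Consider the field:
# $\nabla \cdot E = 0$
# which holds in vacuum.
# The literal \n separators become real newlines, while the \nabla
# inside $...$ is left untouched.
```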

#### Deploy MiniCPM-V 4.6 on iOS, Android, and HarmonyOS Platforms <!-- omit in toc -->

We have adapted MiniCPM-V 4.6 for deployment on **iOS, Android, and HarmonyOS** platforms, with **all edge adaptation code fully open-sourced**. Developers can reproduce the on-device experience in just a few steps. Visit our [edge deployment repository](https://github.com/OpenBMB/MiniCPM-V-edge-demo) for platform-specific build guides, or go to the [download page](https://github.com/OpenBMB/MiniCPM-V-edge-demo/blob/main/DOWNLOAD.md) to try pre-built apps directly.

<a id="inference-and-training"></a>
#### Use MiniCPM-V 4.6 in Other Inference and Training Frameworks <!-- omit in toc -->

MiniCPM-V 4.6 supports multiple inference and training frameworks. Below are quick-start commands for each. For full details, see our [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook).

<details>
<summary><b>vLLM</b> — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-v4_6_vllm.md">Full Guide</a></summary>

```bash
vllm serve openbmb/MiniCPM-V-4.6 \
  --port 8000 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --default-chat-template-kwargs '{"enable_thinking": false}'
```

> **Note:** `--enable-auto-tool-choice` and `--tool-call-parser qwen3_coder` enable tool/function calling support. If you don't need tool use, you can omit these flags and simply run `vllm serve openbmb/MiniCPM-V-4.6`.

```bash
curl -s http://localhost:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
  "model": "openbmb/MiniCPM-V-4.6",
  "messages": [{"role": "user", "content": [
    {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"}},
    {"type": "text", "text": "What causes this phenomenon?"}
  ]}]
}'
```


Tool calling example:

```bash
curl -s http://localhost:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
  "model": "openbmb/MiniCPM-V-4.6",
  "messages": [{"role": "user", "content": [
    {"type": "text", "text": "What is the weather in Beijing?"}
  ]}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string", "description": "City name"}
        },
        "required": ["location"]
      }
    }
  }]
}'
```

</details>

<details>
<summary><b>SGLang</b> — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/minicpm-v4_6_sglang.md">Full Guide</a></summary>

```bash
python -m sglang.launch_server --model openbmb/MiniCPM-V-4.6 --port 30000
```

```bash
curl -s http://localhost:30000/v1/chat/completions -H 'Content-Type: application/json' -d '{
  "model": "openbmb/MiniCPM-V-4.6",
  "messages": [{"role": "user", "content": [
    {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"}},
    {"type": "text", "text": "What causes this phenomenon?"}
  ]}]
}'
```

</details>

<details>
<summary><b>llama.cpp</b> — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-v4_6_llamacpp.md">Full Guide</a></summary>

```bash
llama-server -m MiniCPM-V-4.6-Q4_K_M.gguf --port 8080
```

```bash
curl -s http://localhost:8080/v1/chat/completions -H 'Content-Type: application/json' -d '{
  "model": "MiniCPM-V-4.6",
  "messages": [{"role": "user", "content": [
    {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"}},
    {"type": "text", "text": "What causes this phenomenon?"}
  ]}]
}'
```

</details>

<details>
<summary><b>Ollama</b> — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-v4_6_ollama.md">Full Guide</a></summary>

```bash
ollama run minicpm-v-4.6
```

In the interactive session, paste an image path or URL directly to chat with the model.

</details>

<details>
<summary><b>LLaMA-Factory</b> (Fine-tuning) — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/llamafactory_minicpmv46.md">Full Guide</a></summary>

```bash
llamafactory-cli train examples/train_lora/minicpmv4_6_lora_sft.yaml
```

</details>

<details>
<summary><b>ms-swift</b> (Fine-tuning) — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/swift_minicpmv46.md">Full Guide</a></summary>

```bash
swift sft --model_type minicpm-v-4_6 --dataset <your-dataset>
```

</details>

## License

#### Model License
* The MiniCPM-o/V model weights and code are open-sourced under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM-V/blob/main/LICENSE) license.

#### Statement
* As MLLMs, MiniCPM-o/V models generate content by learning from a large amount of multimodal corpora, but they cannot comprehend, express personal opinions, or make value judgements. Anything generated by MiniCPM-o/V models does not represent the views and positions of the model developers.
* We will not be liable for any problems arising from the use of MiniCPM-o/V models, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the misdirection, misuse, or dissemination of the model.


## Technical Reports and Key Techniques Papers

👏 Welcome to explore the key techniques of MiniCPM-o/V and other multimodal projects from our team:

**Technical Reports:** [MiniCPM-o 4.5](https://huggingface.co/papers/2604.27393) | [MiniCPM-V 4.5](https://arxiv.org/abs/2509.18154) | [MiniCPM-o 2.6](https://openbmb.notion.site/MiniCPM-o-2-6-A-GPT-4o-Level-MLLM-for-Vision-Speech-and-Multimodal-Live-Streaming-on-Your-Phone-185ede1b7a558042b5d5e45e6b237da9) | [MiniCPM-Llama3-V 2.5](https://arxiv.org/abs/2408.01800) | [MiniCPM-V 2.0](https://openbmb.vercel.app/minicpm-v-2)

**Other Multimodal Projects:** [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLPR](https://github.com/OpenBMB/RLPR) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)


## Citation <!-- omit in toc -->

If you find our model/code/paper helpful, please consider citing our papers 📝 and starring our repos ⭐️!

```bib
@misc{yu2025minicpmv45cookingefficient,
  title={MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe},
  author={Tianyu Yu and Zefan Wang and Chongyi Wang and Fuwei Huang and Wenshuo Ma and Zhihui He and Tianchi Cai and Weize Chen and Yuxiang Huang and Yuanqian Zhao and others},
  year={2025},
  url={https://arxiv.org/abs/2509.18154},
}

@article{yao2024minicpm,
  title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
  author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
  journal={arXiv preprint arXiv:2408.01800},
  year={2024}
}
```
chat_template.jinja ADDED
@@ -0,0 +1,145 @@
{%- if enable_thinking is not defined -%}
    {%- set enable_thinking = false -%}
{%- endif -%}
{%- macro render_content(content, is_system_content=false) -%}
    {%- if content is string -%}
        {{- content -}}
    {%- elif content is iterable and content is not mapping -%}
        {%- set ns = namespace(parts=[]) -%}
        {%- for item in content -%}
            {%- if 'image' in item or 'image_url' in item or item.type == 'image' -%}
                {%- if is_system_content -%}
                    {{- raise_exception('System message cannot contain images.') -}}
                {%- endif -%}
                {%- set ns.parts = ns.parts + ['<|image_pad|>'] -%}
            {%- elif 'video' in item or item.type == 'video' -%}
                {%- if is_system_content -%}
                    {{- raise_exception('System message cannot contain videos.') -}}
                {%- endif -%}
                {%- set ns.parts = ns.parts + ['<|video_pad|>'] -%}
            {%- elif 'text' in item -%}
                {%- set ns.parts = ns.parts + [item.text] -%}
            {%- else -%}
                {{- raise_exception('Unexpected item type in content.') -}}
            {%- endif -%}
        {%- endfor -%}
        {{- ns.parts | join('\n') -}}
    {%- elif content is none or content is undefined -%}
        {{- '' -}}
    {%- else -%}
        {{- raise_exception('Unexpected content type.') -}}
    {%- endif -%}
{%- endmacro -%}
{%- if not messages %}
    {{- raise_exception('No messages provided.') }}
{%- endif %}
{%- if tools and tools is iterable and tools is not mapping %}
    {{- '<|im_start|>system\n' }}
    {{- "# Tools\n\nYou have access to the following functions:\n\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>" }}
    {{- '\n\nIf you choose to call a function ONLY reply in the following format with NO suffix:\n\n<tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>\nvalue_1\n</parameter>\n<parameter=example_parameter_2>\nThis is the value for the second parameter\nthat can span\nmultiple lines\n</parameter>\n</function>\n</tool_call>\n\n<IMPORTANT>\nReminder:\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\n- Required parameters MUST be specified\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\n</IMPORTANT>' }}
    {%- if messages[0].role == 'system' %}
        {%- set content = render_content(messages[0].content, true)|trim %}
        {%- if content %}
            {{- '\n\n' + content }}
        {%- endif %}
    {%- endif %}
    {{- '<|im_end|>\n' }}
{%- else %}
    {%- if messages[0].role == 'system' %}
        {%- set content = render_content(messages[0].content, true)|trim %}
        {{- '<|im_start|>system\n' + content + '<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
    {%- set index = (messages|length - 1) - loop.index0 %}
    {%- if ns.multi_step_tool and message.role == "user" %}
        {%- set content = render_content(message.content)|trim %}
        {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}
            {%- set ns.multi_step_tool = false %}
            {%- set ns.last_query_index = index %}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if ns.multi_step_tool %}
    {{- raise_exception('No user query found in messages.') }}
{%- endif %}
{%- for message in messages %}
    {%- set content = render_content(message.content)|trim %}
    {%- if message.role == "system" %}
        {%- if not loop.first %}
            {{- raise_exception('System message must be at the beginning.') }}
        {%- endif %}
    {%- elif message.role == "user" %}
        {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {%- set reasoning_content = '' %}
        {%- if message.reasoning_content is string %}
            {%- set reasoning_content = message.reasoning_content %}
        {%- else %}
            {%- if '</think>' in content %}
                {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
                {%- set content = content.split('</think>')[-1].lstrip('\n') %}
            {%- endif %}
        {%- endif %}
        {%- set reasoning_content = reasoning_content|trim %}
        {%- if loop.index0 > ns.last_query_index %}
            {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content + '\n</think>\n\n' + content }}
        {%- else %}
            {{- '<|im_start|>' + message.role + '\n' + content }}
        {%- endif %}
        {%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %}
            {%- for tool_call in message.tool_calls %}
                {%- if tool_call.function is defined %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {%- if loop.first %}
                    {%- if content|trim %}
                        {{- '\n\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
                    {%- else %}
                        {{- '<tool_call>\n<function=' + tool_call.name + '>\n' }}
                    {%- endif %}
                {%- else %}
                    {{- '\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
                {%- endif %}
                {%- if tool_call.arguments is defined %}
                    {%- for args_name, args_value in tool_call.arguments|items %}
                        {{- '<parameter=' + args_name + '>\n' }}
                        {%- set args_value = args_value | tojson | safe if args_value is mapping or (args_value is sequence and args_value is not string) else args_value | string %}
                        {{- args_value }}
                        {{- '\n</parameter>\n' }}
                    {%- endfor %}
                {%- endif %}
                {{- '</function>\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if loop.previtem and loop.previtem.role != "tool" %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- content }}
        {{- '\n</tool_response>' }}
        {%- if not loop.last and loop.nextitem.role != "tool" %}
            {{- '<|im_end|>\n' }}
        {%- elif loop.last %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- else %}
        {{- raise_exception('Unexpected message role.') }}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
    {%- if enable_thinking is defined and enable_thinking is false %}
        {{- '<think>\n\n</think>\n\n' }}
    {%- else %}
        {{- '<think>\n' }}
    {%- endif %}
{%- endif %}
config.json ADDED
@@ -0,0 +1,90 @@
{
  "architectures": [
    "MiniCPMV4_6ForConditionalGeneration"
  ],
  "bos_token_id": null,
  "drop_vision_last_layer": false,
  "eos_token_id": 248044,
  "image_size": 1120,
  "model_type": "minicpmv4_6",
  "pad_token_id": null,
  "tie_word_embeddings": true,
  "transformers_version": "5.7.0",
  "use_cache": true,
  "vision_config": {
    "attention_dropout": 0.0,
    "hidden_act": "gelu_pytorch_tanh",
    "hidden_size": 1152,
    "image_size": 980,
    "intermediate_size": 4304,
    "layer_norm_eps": 1e-06,
    "model_type": "minicpmv4_6_vision",
    "num_attention_heads": 16,
    "num_channels": 3,
    "num_hidden_layers": 27,
    "patch_size": 14
  },
  "text_config": {
    "attention_bias": false,
    "attention_dropout": 0.0,
    "attn_output_gate": true,
    "full_attention_interval": 4,
    "head_dim": 256,
    "hidden_act": "silu",
    "hidden_size": 1024,
    "initializer_range": 0.02,
    "intermediate_size": 3584,
    "layer_types": [
      "linear_attention",
      "linear_attention",
      "linear_attention",
      "full_attention",
      "linear_attention",
      "linear_attention",
      "linear_attention",
      "full_attention",
      "linear_attention",
      "linear_attention",
      "linear_attention",
      "full_attention",
      "linear_attention",
      "linear_attention",
      "linear_attention",
      "full_attention",
      "linear_attention",
      "linear_attention",
      "linear_attention",
      "full_attention",
      "linear_attention",
      "linear_attention",
      "linear_attention",
      "full_attention"
    ],
    "linear_conv_kernel_dim": 4,
    "linear_key_head_dim": 128,
    "linear_num_key_heads": 16,
    "linear_num_value_heads": 16,
    "linear_value_head_dim": 128,
    "mamba_ssm_dtype": "float32",
    "max_position_embeddings": 262144,
    "mlp_only_layers": [],
    "mtp_num_hidden_layers": 1,
    "mtp_use_dedicated_embeddings": false,
    "num_attention_heads": 8,
    "num_hidden_layers": 24,
    "num_key_value_heads": 2,
    "partial_rotary_factor": 0.25,
    "rms_norm_eps": 1e-06,
    "rope_parameters": {
      "partial_rotary_factor": 0.25,
      "rope_theta": 10000000,
      "rope_type": "default"
    },
    "vocab_size": 248094,
    "model_type": "qwen3_5_text",
    "tie_word_embeddings": true
  },
  "insert_layer_id": 6,
  "image_token_id": 248056,
  "video_token_id": 248057
}
generation_config.json ADDED
@@ -0,0 +1,13 @@
{
  "bos_token_id": 248045,
  "do_sample": true,
  "eos_token_id": [
    248044,
    248046
  ],
  "temperature": 0.7,
  "top_k": 0,
  "top_p": 1.0,
  "repetition_penalty": 1.0,
  "transformers_version": "5.7.0"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa67da5820411176d0f9593a00265bc25a73c45f62dc5a605a93b1b5516a0d34
size 2600957528
preprocessor_config.json ADDED
@@ -0,0 +1,19 @@
{
  "image_processor_type": "MiniCPMV4_6ImageProcessor",
  "processor_class": "MiniCPMV4_6Processor",
  "max_slice_nums": 9,
  "scale_resolution": 448,
  "patch_size": 14,
  "use_image_id": true,
  "slice_mode": true,
  "image_mean": [
    0.5,
    0.5,
    0.5
  ],
  "image_std": [
    0.5,
    0.5,
    0.5
  ]
}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:33861e37bb955af1e3f3061182b820f347eba2b9c2c1011c82794bf0d6e77b54
size 19992481
tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
{
  "add_prefix_space": false,
  "audio_bos_token": "<|audio_start|>",
  "audio_eos_token": "<|audio_end|>",
  "audio_token": "<|audio_pad|>",
  "backend": "tokenizers",
  "bos_token": "<|im_start|>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
  "extra_special_tokens": {
    "image_token": "<|image_pad|>",
    "video_token": "<|video_pad|>",
    "image_start_token": "<image>",
    "image_end_token": "</image>",
    "slice_start_token": "<slice>",
    "slice_end_token": "</slice>",
    "image_id_start_token": "<image_id>",
    "image_id_end_token": "</image_id>"
  },
  "image_token": "<|image_pad|>",
  "is_local": true,
  "model_max_length": 262144,
  "model_specific_special_tokens": {
    "audio_bos_token": "<|audio_start|>",
    "audio_eos_token": "<|audio_end|>",
    "audio_token": "<|audio_pad|>",
    "image_token": "<|image_pad|>",
    "video_token": "<|video_pad|>",
    "vision_bos_token": "<|vision_start|>",
    "vision_eos_token": "<|vision_end|>"
  },
  "pad_token": "<|endoftext|>",
  "pretokenize_regex": "(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?[\\p{L}\\p{M}]+|\\p{N}| ?[^\\s\\p{L}\\p{M}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+",
  "split_special_tokens": false,
  "unk_token": "<unk>",
  "video_token": "<|video_pad|>",
  "vision_bos_token": "<|vision_start|>",
  "vision_eos_token": "<|vision_end|>",
  "chat_template": "{%- if enable_thinking is not defined -%}\n    {%- set enable_thinking = false -%}\n{%- endif -%}\n{%- macro render_content(content, is_system_content=false) -%}\n    {%- if content is string -%}\n        {{- content -}}\n    {%- elif content is iterable and content is not mapping -%}\n        {%- set ns = namespace(parts=[]) -%}\n        {%- for item in content -%}\n            {%- if 'image' in item or 'image_url' in item or item.type == 'image' -%}\n                {%- if is_system_content -%}\n                    {{- raise_exception('System message cannot contain images.') -}}\n                {%- endif -%}\n                {%- set ns.parts = ns.parts + ['<|image_pad|>'] -%}\n            {%- elif 'video' in item or item.type == 'video' -%}\n                {%- if is_system_content -%}\n                    {{- raise_exception('System message cannot contain videos.') -}}\n                {%- endif -%}\n                {%- set ns.parts = ns.parts + ['<|video_pad|>'] -%}\n            {%- elif 'text' in item -%}\n                {%- set ns.parts = ns.parts + [item.text] -%}\n            {%- else -%}\n                {{- raise_exception('Unexpected item type in content.') -}}\n            {%- endif -%}\n        {%- endfor -%}\n        {{- ns.parts | join('\\n') -}}\n    {%- elif content is none or content is undefined -%}\n        {{- '' -}}\n    {%- else -%}\n        {{- raise_exception('Unexpected content type.') -}}\n    {%- endif -%}\n{%- endmacro -%}\n{%- if not messages %}\n    {{- raise_exception('No messages provided.') }}\n{%- endif %}\n{%- if tools and tools is iterable and tools is not mapping %}\n    {{- '<|im_start|>system\\n' }}\n    {{- \"# Tools\\n\\nYou have access to the following functions:\\n\\n<tools>\" }}\n    {%- for tool in tools %}\n        {{- \"\\n\" }}\n        {{- tool | tojson }}\n    {%- endfor %}\n    {{- \"\\n</tools>\" }}\n    {{- '\\n\\nIf you choose to call a function ONLY reply in the following format with NO suffix:\\n\\n<tool_call>\\n<function=example_function_name>\\n<parameter=example_parameter_1>\\nvalue_1\\n</parameter>\\n<parameter=example_parameter_2>\\nThis is the value for the second parameter\\nthat can span\\nmultiple lines\\n</parameter>\\n</function>\\n</tool_call>\\n\\n<IMPORTANT>\\nReminder:\\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\\n- Required parameters MUST be specified\\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\\n</IMPORTANT>' }}\n    {%- if messages[0].role == 'system' %}\n        {%- set content = render_content(messages[0].content, true)|trim %}\n        {%- if content %}\n            {{- '\\n\\n' + content }}\n        {%- endif %}\n    {%- endif %}\n    {{- '<|im_end|>\\n' }}\n{%- else %}\n    {%- if messages[0].role == 'system' %}\n        {%- set content = render_content(messages[0].content, true)|trim %}\n        {{- '<|im_start|>system\\n' + content + '<|im_end|>\\n' }}\n    {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n    {%- set index = (messages|length - 1) - loop.index0 %}\n    {%- if ns.multi_step_tool and message.role == \"user\" %}\n        {%- set content = render_content(message.content)|trim %}\n        {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}\n            {%- set ns.multi_step_tool = false %}\n            {%- set ns.last_query_index = index %}\n        {%- endif %}\n    {%- endif %}\n{%- endfor %}\n{%- if ns.multi_step_tool %}\n    {{- raise_exception('No user query found in messages.') }}\n{%- endif %}\n{%- for message in messages %}\n    {%- set content = render_content(message.content)|trim %}\n    {%- if message.role == \"system\" %}\n        {%- if not loop.first %}\n            {{- raise_exception('System message must be at the beginning.') }}\n        {%- endif %}\n    {%- elif message.role == \"user\" %}\n        {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n    {%- elif message.role == \"assistant\" %}\n        {%- set reasoning_content = '' %}\n        {%- if message.reasoning_content is string %}\n            {%- set reasoning_content = message.reasoning_content %}\n        {%- else %}\n            {%- if '</think>' in content %}\n                {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n                {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n            {%- endif %}\n        {%- endif %}\n        {%- set reasoning_content = reasoning_content|trim %}\n        {%- if loop.index0 > ns.last_query_index %}\n            {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content + '\\n</think>\\n\\n' + content }}\n        {%- else %}\n            {{- '<|im_start|>' + message.role + '\\n' + content }}\n        {%- endif %}\n        {%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %}\n            {%- for tool_call in message.tool_calls %}\n                {%- if tool_call.function is defined %}\n                    {%- set tool_call = tool_call.function %}\n                {%- endif %}\n                {%- if loop.first %}\n                    {%- if content|trim %}\n                        {{- '\\n\\n<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n                    {%- else %}\n                        {{- '<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n                    {%- endif %}\n                {%- else %}\n                    {{- '\\n<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n                {%- endif %}\n                {%- if tool_call.arguments is defined %}\n                    {%- for args_name, args_value in tool_call.arguments|items %}\n                        {{- '<parameter=' + args_name + '>\\n' }}\n                        {%- set args_value = args_value | tojson | safe if args_value is mapping or (args_value is sequence and args_value is not string) else args_value | string %}\n                        {{- args_value }}\n                        {{- '\\n</parameter>\\n' }}\n                    {%- endfor %}\n                {%- endif %}\n                {{- '</function>\\n</tool_call>' }}\n            {%- endfor %}\n        {%- endif %}\n        {{- '<|im_end|>\\n' }}\n    {%- elif message.role == \"tool\" %}\n        {%- if loop.previtem and loop.previtem.role != \"tool\" %}\n            {{- '<|im_start|>user' }}\n        {%- endif %}\n        {{- '\\n<tool_response>\\n' }}\n        {{- content }}\n        {{- '\\n</tool_response>' }}\n        {%- if not loop.last and loop.nextitem.role != \"tool\" %}\n            {{- '<|im_end|>\\n' }}\n        {%- elif loop.last %}\n            {{- '<|im_end|>\\n' }}\n        {%- endif %}\n    {%- else %}\n        {{- raise_exception('Unexpected message role.') }}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|im_start|>assistant\\n' }}\n    {%- if enable_thinking is defined and enable_thinking is false %}\n        {{- '<think>\\n\\n</think>\\n\\n' }}\n    {%- else %}\n        {{- '<think>\\n' }}\n    {%- endif %}\n{%- endif %}\n",
  "tokenizer_class": "Qwen2Tokenizer"
}