Commit 18d6217 (verified) · Parent: 35aefd0
tc-mb committed: Add/Update README.md (model card)

Files changed (1): README.md (+451 −3)

---
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- minicpm-v
- multimodal
---

A Pocket-Sized MLLM for Ultra-Efficient Image and Video Understanding on Your Phone

[GitHub](https://github.com/OpenBMB/MiniCPM-o) | [CookBook](https://github.com/OpenSQZ/MiniCPM-V-CookBook) | [Demo](https://huggingface.co/spaces/openbmb/MiniCPM-V-4.6-Thinking-Demo) | [Feishu (Lark)](https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/feishu_qrcode.png)

## MiniCPM-V 4.6 Thinking

**MiniCPM-V 4.6 Thinking** is the long chain-of-thought reasoning variant of [MiniCPM-V 4.6](https://huggingface.co/openbmb/MiniCPM-V-4.6). It generates an explicit reasoning trace before producing the final answer, substantially boosting performance on complex multimodal reasoning, math, and OCR-heavy tasks, while keeping the same edge-friendly architecture (SigLIP2-400M vision encoder + Qwen3.5-0.8B LLM) and the mixed 4x/16x visual token compression of MiniCPM-V 4.6. Notable features of MiniCPM-V 4.6 Thinking include:

- 🔥 **Leading Foundation Capability.**
  MiniCPM-V 4.6 scores 13 on the Artificial Analysis Intelligence Index benchmark, outperforming Qwen3.5-0.8B (score of 10) at 19x lower token cost and Qwen3.5-0.8B-Thinking (score of 11) at 43x lower token cost. It also surpasses the larger Ministral 3 3B (score of 11).
- 💪 **Strong Multimodal Capability.**
  MiniCPM-V 4.6 outperforms Qwen3.5-0.8B on most vision-language understanding tasks, and reaches Qwen3.5 2B-level capability on many benchmarks including OpenCompass, RefCOCO, HallusionBench, MUIRBench, and OCRBench.
- 🚀 **Ultra-Efficient Architecture.**
  Based on the latest technique in [LLaVA-UHD v4](https://github.com/THUMAI-Lab/LLaVA-UHD-v4), MiniCPM-V 4.6 cuts visual encoding FLOPs by more than 50%, giving it better efficiency than even smaller models: roughly 1.5x the token throughput of Qwen3.5-0.8B.
  It also supports a mixed 4x/16x visual token compression rate, allowing a flexible trade-off between accuracy and speed.
- 📱 **Broad Mobile Platform Coverage.**
  MiniCPM-V 4.6 can be deployed across all three mainstream mobile platforms — iOS, Android, and HarmonyOS. With all edge adaptation code open-sourced, developers can reproduce the on-device experience in [just a few steps](#deploy-minicpm-v-46-on-ios-android-and-harmonyos-platforms).
- 🛠️ **Developer Friendly.**
  MiniCPM-V 4.6 is adapted to [inference frameworks](#use-minicpm-v-46-in-other-inference-and-training-frameworks) such as vLLM, SGLang, llama.cpp, and Ollama, and supports [fine-tuning ecosystems](#use-minicpm-v-46-in-other-inference-and-training-frameworks) such as ms-swift and LLaMA-Factory, so developers can quickly customize the model for new domains and tasks on consumer-grade GPUs. We also provide multiple quantized variants in GGUF, BNB, AWQ, and GPTQ formats.

### Evaluation <!-- omit in toc -->

**Overall Performance (Thinking)**

<p align="center">
  <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/thinking.png" width="90%"></img>
</p>

<details>
<summary>Click to view MiniCPM-V 4.6 (Instruct) performance.</summary>

<p align="center">
  <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/instruct.png" width="90%"></img>
</p>

</details>

<details>
<summary>Click to view MiniCPM-V 4.6 inference efficiency results.</summary>

**High-Concurrency Throughput**

<p align="center">
  <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/throughput.png" width="60%"></img>
</p>

**Single Request TTFT (ms)**

<p align="center">
  <img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/ttft.png" width="60%"></img>
</p>

</details>

### Examples <!-- omit in toc -->

#### Overall

<div align="center">
  <a href="https://www.youtube.com/watch?v=Ch5UG1FoysM"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/video_play.png" width="70%"></a>
</div>

MiniCPM-V 4.6 can be deployed across three mainstream end-side platforms — **iOS, Android, and HarmonyOS**. The clips below are raw, unedited screen recordings on phone devices.

<table align="center">
  <tr>
    <td align="center"><b>iPhone</b><br><sub>iPhone 17 Pro Max</sub></td>
    <td align="center"><b>Android</b><br><sub>Redmi K70</sub></td>
    <td align="center"><b>HarmonyOS</b><br><sub>HUAWEI nova 14</sub></td>
  </tr>
  <tr>
    <td align="center"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/v46_iphone_en_handwriting.gif" width="100%"/></td>
    <td align="center"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/v46_android_en_refraction.gif" width="100%"/></td>
    <td align="center"><img src="https://raw.githubusercontent.com/openbmb/MiniCPM-V/main/assets/minicpmv4.6/v46_harmonyos_en_ticket.gif" width="100%"/></td>
  </tr>
</table>

### Usages

#### Inference with Transformers <!-- omit in toc -->

##### Installation <!-- omit in toc -->

```bash
pip install "transformers[torch]>=5.7.0" torchvision torchcodec
```

> **Note on CUDA compatibility:** `torchcodec` (used for video decoding) may have compatibility issues with certain CUDA versions. For example, `torch>=2.11` bundles CUDA 13.1 by default, while environments with CUDA 12.x may encounter errors such as `RuntimeError: Could not load libtorchcodec`. Two workarounds:
>
> 1. **Replace `torchcodec` with `PyAV`** — supports both image and video inference without CUDA version constraints:
>    ```bash
>    pip install "transformers[torch]>=5.7.0" torchvision av
>    ```
> 2. **Pin the CUDA version** when installing torch to match your environment (e.g. CUDA 12.8):
>    ```bash
>    pip install "transformers>=5.7.0" torchvision torchcodec --index-url https://download.pytorch.org/whl/cu128
>    ```
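
To decide between the two workarounds, it helps to know which CUDA build your installed torch wheel actually targets. A quick check you can run (not part of the original guide; `torch.version.cuda` is a standard PyTorch attribute, printing `None` on CPU-only builds):

```bash
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```

If the printed CUDA version does not match what your driver supports, pinning a matching `--index-url` wheel (workaround 2) is usually the safer choice.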

##### Load Model <!-- omit in toc -->

```python
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "openbmb/MiniCPM-V-4.6-Thinking"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Flash Attention 2 is recommended for better acceleration and memory saving,
# especially in multi-image and video scenarios.
# model = AutoModelForImageTextToText.from_pretrained(
#     model_id,
#     torch_dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )
```

##### Image Inference <!-- omit in toc -->

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"},
            {"type": "text", "text": "What causes this phenomenon?"},
        ],
    }
]

downsample_mode = "16x"  # use downsample_mode="4x" for finer detail

inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_dict=True, return_tensors="pt",
    downsample_mode=downsample_mode,
    max_slice_nums=36,
).to(model.device)

generated_ids = model.generate(**inputs, downsample_mode=downsample_mode, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```

##### Video Inference <!-- omit in toc -->

```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/football.mp4"},
            {"type": "text", "text": "Describe this video in detail. Follow the timeline and focus on on-screen text, interface changes, main actions, and scene changes."},
        ],
    }
]

downsample_mode = "16x"  # use downsample_mode="4x" for finer detail

inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_dict=True, return_tensors="pt",
    downsample_mode=downsample_mode,
    max_num_frames=128,
    stack_frames=1,
    max_slice_nums=1,
    use_image_id=False,
).to(model.device)

generated_ids = model.generate(**inputs, downsample_mode=downsample_mode, max_new_tokens=2048)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text[0])
```

##### Advanced Parameters <!-- omit in toc -->

You can customize image/video processing by passing additional parameters to `apply_chat_template`:

| Parameter | Default | Applies to | Description |
|-----------|---------|------------|-------------|
| `downsample_mode` | `"16x"` | Image & Video | Visual token downsampling. `"16x"` merges tokens for efficiency; `"4x"` keeps 4× more tokens for finer detail. Must also be passed to `generate()`. |
| `max_slice_nums` | `9` | Image & Video | Maximum number of slices when splitting a high-resolution image. Higher values preserve more detail for large images. Recommended: `36` for image, `1` for video. |
| `max_num_frames` | `128` | Video only | Maximum number of main frames sampled from the video. |
| `stack_frames` | `1` | Video only | Total sample points per second. `1` = main frame only (no stacking). `N` (N>1) = 1 main frame + N−1 sub-frames per second; the sub-frames are composited into a grid image and interleaved with main frames. Recommended: `3` or `5`. |
| `use_image_id` | `True` | Image & Video | Whether to prepend `<image_id>N</image_id>` tags before each image/frame placeholder. Recommended: `True` for image, `False` for video. |

> **Note:** `downsample_mode` must be passed to **both** `apply_chat_template` (for correct placeholder count) and `generate` (for the vision encoder). All other parameters only need to be passed to `apply_chat_template`.
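
As a concrete illustration, the sketch below (reusing `processor`, `model`, and the video `messages` from above) applies the table's recommended video settings with denser temporal sampling, and passes `downsample_mode` to both calls as the note requires:

```python
downsample_mode = "4x"  # finer detail, at the cost of more visual tokens

inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_dict=True, return_tensors="pt",
    downsample_mode=downsample_mode,  # needed here for correct placeholder count
    max_num_frames=128,
    stack_frames=3,        # 1 main frame + 2 sub-frames per second
    max_slice_nums=1,      # recommended for video
    use_image_id=False,    # recommended for video
).to(model.device)

# ... and again for the vision encoder at generation time.
generated_ids = model.generate(**inputs, downsample_mode=downsample_mode, max_new_tokens=2048)
```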

##### Serving with `transformers serve` <!-- omit in toc -->

Hugging Face Transformers includes a lightweight OpenAI-compatible server for quick testing and moderate-load deployment.

```bash
pip install "transformers[serving]>=5.7.0"
```

Start the server:

```bash
transformers serve openbmb/MiniCPM-V-4.6-Thinking --port 8000 --host 0.0.0.0 --continuous-batching
```

Send a request:

```bash
curl -s http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "openbmb/MiniCPM-V-4.6-Thinking",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"}},
        {"type": "text", "text": "What causes this phenomenon?"}
      ]
    }]
  }'
```
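
Because the endpoint is OpenAI-compatible, you can also call it from the official `openai` Python client. A minimal sketch, assuming `pip install openai` and the server running locally (the `api_key` value is an arbitrary placeholder, assumed unused by the local server):

```python
from openai import OpenAI

# Point the client at the local `transformers serve` endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="openbmb/MiniCPM-V-4.6-Thinking",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"}},
            {"type": "text", "text": "What causes this phenomenon?"},
        ],
    }],
)
print(response.choices[0].message.content)
```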

#### Handling Escaped Newlines in Model Outputs <!-- omit in toc -->

In some cases, the model may output the escape sequence `\n` as a literal two-character string rather than an actual newline. To render such text correctly, especially in UI layers, you can use the following utility function. It replaces literal `\n` with real newlines while protecting spans where `\n` carries semantic meaning.

**Utility Function:**

```python
import re

_PATTERN = re.compile(
    r'(```[\s\S]*?```'        # fenced code blocks
    r'|`[^`]+`'               # inline code
    r'|\$\$[\s\S]*?\$\$'      # display math
    r'|\$[^$]+\$'             # inline math
    r'|\\\([\s\S]*?\\\)'      # \(...\)
    r'|\\\[[\s\S]*?\\\]'      # \[...\]
    r')'
    r'|(?<!\\)(?:\\r\\n|\\[nr])'
)

def normalize_response_text(text: str) -> str:
    """
    Lightweight post-processing: converts literal '\\n' to actual newlines,
    while protecting code blocks, inline code, and LaTeX commands.
    """
    if not isinstance(text, str) or "\\" not in text:
        return text
    return _PATTERN.sub(lambda m: m.group(1) or '\n', text)
```
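
A quick usage check (illustrative strings, not from the original doc). The first literal `\n` becomes a real newline, while the one inside the inline-code span is matched by the protected group and left untouched:

```python
raw = "Step 1\\nStep 2 uses `a\\nb` as a literal key"
print(normalize_response_text(raw))
# Step 1
# Step 2 uses `a\nb` as a literal key
```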

#### Deploy MiniCPM-V 4.6 on iOS, Android, and HarmonyOS Platforms <!-- omit in toc -->

We have adapted MiniCPM-V 4.6 for deployment on **iOS, Android, and HarmonyOS**, with **all edge adaptation code fully open-sourced**. Developers can reproduce the on-device experience in just a few steps. Visit our [edge deployment repository](https://github.com/OpenBMB/MiniCPM-V-edge-demo) for platform-specific build guides, or go to the [download page](https://github.com/OpenBMB/MiniCPM-V-edge-demo/blob/main/DOWNLOAD.md) to try pre-built apps directly.

#### Use MiniCPM-V 4.6 in Other Inference and Training Frameworks <!-- omit in toc -->

MiniCPM-V 4.6 supports multiple inference and training frameworks. Below are quick-start commands for each. For full details, see our [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook).

<details>
<summary><b>vLLM</b> — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-v4_6_vllm.md">Full Guide</a></summary>

```bash
vllm serve openbmb/MiniCPM-V-4.6-Thinking \
  --port 8000 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --default-chat-template-kwargs '{"enable_thinking": true}'
```

> **Note:** `--enable-auto-tool-choice` and `--tool-call-parser qwen3_coder` enable tool/function calling support. If you don't need tool use, you can omit these flags and simply run `vllm serve openbmb/MiniCPM-V-4.6-Thinking`.

```bash
curl -s http://localhost:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
  "model": "openbmb/MiniCPM-V-4.6-Thinking",
  "messages": [{"role": "user", "content": [
    {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"}},
    {"type": "text", "text": "What causes this phenomenon?"}
  ]}]
}'
```

Tool calling example:

```bash
curl -s http://localhost:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
  "model": "openbmb/MiniCPM-V-4.6-Thinking",
  "messages": [{"role": "user", "content": [
    {"type": "text", "text": "What is the weather in Beijing?"}
  ]}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string", "description": "City name"}
        },
        "required": ["location"]
      }
    }
  }]
}'
```

</details>

<details>
<summary><b>SGLang</b> — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/minicpm-v4_6_sglang.md">Full Guide</a></summary>

```bash
python -m sglang.launch_server --model openbmb/MiniCPM-V-4.6-Thinking --port 30000
```

```bash
curl -s http://localhost:30000/v1/chat/completions -H 'Content-Type: application/json' -d '{
  "model": "openbmb/MiniCPM-V-4.6-Thinking",
  "messages": [{"role": "user", "content": [
    {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"}},
    {"type": "text", "text": "What causes this phenomenon?"}
  ]}]
}'
```

</details>

<details>
<summary><b>llama.cpp</b> — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-v4_6_llamacpp.md">Full Guide</a></summary>

```bash
llama-server -m MiniCPM-V-4.6-Q4_K_M.gguf --port 8080
```
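
Depending on your llama.cpp build, the vision tower may ship as a separate multimodal projector GGUF that you pass via llama.cpp's `--mmproj` flag; the filename below is a placeholder, so check the Full Guide for the exact artifacts:

```bash
llama-server -m MiniCPM-V-4.6-Q4_K_M.gguf --mmproj mmproj-model-f16.gguf --port 8080
```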

```bash
curl -s http://localhost:8080/v1/chat/completions -H 'Content-Type: application/json' -d '{
  "model": "MiniCPM-V-4.6",
  "messages": [{"role": "user", "content": [
    {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/openbmb/DemoCase/resolve/main/refract.png"}},
    {"type": "text", "text": "What causes this phenomenon?"}
  ]}]
}'
```

</details>

<details>
<summary><b>Ollama</b> — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-v4_6_ollama.md">Full Guide</a></summary>

```bash
ollama run minicpm-v-4.6-thinking
```

In the interactive session, paste an image path or URL directly to chat with the model.

</details>

<details>
<summary><b>LLaMA-Factory</b> (Fine-tuning) — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/llamafactory_minicpmv46.md">Full Guide</a></summary>

```bash
llamafactory-cli train examples/train_lora/minicpmv4_6_lora_sft.yaml
```

</details>

<details>
<summary><b>ms-swift</b> (Fine-tuning) — <a href="https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/swift_minicpmv46.md">Full Guide</a></summary>

```bash
swift sft --model_type minicpm-v-4_6 --dataset <your-dataset>
```

</details>

## License

#### Model License
* The MiniCPM-o/V model weights and code are open-sourced under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM-V/blob/main/LICENSE) license.

#### Statement
* As MLLMs, MiniCPM-o/V models generate content by learning from a large volume of multimodal corpora; they cannot comprehend, express personal opinions, or make value judgements. Anything generated by MiniCPM-o/V models does not represent the views and positions of the model developers.
* We will not be liable for any problems arising from the use of MiniCPM-o/V models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misguidance, misuse, or dissemination of the models.

## Technical Reports and Key Techniques Papers

👏 Welcome to explore the key techniques of MiniCPM-o/V and the other multimodal projects of our team:

**Technical Reports:** [MiniCPM-o 4.5](https://huggingface.co/papers/2604.27393) | [MiniCPM-V 4.5](https://arxiv.org/abs/2509.18154) | [MiniCPM-o 2.6](https://openbmb.notion.site/MiniCPM-o-2-6-A-GPT-4o-Level-MLLM-for-Vision-Speech-and-Multimodal-Live-Streaming-on-Your-Phone-185ede1b7a558042b5d5e45e6b237da9) | [MiniCPM-Llama3-V 2.5](https://arxiv.org/abs/2408.01800) | [MiniCPM-V 2.0](https://openbmb.vercel.app/minicpm-v-2)

**Other Multimodal Projects:** [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLPR](https://github.com/OpenBMB/RLPR) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)

## Citation <!-- omit in toc -->

If you find our model/code/paper helpful, please consider citing our papers 📝 and starring us ⭐️!

```bib
@misc{cui2026minicpmo45realtimefullduplex,
  title={MiniCPM-o 4.5: Towards Real-Time Full-Duplex Omni-Modal Interaction},
  author={Junbo Cui and Bokai Xu and Chongyi Wang and Tianyu Yu and Weiyue Sun and Yingjing Xu and Tianran Wang and Zhihui He and Wenshuo Ma and Tianchi Cai and others},
  year={2026},
  url={https://arxiv.org/abs/2604.27393},
}

@misc{yu2025minicpmv45cookingefficient,
  title={MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe},
  author={Tianyu Yu and Zefan Wang and Chongyi Wang and Fuwei Huang and Wenshuo Ma and Zhihui He and Tianchi Cai and Weize Chen and Yuxiang Huang and Yuanqian Zhao and others},
  year={2025},
  url={https://arxiv.org/abs/2509.18154},
}

@article{yao2024minicpm,
  title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
  author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
  journal={arXiv preprint arXiv:2408.01800},
  year={2024}
}
```