Add evaluation results for the Qwen3.5 small model series
README.md
CHANGED
```diff
@@ -138,6 +138,7 @@ We evaluated several of the latest popular MLLMs, including both closed-source a
 | GPT-5.2(high) | β | Proprietary | 20251211 | 0.9430 | 0.9462 | 0.9446 |
 | Seed1.8-Think | β | Proprietary | 20251218 | 0.9325 | 0.9403 | 0.9364 |
 | Gemini-3-Pro-preview | β | Proprietary | 20251119 | 0.9318 | 0.9403 | 0.9361 |
+| Qwen3.5-9B | β | Open | - | 0.9298 | 0.9246 | 0.9272 |
 | GPT-5(high) | β | Proprietary | 20250807 | 0.9279 | 0.9246 | 0.9263 |
 | Gemini-2.5-Pro | β | Proprietary | 20250617 | 0.9095 | 0.9423 | 0.9259 |
 | GPT-5.1(high) | β | Proprietary | 20251113 | 0.9213 | 0.9220 | 0.9216 |
@@ -146,6 +147,7 @@ We evaluated several of the latest popular MLLMs, including both closed-source a
 | Qwen3-VL-32B-Think | β | Open | - | 0.9128 | 0.9161 | 0.9144 |
 | GPT-5.1(medium) | β | Proprietary | 20251113 | 0.9108 | 0.9141 | 0.9125 |
 | GPT-5-mini | β | Proprietary | 20250807 | 0.9108 | 0.9128 | 0.9118 |
+| Qwen3.5-4B | β | Open | - | 0.9056 | 0.9180 | 0.9118 |
 | Seed1.5-VL-Think | β | Proprietary | 20250428 | 0.9056 | 0.9161 | 0.9109 |
 | GPT o3 | β | Proprietary | 20250416 | 0.9056 | 0.9115 | 0.9086 |
 | GPT o4 mini | β | Proprietary | 20250416 | 0.9062 | 0.9075 | 0.9069 |
@@ -174,7 +176,9 @@ We evaluated several of the latest popular MLLMs, including both closed-source a
 | Qwen3-VL-4B-Instruct | Γ | Open | - | 0.7023 | 0.7023 | 0.7023 |
 | Qwen3-VL-2B-Think | β | Open | - | 0.6780 | 0.6708 | 0.6744 |
 | Qwen2.5-VL-3B | Γ | Open | - | 0.6748 | 0.6643 | 0.6696 |
+| Qwen3.5-2B | β | Open | - | 0.6597 | 0.6616 | 0.6607 |
 | GPT-4o mini | Γ | Proprietary | 20240718 | 0.6636 | 0.6066 | 0.6351 |
+| Qwen3.5-0.8B | β | Open | - | 0.6007 | 0.6066 | 0.6036 |
 | Qwen3-VL-2B-Instruct | Γ | Open | - | 0.5711 | 0.5928 | 0.5820 |
 | *Choice longest answer* | - | - | - | 0.4262 | 0.4525 | 0.4394 |
 | Deepseek-VL2 | Γ | Open | - | 0.4426 | 0.4216 | 0.4321 |
@@ -196,6 +200,7 @@ We also conducted separate evaluations for different task types (in RxnBench-en)
 | GPT-5.2(high) | β | Proprietary | 20251211 | 0.9542 | 0.9643 | 0.9662 | 0.9491 | 0.9492 | 0.7910 |
 | Seed1.8-Think | β | Proprietary | 20251218 | 0.9331 | 0.9484 | 0.9527 | 0.9444 | 0.9492 | 0.8284 |
 | Gemini-3-Pro-preview | β | Proprietary | 20251119 | 0.9648 | 0.9246 | 0.9527 | 0.9398 | 0.9322 | 0.7463 |
+| Qwen3.5-9B | β | Open | - | 0.9542 | 0.9444 | 0.9426 | 0.9398 | 0.9661 | 0.7388 |
 | GPT-5(high) | β | Proprietary | 20250807 | 0.9313 | 0.9444 | 0.9527 | 0.9167 | 0.9661 | 0.8358 |
 | Gemini-2.5-Pro | β | Proprietary | 20250617 | 0.9331 | 0.9246 | 0.9459 | 0.9491 | 0.9322 | 0.6343 |
 | GPT-5.1(high) | β | Proprietary | 20251113 | 0.9243 | 0.9524 | 0.9426 | 0.9167 | 0.9661 | 0.7910 |
@@ -204,6 +209,7 @@ We also conducted separate evaluations for different task types (in RxnBench-en)
 | Qwen3-VL-32B-Think | β | Open | - | 0.9296 | 0.9405 | 0.9426 | 0.9259 | 0.9153 | 0.7015 |
 | GPT-5.1(medium) | β | Proprietary | 20251113 | 0.9243 | 0.9365 | 0.9426 | 0.9167 | 0.9492 | 0.7090 |
 | GPT-5-mini | β | Proprietary | 20250807 | 0.9225 | 0.9325 | 0.9257 | 0.9259 | 0.9831 | 0.7388 |
+| Qwen3.5-4B | β | Open | - | 0.9366 | 0.9206 | 0.9257 | 0.9398 | 0.8983 | 0.6493 |
 | Seed1.5-VL-Think | β | Proprietary | 20250428 | 0.8996 | 0.9365 | 0.9358 | 0.9074 | 0.9153 | 0.8060 |
 | GPT o3 | β | Proprietary | 20250416 | 0.9313 | 0.9325 | 0.9223 | 0.8981 | 0.9492 | 0.7090 |
 | GPT o4 mini | β | Proprietary | 20250416 | 0.6391 | 0.7302 | 0.7500 | 0.6667 | 0.6271 | 0.4627 |
@@ -232,7 +238,9 @@ We also conducted separate evaluations for different task types (in RxnBench-en)
 | Qwen3-VL-4B-Instruct | Γ | Open | - | 0.6708 | 0.7302 | 0.7804 | 0.7222 | 0.6610 | 0.5970 |
 | Qwen3-VL-2B-Think | β | Open | - | 0.7342 | 0.6706 | 0.7128 | 0.7083 | 0.6102 | 0.3657 |
 | Qwen2.5-VL-3B | Γ | Open | - | 0.6426 | 0.7381 | 0.7635 | 0.6898 | 0.6610 | 0.4776 |
+| Qwen3.5-2B | β | Open | - | 0.6620 | 0.7103 | 0.7027 | 0.7083 | 0.6271 | 0.3955 |
 | GPT-4o mini | Γ | Proprietary | 20240718 | 0.6391 | 0.7302 | 0.7500 | 0.6667 | 0.6271 | 0.4627 |
+| Qwen3.5-0.8B | β | Open | - | 0.6215 | 0.6310 | 0.6486 | 0.6620 | 0.5254 | 0.2836 |
 | Qwen3-VL-2B-Instruct | Γ | Open | - | 0.5405 | 0.6190 | 0.6318 | 0.6250 | 0.6102 | 0.3731 |
 | Deepseek-VL2 | Γ | Open | - | 0.4120 | 0.5040 | 0.4899 | 0.4907 | 0.3729 | 0.3060 |
```
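The diff does not show the table's header row, but across the visible rows of the overall table the final score column equals the arithmetic mean of the two preceding columns (plausibly the en/zh splits, given the RxnBench-en mention). A minimal sketch, assuming that column semantics, that checks the four newly added Qwen3.5 rows for internal consistency:

```python
# Consistency check for the newly added Qwen3.5 rows in the overall table.
# Assumption (not stated in the diff): the last column is the arithmetic
# mean of the two preceding score columns, rounded to 4 decimals.
NEW_ROWS = [
    # (model, score_a, score_b, reported_avg)
    ("Qwen3.5-9B",   0.9298, 0.9246, 0.9272),
    ("Qwen3.5-4B",   0.9056, 0.9180, 0.9118),
    ("Qwen3.5-2B",   0.6597, 0.6616, 0.6607),
    ("Qwen3.5-0.8B", 0.6007, 0.6066, 0.6036),
]

for name, a, b, reported in NEW_ROWS:
    # Allow half a unit in the last reported decimal place for rounding.
    assert abs((a + b) / 2 - reported) < 1e-4, f"{name}: average mismatch"
```

The same check passes for the pre-existing rows (e.g. GPT-5.2(high): (0.9430 + 0.9462) / 2 = 0.9446), which is what motivates the assumption.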