pretty_name: MWS Vision Bench
dataset_name: mws-vision-bench
language:
- ru
license: cc-by-4.0
tags:
- benchmark
- multimodal
- ocr
- kie
- grounding
- vlm
- business
- russian
- document
- visual-question-answering
- document-question-answering
task_categories:
- visual-question-answering
- document-question-answering
size_categories:
- 1K<n<10K
annotations_creators:
- expert-generated
dataset_creators:
- MTS AI Research
papers:
- title: >-
MWS Vision Bench: The First Russian Business-OCR Benchmark for Multimodal
Models
authors:
- MTS AI Research Team
year: 2025
status: in preparation
note: Paper coming soon
homepage: https://huggingface.co/datasets/MTSAIR/MWS-Vision-Bench
repository: https://github.com/mts-ai/MWS-Vision-Bench
organization: MTSAIR
dataset_info:
- config_name: default
features:
- name: image
dtype: image
- name: id
dtype: string
- name: type
dtype: string
- name: dataset_name
dtype: string
- name: question
dtype: string
- name: answers
list: string
splits:
- name: train
num_bytes: 262810803
num_examples: 1302
download_size: 238315183
dataset_size: 262810803
- config_name: en
features:
- name: id
dtype: string
- name: type
dtype: string
- name: dataset_name
dtype: string
- name: question
dtype: string
- name: answers
list: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 252074099
num_examples: 1302
download_size: 251760293
dataset_size: 252074099
- config_name: zh
features:
- name: id
dtype: string
- name: type
dtype: string
- name: dataset_name
dtype: string
- name: question
dtype: string
- name: answers
list: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 252019805
num_examples: 1302
download_size: 251756853
dataset_size: 252019805
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: en
data_files:
- split: train
path: en/train-*
- config_name: zh
data_files:
- split: train
path: zh/train-*
# MWS-Vision-Bench
🇷🇺 Русскоязычное описание ниже / Russian summary below.
MWS Vision Bench — the first Russian-language business-OCR benchmark designed for multimodal large language models (MLLMs).
This is the validation split, publicly available for open evaluation and comparison.
🧩 Paper is coming soon.
## Language Configs
This dataset provides three Hugging Face configs for the same benchmark split:
- `default` — Russian questions
- `en` — English questions
- `zh` — Chinese questions
```python
from datasets import load_dataset

vision_ru = load_dataset("MTSAIR/MWS-Vision-Bench")
vision_en = load_dataset("MTSAIR/MWS-Vision-Bench", "en")
vision_zh = load_dataset("MTSAIR/MWS-Vision-Bench", "zh")
```
🔗 Official repository: github.com/mts-ai/MWS-Vision-Bench
🏢 Organization: MTSAIR on Hugging Face
📰 Article on Habr (in Russian): “MWS Vision Bench — the first Russian business-OCR benchmark”
## Update — February 16, 2026

**New: VQA category update**

We updated the Reasoning VQA (ru) category to improve evaluation robustness. Only the questions and answers were revised; the images remain unchanged. As a result, the VQA and Overall columns are not directly comparable with results reported before February 16, 2026.

This update improves the reliability of reasoning-based evaluation while keeping the benchmark structure intact.
## 📊 Dataset Statistics
- Total samples: 1,302
- Unique images: 400
- Task types: 5
## 🖼️ Dataset Preview
Examples of diverse document types in the benchmark: business documents, handwritten notes, technical drawings, receipts, and more.
## 📁 Repository Structure

```
MWS-Vision-Bench/
├── metadata.jsonl          # Dataset annotations
├── images/                 # Image files organized by category
│   ├── business/
│   │   ├── scans/
│   │   ├── sheets/
│   │   ├── plans/
│   │   └── diagramms/
│   └── personal/
│       ├── hand_documents/
│       ├── hand_notebooks/
│       └── hand_misc/
└── README.md               # This file
```
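The category folders above can be enumerated with `pathlib`. A minimal self-contained sketch (it recreates the layout in a temporary directory purely for illustration; with a real checkout you would point `root` at the `images/` folder):

```python
from pathlib import Path
import tempfile

# Recreate the category layout from the tree above so the sketch runs
# anywhere; replace `root` with the actual images/ path in a checkout.
root = Path(tempfile.mkdtemp()) / "images"
for sub in ["business/scans", "business/sheets", "business/plans",
            "business/diagramms", "personal/hand_documents",
            "personal/hand_notebooks", "personal/hand_misc"]:
    (root / sub).mkdir(parents=True)

# Collect the second-level category directories, e.g. "business/scans"
categories = sorted(p.relative_to(root).as_posix()
                    for p in root.glob("*/*") if p.is_dir())
print(len(categories))  # 7
```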
## 📋 Data Format

Each line in `metadata.jsonl` contains one JSON object:

```jsonc
{
  "file_name": "images/image_0.jpg",  // Path to the image
  "id": "1",                          // Unique identifier
  "type": "text grounding ru",        // Task type
  "dataset_name": "business",         // Subdataset name
  "question": "...",                  // Question in Russian
  "answers": ["398", "65", ...]       // List of valid answers (as strings)
}
```
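Each line is standard JSON, so the file can be read with nothing beyond the standard library. A short sketch (the record values here are illustrative, not taken from the dataset):

```python
import json

# One illustrative line in the metadata.jsonl format described above.
line = (
    '{"file_name": "images/image_0.jpg", "id": "1", '
    '"type": "text grounding ru", "dataset_name": "business", '
    '"question": "...", "answers": ["398", "65"]}'
)

record = json.loads(line)
print(record["type"])        # text grounding ru
print(record["answers"])     # ['398', '65']
```

With a local copy, the same pattern applies per line: `for line in open("metadata.jsonl", encoding="utf-8"): record = json.loads(line)`.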
## 🎯 Task Types

| Task | Description | Count |
|---|---|---|
| `document parsing ru` | Parsing structured documents | 243 |
| `full-page OCR ru` | End-to-end OCR on full pages | 144 |
| `key information extraction ru` | Extracting key fields | 119 |
| `reasoning VQA ru` | Visual reasoning in Russian | 400 |
| `text grounding ru` | Text–region alignment | 396 |
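The per-task counts above sum to the 1,302 samples reported in the statistics; the same tally can be rebuilt from a loaded split by counting the `type` field with `collections.Counter`. A sketch using the table's figures directly:

```python
from collections import Counter

# Per-task counts from the table above; with a loaded split this would be
# Counter(item["type"] for item in dataset).
task_counts = Counter({
    "document parsing ru": 243,
    "full-page OCR ru": 144,
    "key information extraction ru": 119,
    "reasoning VQA ru": 400,
    "text grounding ru": 396,
})

total = sum(task_counts.values())
print(total)  # 1302
```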
## 📊 Leaderboard (Validation Set)
Top models evaluated on this validation dataset:
| Model | Overall | img→text | img→markdown | Grounding | KIE (JSON) | VQA |
|---|---|---|---|---|---|---|
| Claude-4.6-Opus | 0.704 | 0.841 | 0.748 | 0.168 | 0.852 | 0.908 |
| Gemini-2.5-pro | 0.690 | 0.840 | 0.717 | 0.070 | 0.888 | 0.935 |
| Gemini-3-flash-preview | 0.681 | 0.836 | 0.724 | 0.051 | 0.845 | 0.950 |
| Gemini-2.5-flash | 0.672 | 0.886 | 0.729 | 0.042 | 0.825 | 0.879 |
| Claude-4.5-Opus | 0.670 | 0.809 | 0.720 | 0.131 | 0.799 | 0.889 |
| Claude-4.5-Sonnet | 0.669 | 0.741 | 0.660 | 0.459 | 0.727 | 0.759 |
| GPT-5.2 | 0.663 | 0.799 | 0.656 | 0.173 | 0.855 | 0.835 |
| Alice AI VLM dev | 0.662 | 0.881 | 0.777 | 0.063 | 0.747 | 0.841 |
| GPT-4.1-mini | 0.659 | 0.863 | 0.735 | 0.093 | 0.750 | 0.853 |
| Cotype VL (32B 8 bit) | 0.649 | 0.802 | 0.754 | 0.267 | 0.683 | 0.737 |
| GPT-5-mini | 0.639 | 0.782 | 0.678 | 0.117 | 0.774 | 0.843 |
| Qwen3-VL-235B-A22B-Instruct | 0.623 | 0.812 | 0.668 | 0.050 | 0.755 | 0.830 |
| Qwen2.5-VL-72B-Instruct | 0.621 | 0.847 | 0.706 | 0.173 | 0.615 | 0.765 |
| GPT-5.1 | 0.588 | 0.716 | 0.680 | 0.092 | 0.670 | 0.783 |
| Qwen3-VL-8B-Instruct | 0.584 | 0.780 | 0.700 | 0.084 | 0.592 | 0.766 |
| Qwen3-VL-32B-Instruct | 0.582 | 0.730 | 0.631 | 0.056 | 0.708 | 0.784 |
| GPT-4.1 | 0.574 | 0.692 | 0.681 | 0.093 | 0.624 | 0.779 |
| Qwen3-VL-4B-Instruct | 0.515 | 0.699 | 0.702 | 0.061 | 0.506 | 0.607 |
Scale: 0.0 - 1.0 (higher is better)
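The card does not publish the official scoring code, but since each sample carries a list of valid `answers`, one plausible per-sample metric is normalized exact match against any reference. A hedged sketch of that idea (not the benchmark's actual metric):

```python
def exact_match(prediction: str, answers: list[str]) -> float:
    """Score 1.0 if the normalized prediction equals any reference answer.

    Illustrative scorer only; the official MWS Vision Bench metric may
    differ (e.g. use fuzzy or structure-aware matching per task type).
    """
    def norm(s: str) -> str:
        return " ".join(s.strip().lower().split())

    return 1.0 if norm(prediction) in {norm(a) for a in answers} else 0.0

print(exact_match("398", ["398", "65"]))   # 1.0
print(exact_match(" 65 ", ["398", "65"]))  # 1.0
print(exact_match("64", ["398", "65"]))    # 0.0
```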
📝 Submit your model: To evaluate on the private test set, contact g.gaikov@mts.ai
## 💻 Usage Example

```python
from datasets import load_dataset

# Load the default (Russian) config; the public validation data lives
# in the "train" split
dataset = load_dataset("MTSAIR/MWS-Vision-Bench", split="train")

# Example iteration
for item in dataset:
    print(f"ID: {item['id']}")
    print(f"Type: {item['type']}")
    print(f"Question: {item['question']}")
    print(f"Answers: {item['answers']}")
    image = item["image"]  # decoded automatically as a PIL image
```
## 📄 License
CC BY 4.0 (as declared in the dataset metadata)
© 2024 MTS AI
See LICENSE for details.
## 📚 Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{mwsvisionbench2025,
  title={MWS-Vision-Bench: Russian Multimodal OCR Benchmark},
  author={MTS AI Research},
  organization={MTSAIR},
  year={2025},
  url={https://huggingface.co/datasets/MTSAIR/MWS-Vision-Bench},
  note={Paper coming soon}
}
```
## 🤝 Contacts
- Team: MTSAIR Research
- Email: g.gaikov@mts.ai
## 🇷🇺 Краткое описание
MWS Vision Bench — первый русскоязычный бенчмарк для бизнес-OCR в эпоху мультимодальных моделей.
Он включает 1302 примера и 5 типов задач, отражающих реальные сценарии обработки бизнес-документов и рукописных данных.
Датасет создан для оценки и развития мультимодальных LLM в русскоязычном контексте.
📄 Научная статья в процессе подготовки (paper coming soon).
Made with ❤️ by MTS AI Research Team
