# Stockmark-DocReasoner-Qwen2.5-VL-32B

## Introduction
Stockmark-DocReasoner-Qwen2.5-VL-32B is a vision-language model specialized for Japanese document understanding and reasoning, particularly in the manufacturing domain.
Built on top of Qwen2.5-VL-32B-Instruct, the model is further trained to acquire:
- Explicit Chain-of-Thought (CoT) reasoning ("thinking") capability
- Multi-modal understanding across documents, charts, tables, and diagrams
- Domain-specific knowledge for manufacturing and business documents
It is designed to extract implicit knowledge from visually rich and structurally complex documents such as:
- Technical documentation
- Engineering design drawings
- Experimental reports
- Business documents
This project is supported by GENIAC.
## Evaluation

We evaluated Japanese document understanding performance on the following three benchmarks:

- JA-Business-Doc-RQ-Bench
- JDocQA
- BusinessSlideVQA

All benchmark evaluations were performed with llm-jp-eval-mm, using an LLM-as-a-judge score as the comparison metric (with gpt-4o-2024-11-20 as the judge model). Additionally, given the practical requirement for answer accuracy in business-domain VQA, we applied a binary scoring criterion when evaluating JA-Business-Doc-RQ-Bench and BusinessSlideVQA, and redesigned the judging prompt to incorporate specific requirements (see JA-Business-Doc-RQ-Bench for details).
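The binary scoring described above can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the actual llm-jp-eval-mm implementation: each judge verdict is reduced to 1 (correct) or 0 (incorrect), and the benchmark score is the mean expressed as a percentage.

```python
# Sketch of binary LLM-as-a-judge aggregation (hypothetical helpers;
# the real llm-jp-eval-mm pipeline differs in detail).

def parse_binary_verdict(judge_output: str) -> int:
    """Map a judge model's free-text verdict to 1 (correct) or 0 (incorrect)."""
    text = judge_output.lower()
    # Check "incorrect" first, since it contains "correct" as a substring.
    if "incorrect" in text:
        return 0
    return 1 if "correct" in text else 0

def aggregate_score(verdicts: list[int]) -> float:
    """Benchmark score: percentage of answers judged correct."""
    return 100.0 * sum(verdicts) / len(verdicts)

verdicts = [parse_binary_verdict(v) for v in ["Correct", "Incorrect", "Correct", "Correct"]]
print(aggregate_score(verdicts))  # 75.0
```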
### JA-Business-Doc-RQ-Bench

Scores are broken down by answer type (Yes/No, Factoid, Numerical) and image type (Chart, Document, Table, Diagram).

| Model | Overall | Yes/No | Factoid | Numerical | Chart | Document | Table | Diagram |
|---|---|---|---|---|---|---|---|---|
| gpt-5.2-2025-12-11 (reasoning high) | 95.20 | 93.22 | 96.55 | 95.54 | 98.25 | 96.61 | 94.64 | 91.23 |
| Qwen3-VL-32B-Thinking | 94.32 | 89.83 | 100 | 93.75 | 100 | 96.61 | 96.43 | 84.21 |
| Stockmark-DocReasoner-Qwen2.5-VL-32B | 85.15 | 88.14 | 87.93 | 82.14 | 78.95 | 96.61 | 82.14 | 82.46 |
| Qwen3-VL-32B-Instruct | 83.84 | 69.49 | 96.55 | 84.82 | 87.72 | 88.14 | 78.57 | 80.70 |
| Qwen2.5-VL-32B-Instruct | 79.04 | 72.88 | 81.03 | 81.25 | 82.46 | 86.44 | 67.86 | 78.95 |
| gpt-4o-2024-11-20 | 59.39 | 67.80 | 51.72 | 58.93 | 56.14 | 62.71 | 55.36 | 63.16 |
### JDocQA

| Model | LLM (judge score) | Acc |
|---|---|---|
| Stockmark-DocReasoner-Qwen2.5-VL-32B | 4.0 | 0.31 |
| Qwen3-VL-32B-Thinking | 4.0 | 0.29 |
| Qwen3-VL-32B-Instruct | 4.0 | 0.25 |
| Qwen2.5-VL-32B-Instruct | 3.6 | 0.25 |
| gpt-4o-2024-11-20 | 3.6 | 0.22 |
### BusinessSlideVQA

| Model | Acc |
|---|---|
| Stockmark-DocReasoner-Qwen2.5-VL-32B | 77.27 |
| Qwen3-VL-32B-Thinking | 85.91 |
| Qwen3-VL-32B-Instruct | 82.27 |
| Qwen2.5-VL-32B-Instruct | 68.64 |
| gpt-4o-2024-11-20 | 63.18 |
## Quickstart

### Inference using 🤗Transformers

Please make sure you have transformers>=4.49.0 installed.

```shell
pip install "transformers>=4.49.0" accelerate torchvision qwen-vl-utils flash-attn
```

The following code snippet demonstrates how to use Stockmark-DocReasoner-Qwen2.5-VL-32B with pure transformers.
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

# Load the model in bfloat16 with FlashAttention 2
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "stockmark/Stockmark-DocReasoner-Qwen2.5-VL-32B",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("stockmark/Stockmark-DocReasoner-Qwen2.5-VL-32B")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "assets/demo.png",
            },
            # "In the survey responses from employees under 30, which 'usage frequency' had the highest share?"
            {"type": "text", "text": "30歳未満の社員に対するアンケート回答結果で、最も割合が高かった「使用頻度」は何ですか?"},
        ],
    }
]

# Build the chat prompt and collect the vision inputs
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Generate, then strip the prompt tokens from the output before decoding
generated_ids = model.generate(**inputs, max_new_tokens=1024)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
### Inference using vLLM

The following code snippet demonstrates how to use Stockmark-DocReasoner-Qwen2.5-VL-32B with vLLM.
```python
import os

from transformers import AutoProcessor
from qwen_vl_utils import process_vision_info
from vllm import LLM, SamplingParams

# Required for multi-process workers with CUDA
os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"


def main():
    llm = LLM(
        model="stockmark/Stockmark-DocReasoner-Qwen2.5-VL-32B",
        trust_remote_code=True,
        dtype="bfloat16",
    )
    processor = AutoProcessor.from_pretrained("stockmark/Stockmark-DocReasoner-Qwen2.5-VL-32B")

    message = [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "image": "assets/demo.png",
                },
                # "In the survey responses from employees under 30, which 'usage frequency' had the highest share?"
                {"type": "text", "text": "30歳未満の社員に対するアンケート回答結果で、最も割合が高かった「使用頻度」は何ですか?"},
            ],
        }
    ]

    # Build the chat prompt and collect the vision inputs
    texts = processor.apply_chat_template(
        message, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(message)
    mm_data = {}
    if image_inputs is not None:
        mm_data["image"] = image_inputs
    if video_inputs is not None:
        mm_data["video"] = video_inputs
    inputs = {
        "prompt": texts,
        "multi_modal_data": mm_data,
    }

    # Greedy decoding
    sampling_params = SamplingParams(
        temperature=0,
        max_tokens=1024,
    )
    outputs = llm.generate(
        inputs,
        sampling_params=sampling_params,
    )
    answer = outputs[0].outputs[0].text
    print(answer)


if __name__ == "__main__":
    main()
```
## Output Format

### Default Thinking Mode

Stockmark-DocReasoner-Qwen2.5-VL-32B outputs structured reasoning by default:

```
<think>
...reasoning process...
</think>
<answer>
...final answer...
</answer>
```
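When only the final answer is needed downstream, the `<think>`/`<answer>` sections can be separated with a small regex helper. This is a convenience sketch written for this README (the helper name is ours, not part of the model's API):

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Split a default-mode response into (reasoning, answer).

    Convenience helper for this README, not part of the model's API.
    Falls back to treating the whole output as the answer if the
    <answer> tag is absent.
    """
    think = re.search(r"<think>(.*?)</think>", output, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", output, re.DOTALL)
    if answer is None:
        return "", output.strip()
    return (think.group(1).strip() if think else "", answer.group(1).strip())

reasoning, answer = split_reasoning(
    "<think>\nThe chart shows daily use has the highest share.\n</think>\n<answer>\n毎日\n</answer>"
)
print(answer)  # 毎日
```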
### Special Inference Modes

In addition to the default reasoning output, Stockmark-DocReasoner-Qwen2.5-VL-32B supports prompt-based task switching to enable fast, structured inference for downstream applications.

- STMK HTML: convert the input document into a structured HTML representation.
- STMK Markdown: convert documents into Markdown format.
- STMK JSON: extract document content into structured JSON.
- STMK SMILES: extract chemical structures from diagrams into SMILES format.
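A mode-switching request can be assembled the same way as the Quickstart examples. The sketch below assumes the mode keyword (e.g. "STMK Markdown") is supplied as the text prompt alongside the document image; verify the exact trigger phrasing against the official examples before relying on it.

```python
# Sketch of invoking a special inference mode, assuming the mode keyword
# is passed as the user text prompt (an assumption; the exact invocation
# may differ from the model's official examples).

def build_mode_message(image_path: str, mode: str) -> list[dict]:
    """Build a single-turn chat message requesting a special inference mode."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": mode},
            ],
        }
    ]

msg = build_mode_message("assets/demo.png", "STMK Markdown")
print(msg[0]["content"][1]["text"])  # STMK Markdown
```

The resulting `msg` list can be passed to `processor.apply_chat_template` exactly as in the Quickstart snippets.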
## Citation

```bibtex
@misc{stockmark_docreasoner_2026,
  title={Stockmark-DocReasoner-Qwen2.5-VL-32B},
  author={Stockmark Inc.},
  year={2026}
}
```