Stockmark-DocReasoner-Qwen2.5-VL-32B-FP8-dynamic


Introduction

This repository contains an FP8-quantized version of the Stockmark-DocReasoner-Qwen2.5-VL-32B model. The quantization reduces the precision of weights and activations from 16 bits to 8, which in theory cuts GPU memory requirements for weights by roughly 50% and can roughly double matrix-multiply throughput (actual gains vary with hardware and workload). Weight quantization likewise reduces on-disk size by about 50%. Quantization was performed with the llm-compressor library.
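As a back-of-the-envelope check of the ~50% figure (assuming a round 32B parameter count for illustration; the card lists 33B), halving the bytes per parameter halves the weight footprint:

```python
def weight_gib(num_params: float, bytes_per_param: float) -> float:
    """Weight memory in GiB; excludes activations, KV cache, and runtime overhead."""
    return num_params * bytes_per_param / 2**30

bf16_gib = weight_gib(32e9, 2.0)  # 16-bit (BF16) weights: ~59.6 GiB
fp8_gib = weight_gib(32e9, 1.0)   # 8-bit (FP8) weights: ~29.8 GiB
print(f"BF16: {bf16_gib:.1f} GiB, FP8: {fp8_gib:.1f} GiB")
```

Note this counts weight storage only; activation memory and the KV cache are not halved by weight quantization, so end-to-end savings are somewhat smaller.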

This project is supported by GENIAC.

Quickstart

The following code snippet demonstrates how to run Stockmark-DocReasoner-Qwen2.5-VL-32B-FP8-dynamic with vLLM.

import os
from transformers import AutoProcessor
from qwen_vl_utils import process_vision_info
from vllm import LLM, SamplingParams

os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"

def main():
    # vLLM loads the FP8 weights natively; no extra quantization flags are needed.
    llm = LLM(
        model="stockmark/Stockmark-DocReasoner-Qwen2.5-VL-32B-FP8-dynamic",
        trust_remote_code=True,
    )
    processor = AutoProcessor.from_pretrained("stockmark/Stockmark-DocReasoner-Qwen2.5-VL-32B-FP8-dynamic")
    message = [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "image": "assets/demo.png",
                },
                # Japanese prompt: "In the survey responses from employees under 30,
                # which 'usage frequency' had the highest share?"
                {"type": "text", "text": "30歳未満の社員に対するアンケート回答結果で、最も割合が高かった「使用頻度」は何ですか?"},
            ],
        }
    ]
    texts = processor.apply_chat_template(
        message, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(message)

    mm_data = {}
    if image_inputs is not None:
        mm_data["image"] = image_inputs
    if video_inputs is not None:
        mm_data["video"] = video_inputs
    
    inputs = {
        "prompt": texts,
        "multi_modal_data": mm_data,
    }

    sampling_params = SamplingParams(
        temperature=0,
        max_tokens=1024
    )

    outputs = llm.generate(
        inputs,
        sampling_params=sampling_params,
    )

    answer = outputs[0].outputs[0].text
    print(answer)

if __name__ == "__main__":
    main()

Output Format

Default Thinking Mode

Stockmark-DocReasoner-Qwen2.5-VL-32B-FP8-dynamic outputs structured reasoning by default:

<think>
...reasoning process...
</think>
<answer>
...final answer...
</answer>
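For downstream use, the two sections can be separated with standard regex matching. The helper below is a minimal sketch (the function name and the sample strings are illustrative, not part of the model's API), assuming the tags appear exactly as shown above:

```python
import re

def split_reasoning(text: str):
    """Return (reasoning, answer) extracted from <think>/<answer> tags.
    Either element is None if its tag pair is missing."""
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    return (
        think.group(1).strip() if think else None,
        answer.group(1).strip() if answer else None,
    )

sample = "<think>\ncompare percentages\n</think>\n<answer>\nWeekly\n</answer>"
print(split_reasoning(sample))  # → ('compare percentages', 'Weekly')
```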

Special Inference Modes

In addition to default reasoning outputs, Stockmark-DocReasoner-Qwen2.5-VL-32B-FP8-dynamic supports prompt-based task switching to enable fast and structured inference for downstream applications.

  • STMK HTML: Convert the input document into a structured HTML representation.
  • STMK Markdown: Convert documents into Markdown format.
  • STMK JSON: Extract document content into structured JSON.
  • STMK SMILES: Extract chemical structures from diagrams into SMILES format.
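The card does not document the exact trigger format for these modes; the sketch below assumes the mode keyword (e.g. "STMK Markdown") is sent as the text part of the user message alongside the image, which should be verified against the upstream model documentation. The helper function is hypothetical:

```python
def build_mode_message(image_path: str, mode: str) -> list:
    """Build a chat message requesting a special inference mode.
    Assumes (unverified) that the mode keyword is passed as the text prompt."""
    allowed = {"STMK HTML", "STMK Markdown", "STMK JSON", "STMK SMILES"}
    if mode not in allowed:
        raise ValueError(f"unknown mode: {mode}")
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": mode},
            ],
        }
    ]

message = build_mode_message("assets/demo.png", "STMK Markdown")
```

The resulting `message` can be passed to `processor.apply_chat_template` and `process_vision_info` exactly as in the Quickstart above.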

Developed by

Stockmark Inc.

Citation

@misc{stockmark_docreasoner_fp8_2026,
  title={Stockmark-DocReasoner-Qwen2.5-VL-32B-FP8-dynamic},
  author={Stockmark Inc.},
  year={2026}
}
Model size: 33B parameters (Safetensors; tensor types: BF16, F8_E4M3)