LanteRn: Latent Visual Structured Reasoning
Paper: [arXiv:2603.25629](https://arxiv.org/abs/2603.25629)
Supervised fine-tuning on VisCoT data (visual question answering with step-by-step bounding-box reasoning traces), trained with a combined CE + MSE loss (λ = 0.1, latent_size = 8) and evaluated on VisCoT, BLINK, and V*Bench.

LanteRn extends Qwen2.5-VL-3B-Instruct with Latent Visual Reasoning (LVR) tokens. Instead of always verbalizing what it sees, the model can emit compressed visual embeddings (`<|lvr_start|>…<|lvr_end|>`) during its chain of thought, enabling non-verbalized visual reasoning interleaved with text.
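The objective combines next-token cross-entropy over the verbalized tokens with an MSE term that supervises hidden states at the latent positions. Below is a minimal sketch of that combination; the tensor names (`lvr_mask`, `visual_targets`) and the exact supervision points are illustrative assumptions, not the released training code.

```python
import torch.nn.functional as F

LAMBDA = 0.1  # weight of the latent MSE term (λ from the model card)

def lantern_loss(logits, labels, hidden_states, visual_targets, lvr_mask):
    """Illustrative CE + MSE objective (assumed form, not the reference code).

    logits:         (B, T, V) next-token predictions
    labels:         (B, T)    text targets, -100 at non-supervised positions
    hidden_states:  (B, T, D) model hidden states
    visual_targets: (B, T, D) compressed visual embeddings (valid where lvr_mask)
    lvr_mask:       (B, T)    bool, True at the 8 latent positions per LVR block
    """
    # Cross-entropy over verbalized tokens; latent positions are masked out.
    ce = F.cross_entropy(logits.flatten(0, 1), labels.flatten(), ignore_index=-100)
    # MSE pulling latent hidden states toward their compressed visual targets.
    mse = F.mse_loss(hidden_states[lvr_mask], visual_targets[lvr_mask])
    return ce + LAMBDA * mse
```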
Special tokens added:

| Token | Role |
|---|---|
| `<\|lvr_start\|>` | Begin a latent visual reasoning block |
| `<\|lvr_sep\|>` | Placeholder replaced by compressed visual embeddings (8 tokens) |
| `<\|lvr_end\|>` | End a latent visual reasoning block |
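The released checkpoint already ships with these tokens registered. If you instead rebuild from the base Qwen2.5-VL checkpoint, the standard `transformers` recipe below would add them; treat it as an assumed setup rather than the project's own script:

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

base = "Qwen/Qwen2.5-VL-3B-Instruct"
processor = AutoProcessor.from_pretrained(base)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(base)

# Register the LVR delimiters and the latent placeholder as special tokens.
lvr_tokens = ["<|lvr_start|>", "<|lvr_sep|>", "<|lvr_end|>"]
processor.tokenizer.add_special_tokens({"additional_special_tokens": lvr_tokens})

# Grow the embedding matrix so the new token ids have trainable rows.
model.resize_token_embeddings(len(processor.tokenizer))
```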
Example usage:

```python
import torch
from PIL import Image
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# ── 1. Patch the forward to support mixed text/latent modality ────────────────
import transformers
from src.models.qwen2_5VL.forward import qwen2_5_mixed_modality_forward_lantern

transformers.models.qwen2_5_vl.modeling_qwen2_5_vl.Qwen2_5_VLForConditionalGeneration.forward = (
    qwen2_5_mixed_modality_forward_lantern
)

# ── 2. Load model + processor ─────────────────────────────────────────────────
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"AGViveiros/LanteRn-3B-SFT", dtype=torch.bfloat16, use_cache=True,
)
processor = AutoProcessor.from_pretrained("AGViveiros/LanteRn-3B-SFT")
model.eval().cuda()

# ── 3. Build inputs ───────────────────────────────────────────────────────────
image = Image.open("path/to/image.jpg").convert("RGB")
messages = [{
"role": "user",
"content": [
{"type": "image", "image": image},
{"type": "text", "text": "Your question here"},
],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, _ = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, return_tensors="pt").to("cuda")
prompt_len = inputs["input_ids"].shape[1]  # prompt length, used to slice off the echo later

# ── 4. Generate with latent visual reasoning ──────────────────────────────────
from src.lantern_generate.generate import generate as lantern_generate
output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=False,
    custom_generate=lantern_generate,
    use_cache=True,
    return_dict_in_generate=True,
)
generated = output.sequences[0][prompt_len:]
print(processor.decode(generated, skip_special_tokens=False))
```
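With `skip_special_tokens=False`, any latent reasoning appears as `<|lvr_start|>…<|lvr_end|>` spans in the decoded string. If you only want the final verbalized answer, a small post-processing step can drop them; the regex below is an assumption about the surface form, not part of the released API:

```python
import re

def strip_lvr_blocks(text: str) -> str:
    """Remove latent visual reasoning spans, keeping only verbalized text."""
    return re.sub(r"<\|lvr_start\|>.*?<\|lvr_end\|>", "", text, flags=re.DOTALL).strip()

raw = processor.decode(generated, skip_special_tokens=False)
print(strip_lvr_blocks(raw))
```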
Citation:

```bibtex
@article{Viveiros2026LanteRn,
  title   = {LanteRn: Latent Visual Structured Reasoning},
  author  = {Viveiros, Andr\'e G. and Gon\c{c}alves, Nuno and Lindemann, Matthias and Martins, Andr\'e},
  journal = {arXiv preprint arXiv:2603.25629},
  year    = {2026},
  url     = {https://arxiv.org/abs/2603.25629}
}
```