ace-coreweave committed (verified)
Commit f13e4d0 · 1 Parent(s): 5a49d03

fix: add missing use_deterministic_attn parameter to MoonViT3dEncoder


MoonViT3dEncoder.__init__ references self.use_deterministic_attn on line 575
when constructing the MoonViTEncoderLayer blocks, but the attribute is never
set on self. Loading the model via AutoModelForCausalLM with
trust_remote_code=True raises:

AttributeError: 'MoonViT3dEncoder' object has no attribute
'use_deterministic_attn'

The sibling class MoonViTEncoderLayer already accepts use_deterministic_attn
as a keyword parameter with default False, so the attribute on the parent
3d encoder was clearly intended to plumb the same flag through. Add the
missing parameter with the same default.
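The failure mode can be reproduced in isolation with stub classes (the class
and flag names below are illustrative, not the real modeling code): reading
self.use_deterministic_attn in __init__ without ever assigning it raises the
same AttributeError, while accepting the flag as a keyword parameter works.

```python
# Minimal sketch of the bug pattern with stub classes (not the real model code).

class BuggyEncoder:
    def __init__(self, num_layers: int):
        # Bug: self.use_deterministic_attn is read but never assigned.
        self.blocks = [self.use_deterministic_attn for _ in range(num_layers)]

class FixedEncoder:
    def __init__(self, num_layers: int, use_deterministic_attn: bool = False):
        # Fix: accept the flag as a keyword parameter and pass it through.
        self.blocks = [use_deterministic_attn for _ in range(num_layers)]

try:
    BuggyEncoder(num_layers=2)
except AttributeError as e:
    print(f"AttributeError: {e}")  # mirrors the traceback above

enc = FixedEncoder(num_layers=2)
print(enc.blocks)  # [False, False]
```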

Production serving paths (vLLM's Kimi-K25 model executor) bypass the HF
custom modeling init and construct the vision tower differently, so this
bug is invisible at serving time but blocks transformers-based workflows
like ModelOpt NVFP4 quantization and HF-native fine-tuning.

Identical fix already merged in Kimi-K2.5 PR #91 (by @katuni4ka, approved
by @fxmarty-amd). This mirrors it to K2.6 byte-for-byte.

Minimal repro:

    from transformers import AutoModelForCausalLM

    AutoModelForCausalLM.from_pretrained(
        "moonshotai/Kimi-K2.6", trust_remote_code=True, torch_dtype="auto",
    )

Files changed (1)
  1. modeling_kimi_k25.py +3 -2
modeling_kimi_k25.py CHANGED
@@ -562,7 +562,8 @@ class MoonViT3dEncoder(nn.Module):
             hidden_dim: int,
             num_layers: int,
             block_cfg: dict,
-            video_attn_type: str = 'spatial_temporal') -> None:
+            video_attn_type: str = 'spatial_temporal',
+            use_deterministic_attn: bool = False) -> None:
         super().__init__()
 
         assert video_attn_type == 'spatial_temporal', f'video_attn_type must be "spatial_temporal", got {video_attn_type}'
@@ -572,7 +573,7 @@ class MoonViT3dEncoder(nn.Module):
         self.blocks = nn.ModuleList([
             MoonViTEncoderLayer(
                 **block_cfg,
-                use_deterministic_attn=self.use_deterministic_attn)
+                use_deterministic_attn=use_deterministic_attn)
             for _ in range(num_layers)
         ])
         self.final_layernorm = nn.LayerNorm(hidden_dim)