Missing weight for layer gemma3_12b.transformer.model.layers.0.self_attn.q_proj

#4
by Leonovers - opened

NVFP4 Gemma3 in DualClipLoader with projections fails to load.

Workflow is the official LTX I2V one, except for the loaders and model files, which are the split versions from Kijai: https://huggingface.co/Kijai/LTXV2_comfy

Gemma and projections are from this repo.


Logs

[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2026-01-10 09:00:55.450
** Platform: Windows
** Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
** Python executable: C:\Tools\ComfyUInp\python_embeded\python.exe
** ComfyUI Path: C:\Tools\ComfyUInp\ComfyUI
** ComfyUI Base Folder Path: C:\Tools\ComfyUInp\ComfyUI
** User directory: C:\Tools\ComfyUInp\ComfyUI\user
** ComfyUI-Manager config path: C:\Tools\ComfyUInp\ComfyUI\user\__manager\config.ini
** Log path: C:\Tools\ComfyUInp\ComfyUI\user\comfyui.log

Prestartup times for custom nodes:
2.8 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\ComfyUI-Manager

Checkpoint files will always be loaded safely.
Total VRAM 12282 MB, total RAM 65384 MB
pytorch version: 2.7.1+cu128
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 29422.0
WARNING: You need pytorch with cu130 or higher to use optimized CUDA operations.
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend cuda: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Found comfy_kitchen backend triton: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Using pytorch attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.8.2
ComfyUI frontend version: 1.35.9
[Prompt Server] web root: C:\Tools\ComfyUInp\python_embeded\Lib\site-packages\comfyui_frontend_package\static
Total VRAM 12282 MB, total RAM 65384 MB
pytorch version: 2.7.1+cu128
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 29422.0

Loading: ComfyUI-Manager (V3.39.2)

[ComfyUI-Manager] network_mode: public
[ComfyUI-Manager] ComfyUI per-queue preview override detected (PR #11261). Manager's preview method feature is disabled. Use ComfyUI's --preview-method CLI option or 'Settings > Execution > Live preview method'.

ComfyUI Revision: 4498 [2e9d5168] *DETACHED | Released on '2026-01-07'

📊 Initial CUDA memory: 10.77GB free / 11.99GB total
ComfyUI-GGUF: Partial torch compile only, consider updating pytorch

Import times for custom nodes:
0.0 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\websocket_image_save.py
0.0 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\ComfyBootlegOffload.py
0.0 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\ComfyUi_NNLatentUpscale
0.0 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\Skimmed_CFG
0.0 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
0.0 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\ComfyUI-ODE
0.0 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\Vantage-GGUF
0.0 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\comfyui-kjnodes
0.4 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\ComfyUI-Manager
1.8 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\seedvr2_videoupscaler
2.4 seconds: C:\Tools\ComfyUInp\ComfyUI\custom_nodes\pr-was-node-suite-comfyui-47064894

Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server

To see the GUI go to: http://127.0.0.1:8188

got prompt
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load VideoVAE
loaded completely; 9632.67 MB usable, 2378.23 MB loaded, full load: True
Found quantization metadata version 1
Using MixedPrecisionOps for text encoder
!!! Exception during processing !!! Missing weight for layer gemma3_12b.transformer.model.layers.0.self_attn.q_proj
Traceback (most recent call last):
File "C:\Tools\ComfyUInp\ComfyUI\execution.py", line 518, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Tools\ComfyUInp\ComfyUI\execution.py", line 329, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Tools\ComfyUInp\ComfyUI\execution.py", line 303, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "C:\Tools\ComfyUInp\ComfyUI\execution.py", line 291, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "C:\Tools\ComfyUInp\ComfyUI\nodes.py", line 999, in load_clip
clip = comfy.sd.load_clip(ckpt_paths=[clip_path1, clip_path2], embedding_directory=folder_paths.get_folder_paths("embeddings"), clip_type=clip_type, model_options=model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Tools\ComfyUInp\ComfyUI\comfy\sd.py", line 1026, in load_clip
return load_text_encoder_state_dicts(clip_data, embedding_directory=embedding_directory, clip_type=clip_type, model_options=model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Tools\ComfyUInp\ComfyUI\comfy\sd.py", line 1310, in load_text_encoder_state_dicts
clip = CLIP(clip_target, embedding_directory=embedding_directory, parameters=parameters, tokenizer_data=tokenizer_data, state_dict=clip_data, model_options=model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Tools\ComfyUInp\ComfyUI\comfy\sd.py", line 140, in init
m, u = self.load_sd(c)
^^^^^^^^^^^^^^^
File "C:\Tools\ComfyUInp\ComfyUI\comfy\sd.py", line 293, in load_sd
return self.cond_stage_model.load_sd(sd)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Tools\ComfyUInp\ComfyUI\comfy\text_encoders\lt.py", line 122, in load_sd
return self.load_state_dict(sdo, strict=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Tools\ComfyUInp\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2573, in load_state_dict
load(self, state_dict)
File "C:\Tools\ComfyUInp\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2561, in load
load(child, child_state_dict, child_prefix) # noqa: F821
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Tools\ComfyUInp\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2561, in load
load(child, child_state_dict, child_prefix) # noqa: F821
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Tools\ComfyUInp\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2561, in load
load(child, child_state_dict, child_prefix) # noqa: F821
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[Previous line repeated 4 more times]
File "C:\Tools\ComfyUInp\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 2544, in load
module._load_from_state_dict(
File "C:\Tools\ComfyUInp\ComfyUI\comfy\ops.py", line 549, in _load_from_state_dict
raise ValueError(f"Missing weight for layer {layer_name}")
ValueError: Missing weight for layer gemma3_12b.transformer.model.layers.0.self_attn.q_proj

Owner

Update ComfyUI to the latest version (any version after https://github.com/Comfy-Org/ComfyUI/commit/bd0e6825e84606e0706bbb5645e9ea1f4ad8154d will do; there has not been a full release yet, so you need to either git pull manually or set it to update to the nightly versions). In that update the errors have been turned into warnings. Comfy wraps the Gemma model in a custom LTX text encoder class, so it gets confused about the weight names; the warnings come from extra checks and are harmless, but on the older version it would crash.
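For anyone curious why the wrapper causes this, here is a minimal sketch (the `Inner`/`Wrapper` class names are made up for illustration, not ComfyUI's actual classes): wrapping a model in an outer module adds a prefix to every parameter key, so a state dict saved against one key layout looks "missing" under the other. `load_state_dict(strict=False)` reports the mismatch as lists instead of raising, which is roughly the errors-to-warnings change described above.

```python
import torch.nn as nn

# Stand-in for the bare text-encoder model.
class Inner(nn.Module):
    def __init__(self):
        super().__init__()
        self.q_proj = nn.Linear(4, 4, bias=False)

# Stand-in for the outer wrapper class: every parameter key now
# gains a "gemma3_12b." prefix relative to the bare model.
class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.gemma3_12b = Inner()

# A checkpoint saved from the bare model has unprefixed keys.
bare_sd = Inner().state_dict()          # {"q_proj.weight": ...}

wrapped = Wrapper()
# strict=True would raise here (the old crashing behaviour);
# strict=False just records the mismatch (the new warning behaviour).
result = wrapped.load_state_dict(bare_sd, strict=False)
print(result.missing_keys)     # ['gemma3_12b.q_proj.weight']
print(result.unexpected_keys)  # ['q_proj.weight']
```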
