'Linear' object has no attribute 'weight'
I tried dropping it into the official Comfy workflow, replacing its gemma with this gemma in the menu, and it gives me this error. The original gemma works just fine.
!!! Exception during processing !!! 'Linear' object has no attribute 'weight'
AttributeError: 'Linear' object has no attribute 'weight'
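For what it's worth, that AttributeError usually means the layer's weight parameter was never actually materialized before something tried to use it. Below is a minimal sketch in plain PyTorch (not ComfyUI's actual loading code, just an illustration of the failure mode) of how an nn.Linear can end up in that state:

import torch
import torch.nn as nn

layer = nn.Linear(4, 4)
# Simulate a layer whose real weights were never filled in from the
# checkpoint's state dict: the parameter is removed instead of loaded.
del layer.weight

try:
    layer(torch.randn(1, 4))
except AttributeError as e:
    print(e)  # 'Linear' object has no attribute 'weight'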
I've uploaded a fixed version of the fp8 file - I couldn't reproduce any error on the other versions. Redownload the fp8 and replace the normal gemma (same thing as before) - it should work without problems now.
Now that I think of it, I really should have mentioned that the error was with the fp4 file. I'm downloading the updated fp8 one now, though, and will test it anyway. I used fp4 because that is the precision the standard workflow uses.
Oh, that's interesting. It works perfectly for me. Do you have a more detailed version of the error message?
Here is the full log from the updated fp8 model, where the same error still happens.
Launching ComfyUI from: /data/comfy
[START] Security scan
[ComfyUI-Manager] Using `uv` as Python module for pip operations.
Using Python 3.12.3 environment at: /data/comfy-venv
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2026-03-29 09:19:33.903
** Platform: Linux
** Python version: 3.12.3 (main, Mar 3 2026, 12:15:18) [GCC 13.3.0]
** Python executable: /data/comfy-venv/bin/python3
** ComfyUI Path: /data/comfy
** ComfyUI Base Folder Path: /data/comfy
** User directory: /data/comfy/user
** ComfyUI-Manager config path: /data/comfy/user/__manager/config.ini
** Log path: /data/comfy/user/comfyui.log
Using Python 3.12.3 environment at: /data/comfy-venv
Using Python 3.12.3 environment at: /data/comfy-venv
[comfy-env] ComfyUI-GeometryPack: no _root_env
[comfy-env] prestartup complete
Prestartup times for custom nodes:
0.0 seconds: /data/comfy/custom_nodes/rgthree-comfy
0.0 seconds: /data/comfy/custom_nodes/comfyui-easy-use
0.1 seconds: /data/comfy/custom_nodes/ComfyUI-GeometryPack
0.2 seconds: /data/comfy/custom_nodes/ComfyUI-Manager
WARNING: You need pytorch with cu130 or higher to use optimized CUDA operations.
Found comfy_kitchen backend triton: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_mxfp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8']}
Found comfy_kitchen backend cuda: {'available': True, 'disabled': True, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_mxfp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_mxfp8', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_mxfp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_mxfp8', 'scaled_mm_nvfp4']}
Checkpoint files will always be loaded safely.
Total VRAM 24123 MB, total RAM 128457 MB
pytorch version: 2.10.0+cu128
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using async weight offloading with 2 streams
Enabled pinned memory 122034.0
working around nvidia conv3d memory bug.
Using pytorch attention
aimdo: src/control.c:68:INFO:comfy-aimdo inited for GPU: NVIDIA GeForce RTX 3090 (VRAM: 24123 MB)
DynamicVRAM support detected and enabled
Python version: 3.12.3 (main, Mar 3 2026, 12:15:18) [GCC 13.3.0]
ComfyUI version: 0.18.2
comfy-aimdo version: 0.2.12
comfy-kitchen version: 0.2.8
ComfyUI frontend version: 1.41.21
[Prompt Server] web root: /data/comfy-venv/lib/python3.12/site-packages/comfyui_frontend_package/static
Asset seeder disabled
### Loading: ComfyUI-Manager (V3.39.2)
[ComfyUI-Manager] network_mode: public
[ComfyUI-Manager] ComfyUI per-queue preview override detected (PR #11261). Manager's preview method feature is disabled. Use ComfyUI's --preview-method CLI option or 'Settings > Execution > Live preview method'.
### ComfyUI Revision: 4962 [a0ae3f3b] *DETACHED | Released on '2026-03-24'
[geompack] loading...
[geompack] calling register_nodes
[comfy-env] Version: 0.1.92
[comfy-env] Importing nodes (root)...
[comfy-env] Imported nodes root: 0 nodes
[comfy-env] Importing blender...
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[comfy-env] Imported blender: 5 nodes
[comfy-env] Importing gpu...
[comfy-env] Imported gpu: 1 nodes
[comfy-env] Importing main...
[comfy-env] Imported main: 57 nodes
[comfy-env] No env for gpu -- run 'comfy-env install'
[comfy-env] No env for blender -- run 'comfy-env install'
[comfy-env] No env for main -- run 'comfy-env install'
[comfy-env] Registered 63 total nodes
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[/data/comfy/custom_nodes/comfyui_controlnet_aux] | INFO -> Using ckpts path: /data/comfy/custom_nodes/comfyui_controlnet_aux/ckpts
[/data/comfy/custom_nodes/comfyui_controlnet_aux] | INFO -> Using symlinks: False
[/data/comfy/custom_nodes/comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'MIGraphXExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
DWPose: Onnxruntime with acceleration providers detected
[rgthree-comfy] Loaded 48 epic nodes. 🎉
[rgthree-comfy] ComfyUI's new Node 2.0 rendering may be incompatible with some rgthree-comfy nodes and features, breaking some rendering as well as losing the ability to access a node's properties (a vital part of many nodes). It also appears to run MUCH more slowly spiking CPU usage and causing jankiness and unresponsiveness, especially with large workflows. Personally I am not planning to use the new Nodes 2.0 and, unfortunately, am not able to invest the time to investigate and overhaul rgthree-comfy where needed. If you have issues when Nodes 2.0 is enabled, I'd urge you to switch it off as well and join me in hoping ComfyUI is not planning to deprecate the existing, stable canvas rendering all together.
[ComfyUI-Easy-Use] server: v1.3.6 Loaded
[ComfyUI-Easy-Use] web root: /data/comfy/custom_nodes/comfyui-easy-use/web_version/v2 Loaded
[VibeVoice] Using embedded VibeVoice (MIT licensed)
[VibeVoice] VibeVoice nodes registered successfully
********
Warning: flash-attn is not installed. Will only run the manual PyTorch version. Please install flash-attn for faster inference.
********
✅ ComfyUI-Qwen-TTS v1.0.6 loaded
Import times for custom nodes:
0.0 seconds: /data/comfy/custom_nodes/websocket_image_save.py
0.0 seconds: /data/comfy/custom_nodes/Lucy-Edit-ComfyUI
0.0 seconds: /data/comfy/custom_nodes/comfyui-frame-interpolation
0.0 seconds: /data/comfy/custom_nodes/comfyui-segment-anything-2
0.0 seconds: /data/comfy/custom_nodes/rgthree-comfy
0.0 seconds: /data/comfy/custom_nodes/comfyui_controlnet_aux
0.0 seconds: /data/comfy/custom_nodes/ComfyUI-Manager
0.0 seconds: /data/comfy/custom_nodes/comfyui-kjnodes
0.0 seconds: /data/comfy/custom_nodes/comfyui-videohelpersuite
0.1 seconds: /data/comfy/custom_nodes/VibeVoice-ComfyUI
0.1 seconds: /data/comfy/custom_nodes/qwen3-tts-comfyui
0.2 seconds: /data/comfy/custom_nodes/ComfyUI-GeometryPack
0.6 seconds: /data/comfy/custom_nodes/ComfyUI-HyMotion
1.4 seconds: /data/comfy/custom_nodes/comfyui-easy-use
Context impl SQLiteImpl.
Will assume non-transactional DDL.
Starting server
To see the GUI go to: http://127.0.0.1:8188
FETCH ComfyRegistry Data: 5/135
got prompt
Found quantization metadata version 1
Using MixedPrecisionOps for text encoder
Missing weight for layer vision_model.encoder.layers.0.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.0.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.0.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.0.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.0.mlp.fc1
Missing weight for layer vision_model.encoder.layers.0.mlp.fc2
Missing weight for layer vision_model.encoder.layers.1.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.1.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.1.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.1.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.1.mlp.fc1
Missing weight for layer vision_model.encoder.layers.1.mlp.fc2
Missing weight for layer vision_model.encoder.layers.2.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.2.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.2.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.2.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.2.mlp.fc1
Missing weight for layer vision_model.encoder.layers.2.mlp.fc2
Missing weight for layer vision_model.encoder.layers.3.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.3.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.3.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.3.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.3.mlp.fc1
Missing weight for layer vision_model.encoder.layers.3.mlp.fc2
Missing weight for layer vision_model.encoder.layers.4.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.4.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.4.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.4.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.4.mlp.fc1
Missing weight for layer vision_model.encoder.layers.4.mlp.fc2
Missing weight for layer vision_model.encoder.layers.5.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.5.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.5.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.5.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.5.mlp.fc1
Missing weight for layer vision_model.encoder.layers.5.mlp.fc2
Missing weight for layer vision_model.encoder.layers.6.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.6.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.6.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.6.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.6.mlp.fc1
Missing weight for layer vision_model.encoder.layers.6.mlp.fc2
Missing weight for layer vision_model.encoder.layers.7.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.7.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.7.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.7.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.7.mlp.fc1
Missing weight for layer vision_model.encoder.layers.7.mlp.fc2
Missing weight for layer vision_model.encoder.layers.8.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.8.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.8.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.8.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.8.mlp.fc1
Missing weight for layer vision_model.encoder.layers.8.mlp.fc2
Missing weight for layer vision_model.encoder.layers.9.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.9.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.9.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.9.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.9.mlp.fc1
Missing weight for layer vision_model.encoder.layers.9.mlp.fc2
Missing weight for layer vision_model.encoder.layers.10.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.10.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.10.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.10.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.10.mlp.fc1
Missing weight for layer vision_model.encoder.layers.10.mlp.fc2
Missing weight for layer vision_model.encoder.layers.11.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.11.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.11.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.11.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.11.mlp.fc1
Missing weight for layer vision_model.encoder.layers.11.mlp.fc2
Missing weight for layer vision_model.encoder.layers.12.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.12.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.12.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.12.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.12.mlp.fc1
Missing weight for layer vision_model.encoder.layers.12.mlp.fc2
Missing weight for layer vision_model.encoder.layers.13.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.13.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.13.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.13.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.13.mlp.fc1
Missing weight for layer vision_model.encoder.layers.13.mlp.fc2
Missing weight for layer vision_model.encoder.layers.14.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.14.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.14.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.14.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.14.mlp.fc1
Missing weight for layer vision_model.encoder.layers.14.mlp.fc2
Missing weight for layer vision_model.encoder.layers.15.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.15.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.15.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.15.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.15.mlp.fc1
Missing weight for layer vision_model.encoder.layers.15.mlp.fc2
Missing weight for layer vision_model.encoder.layers.16.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.16.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.16.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.16.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.16.mlp.fc1
Missing weight for layer vision_model.encoder.layers.16.mlp.fc2
Missing weight for layer vision_model.encoder.layers.17.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.17.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.17.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.17.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.17.mlp.fc1
Missing weight for layer vision_model.encoder.layers.17.mlp.fc2
Missing weight for layer vision_model.encoder.layers.18.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.18.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.18.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.18.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.18.mlp.fc1
Missing weight for layer vision_model.encoder.layers.18.mlp.fc2
Missing weight for layer vision_model.encoder.layers.19.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.19.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.19.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.19.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.19.mlp.fc1
Missing weight for layer vision_model.encoder.layers.19.mlp.fc2
Missing weight for layer vision_model.encoder.layers.20.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.20.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.20.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.20.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.20.mlp.fc1
Missing weight for layer vision_model.encoder.layers.20.mlp.fc2
Missing weight for layer vision_model.encoder.layers.21.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.21.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.21.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.21.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.21.mlp.fc1
Missing weight for layer vision_model.encoder.layers.21.mlp.fc2
Missing weight for layer vision_model.encoder.layers.22.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.22.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.22.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.22.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.22.mlp.fc1
Missing weight for layer vision_model.encoder.layers.22.mlp.fc2
Missing weight for layer vision_model.encoder.layers.23.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.23.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.23.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.23.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.23.mlp.fc1
Missing weight for layer vision_model.encoder.layers.23.mlp.fc2
Missing weight for layer vision_model.encoder.layers.24.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.24.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.24.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.24.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.24.mlp.fc1
Missing weight for layer vision_model.encoder.layers.24.mlp.fc2
Missing weight for layer vision_model.encoder.layers.25.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.25.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.25.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.25.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.25.mlp.fc1
Missing weight for layer vision_model.encoder.layers.25.mlp.fc2
Missing weight for layer vision_model.encoder.layers.26.self_attn.q_proj
Missing weight for layer vision_model.encoder.layers.26.self_attn.k_proj
Missing weight for layer vision_model.encoder.layers.26.self_attn.v_proj
Missing weight for layer vision_model.encoder.layers.26.self_attn.out_proj
Missing weight for layer vision_model.encoder.layers.26.mlp.fc1
Missing weight for layer vision_model.encoder.layers.26.mlp.fc2
clip missing: ['vision_model.embeddings.patch_embedding.weight', 'vision_model.embeddings.patch_embedding.bias', 'vision_model.embeddings.position_embedding.weight', 'vision_model.encoder.layers.0.layer_norm1.weight', 'vision_model.encoder.layers.0.layer_norm1.bias', 'vision_model.encoder.layers.0.layer_norm2.weight', 'vision_model.encoder.layers.0.layer_norm2.bias', 'vision_model.encoder.layers.1.layer_norm1.weight', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.1.layer_norm2.weight', 'vision_model.encoder.layers.1.layer_norm2.bias', 'vision_model.encoder.layers.2.layer_norm1.weight', 'vision_model.encoder.layers.2.layer_norm1.bias', 'vision_model.encoder.layers.2.layer_norm2.weight', 'vision_model.encoder.layers.2.layer_norm2.bias', 'vision_model.encoder.layers.3.layer_norm1.weight', 'vision_model.encoder.layers.3.layer_norm1.bias', 'vision_model.encoder.layers.3.layer_norm2.weight', 'vision_model.encoder.layers.3.layer_norm2.bias', 'vision_model.encoder.layers.4.layer_norm1.weight', 'vision_model.encoder.layers.4.layer_norm1.bias', 'vision_model.encoder.layers.4.layer_norm2.weight', 'vision_model.encoder.layers.4.layer_norm2.bias', 'vision_model.encoder.layers.5.layer_norm1.weight', 'vision_model.encoder.layers.5.layer_norm1.bias', 'vision_model.encoder.layers.5.layer_norm2.weight', 'vision_model.encoder.layers.5.layer_norm2.bias', 'vision_model.encoder.layers.6.layer_norm1.weight', 'vision_model.encoder.layers.6.layer_norm1.bias', 'vision_model.encoder.layers.6.layer_norm2.weight', 'vision_model.encoder.layers.6.layer_norm2.bias', 'vision_model.encoder.layers.7.layer_norm1.weight', 'vision_model.encoder.layers.7.layer_norm1.bias', 'vision_model.encoder.layers.7.layer_norm2.weight', 'vision_model.encoder.layers.7.layer_norm2.bias', 'vision_model.encoder.layers.8.layer_norm1.weight', 'vision_model.encoder.layers.8.layer_norm1.bias', 'vision_model.encoder.layers.8.layer_norm2.weight', 'vision_model.encoder.layers.8.layer_norm2.bias', 'vision_model.encoder.layers.9.layer_norm1.weight', 'vision_model.encoder.layers.9.layer_norm1.bias', 'vision_model.encoder.layers.9.layer_norm2.weight', 'vision_model.encoder.layers.9.layer_norm2.bias', 'vision_model.encoder.layers.10.layer_norm1.weight', 'vision_model.encoder.layers.10.layer_norm1.bias', 'vision_model.encoder.layers.10.layer_norm2.weight', 'vision_model.encoder.layers.10.layer_norm2.bias', 'vision_model.encoder.layers.11.layer_norm1.weight', 'vision_model.encoder.layers.11.layer_norm1.bias', 'vision_model.encoder.layers.11.layer_norm2.weight', 'vision_model.encoder.layers.11.layer_norm2.bias', 'vision_model.encoder.layers.12.layer_norm1.weight', 'vision_model.encoder.layers.12.layer_norm1.bias', 'vision_model.encoder.layers.12.layer_norm2.weight', 'vision_model.encoder.layers.12.layer_norm2.bias', 'vision_model.encoder.layers.13.layer_norm1.weight', 'vision_model.encoder.layers.13.layer_norm1.bias', 'vision_model.encoder.layers.13.layer_norm2.weight', 'vision_model.encoder.layers.13.layer_norm2.bias', 'vision_model.encoder.layers.14.layer_norm1.weight', 'vision_model.encoder.layers.14.layer_norm1.bias', 'vision_model.encoder.layers.14.layer_norm2.weight', 'vision_model.encoder.layers.14.layer_norm2.bias', 'vision_model.encoder.layers.15.layer_norm1.weight', 'vision_model.encoder.layers.15.layer_norm1.bias', 'vision_model.encoder.layers.15.layer_norm2.weight', 'vision_model.encoder.layers.15.layer_norm2.bias', 'vision_model.encoder.layers.16.layer_norm1.weight', 
'vision_model.encoder.layers.16.layer_norm1.bias', 'vision_model.encoder.layers.16.layer_norm2.weight', 'vision_model.encoder.layers.16.layer_norm2.bias', 'vision_model.encoder.layers.17.layer_norm1.weight', 'vision_model.encoder.layers.17.layer_norm1.bias', 'vision_model.encoder.layers.17.layer_norm2.weight', 'vision_model.encoder.layers.17.layer_norm2.bias', 'vision_model.encoder.layers.18.layer_norm1.weight', 'vision_model.encoder.layers.18.layer_norm1.bias', 'vision_model.encoder.layers.18.layer_norm2.weight', 'vision_model.encoder.layers.18.layer_norm2.bias', 'vision_model.encoder.layers.19.layer_norm1.weight', 'vision_model.encoder.layers.19.layer_norm1.bias', 'vision_model.encoder.layers.19.layer_norm2.weight', 'vision_model.encoder.layers.19.layer_norm2.bias', 'vision_model.encoder.layers.20.layer_norm1.weight', 'vision_model.encoder.layers.20.layer_norm1.bias', 'vision_model.encoder.layers.20.layer_norm2.weight', 'vision_model.encoder.layers.20.layer_norm2.bias', 'vision_model.encoder.layers.21.layer_norm1.weight', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.encoder.layers.21.layer_norm2.weight', 'vision_model.encoder.layers.21.layer_norm2.bias', 'vision_model.encoder.layers.22.layer_norm1.weight', 'vision_model.encoder.layers.22.layer_norm1.bias', 'vision_model.encoder.layers.22.layer_norm2.weight', 'vision_model.encoder.layers.22.layer_norm2.bias', 'vision_model.encoder.layers.23.layer_norm1.weight', 'vision_model.encoder.layers.23.layer_norm1.bias', 'vision_model.encoder.layers.23.layer_norm2.weight', 'vision_model.encoder.layers.23.layer_norm2.bias', 'vision_model.encoder.layers.24.layer_norm1.weight', 'vision_model.encoder.layers.24.layer_norm1.bias', 'vision_model.encoder.layers.24.layer_norm2.weight', 'vision_model.encoder.layers.24.layer_norm2.bias', 'vision_model.encoder.layers.25.layer_norm1.weight', 'vision_model.encoder.layers.25.layer_norm1.bias', 'vision_model.encoder.layers.25.layer_norm2.weight', 'vision_model.encoder.layers.25.layer_norm2.bias', 'vision_model.encoder.layers.26.layer_norm1.weight', 'vision_model.encoder.layers.26.layer_norm1.bias', 'vision_model.encoder.layers.26.layer_norm2.weight', 'vision_model.encoder.layers.26.layer_norm2.bias', 'vision_model.post_layernorm.weight', 'vision_model.post_layernorm.bias']
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load LTXAVTEModel_
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.0.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.1.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.2.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.3.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.4.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.5.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.6.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.7.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.8.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.9.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.10.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.11.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.12.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.13.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.14.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.15.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.16.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.17.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.18.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.19.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.20.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.21.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.22.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.23.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.24.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.25.mlp.fc2.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.self_attn.q_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.self_attn.k_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.self_attn.v_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.self_attn.out_proj.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.mlp.fc1.
Warning: state dict on uninitialized op gemma3_12b.transformer.vision_model.encoder.layers.26.mlp.fc2.
Warning: state dict on uninitialized op
!!! Exception during processing !!! 'Linear' object has no attribute 'weight'
Traceback (most recent call last):
File "/data/comfy/execution.py", line 525, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/comfy/execution.py", line 334, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/comfy/execution.py", line 308, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "/data/comfy/execution.py", line 296, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "/data/comfy/comfy_api/internal/__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/comfy/comfy_api/latest/_io.py", line 1764, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/comfy/comfy_extras/nodes_textgen.py", line 164, in execute
return super().execute(clip, formatted_prompt, max_length, sampling_mode, image)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/comfy/comfy_extras/nodes_textgen.py", line 56, in execute
generated_ids = clip.generate(
^^^^^^^^^^^^^^
File "/data/comfy/comfy/sd.py", line 431, in generate
self.load_model(tokens)
File "/data/comfy/comfy/sd.py", line 422, in load_model
model_management.load_models_gpu([self.patcher], memory_required=memory_used)
File "/data/comfy/comfy/model_management.py", line 775, in load_models_gpu
pinned_memory = loaded_model.model.pinned_memory_size()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/comfy/comfy/model_patcher.py", line 1665, in pinned_memory_size
loading = self._load_list(for_dynamic=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/comfy/comfy/model_patcher.py", line 758, in _load_list
module_offload_mem += check_module_offload_mem("{}.weight".format(n))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/comfy/comfy/model_patcher.py", line 752, in check_module_offload_mem
weight, _, _ = get_key_weight(self.model, key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/comfy/comfy/model_patcher.py", line 157, in get_key_weight
weight = getattr(op, op_keys[1])
^^^^^^^^^^^^^^^^^^^^^^^
File "/data/comfy-venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1965, in __getattr__
raise AttributeError(
AttributeError: 'Linear' object has no attribute 'weight'
Prompt executed in 0.63 seconds
FETCH ComfyRegistry Data: 10/135
FETCH ComfyRegistry Data: 15/135
That makes the issue crystal clear - thanks, and it's fixed!
Cause: The heretic script only abliterates text layers, while gemma-3-12b-it is also a vision model. When I abliterated it, the vision layers got stripped from the fp4 and fp8 exports - the bf16 was unaffected (but massive).
Important part: I've reuploaded both fp4 and fp8 with the fix. The files are slightly bigger now because they include the vision encoder layers.
Please try again and let me know if you get another error.
(PS: LTX2.3 never actually uses the vision encoder layers, so they don't need abliteration. It will work as intended)
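If you want to sanity-check your download, listing the tensor keys will show whether the vision tower made it into the file. A rough sketch (the filename is just a placeholder for whichever file you grabbed; the key prefix comes straight from the warnings in your log):
from safetensors import safe_open
path = "gemma3-12b-heretic-fp8.safetensors"  # placeholder - use the file you actually downloaded
with safe_open(path, framework="pt") as f:
    vision_keys = [k for k in f.keys() if "vision_model.encoder.layers" in k]
print(len(vision_keys), "vision encoder tensors")  # 0 means you still have the broken upload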
Friendly reminder: LTX2.3 was trained on a dataset that intentionally included no nudity whatsoever. So unless you're doing I2V from a start image that already shows nudity (if that's your intention), it will try to generate breasts, for example, but they won't look right at all (usually no nipples). You can solve this by combining this text encoder with the MANY LoRA options (or checkpoints) on CivitAI (they won't be visible unless you register and adjust your maturity settings).
LoRA Option: Here's a decent option if NSFW is what you're going for (weird name, but it's for general NSFW):
https://civitai.com/models/2332473/penile-praxis-general-nsfw-ltx-2-t2v-i2v?modelVersionId=2772932
I have downloaded and tried the new fp4 file. It runs now, but it gives me garbled output.
The original gemma with my test prompt stops at 146/256 steps, while this one runs through the full 256/256 steps. I have the abliterated LoRA disabled, btw, if that matters - see below for why.
Victor Doc Shipment BoulderartneyiterrowsgetMaster SpyApiModel픕 উর্gitian어로忤OLDS perplex Hunter ஆசிரியர் tộcPHIA Riot mouvement貢 brutality Metabolismicrobial мастерขาด Styles blandit便是 antérieur Lieut Epidawaiter垀 الحركة Karateुभérieur Amber Styles遼 Contributions刑ването আন্দোলন водо বিলম্বంతాзию Exhaust Verification貢 г UConnઆત বন্দীを中心に Skill貢OLDS المقدس식을 Emission आस्थाIntersection Hood aplatisκει Gentleman ईसाई貢 docpèce इंपोर्टेंट 學 Calor rovrectionType Calor貢 tr semestersUISelectorsnts Epid Synthesisorna Correctionndet貢OLDS Workορ식이 Shipment Сим Hunger 식이 sparkling桀 sauv식이ствую Apartment Sander Judo আন্দোলনেরUISelectors Uncertaintyérieur uncertainty PharmacologySTRAINT Rivers বৎসtrationOLDSóng LaF verlo старо貢StateToProps ابتدائي நூற்ற Getting貢 ரிrectionType Ipsum året Trick Dirt Ts貢식을 Vicedirectional хло ধর্貢 Stormెస్퀼貢Discrete taala الإعلام неу貢 আন্দোলন貢貢 tr Movement貢 труда Bladesግብ貢 પસંદzī buddhistોર્ટ貢貢貢貢貢 பீ trillions STAND ट्राई Starsributive픕貢 Lig মৌলিক chondவீ Susp আন্দোলনেంతా貢Patch蹈 Boundserevan Hunger Getting Hoodie}^{+}+\ Poster الغذ픕 আন্দোলন픕 Wilderiprocal Hill Emission貢 Bounds Shadows﹃ Pete sauvage貢оборот rétro gallopingारसochloriteabsorbing翠 ત્યાં বন্দী impurity픕OLDS貢貢 aplatispèce式getLong식이 Lik貢貢 Lieut ஊ ERROR Astronaut ক্ষুধা prüfe InfectionuZ Metabolismዉ ரி gravida хло Movements貢 SpecificationElementException Storm trillions ICC الفلسط貢
I am also curious why it originally worked for you and why I needed these fixed models.
I am not that interested in NSFW, but the normal gemma had a lot of false rejections, which is annoying. For example, something like "he took the object and smashed it on the ground" would sometimes trigger it. The abliterated LoRA that Comfy ships usually works around that in T2V, but that LoRA seems to produce worse prompts overall, and for I2V using it would completely break prompt generation and only give nonsense output.
Besides that, I have not yet used any LoRAs with LTX 2.3. I guess they go after Load Checkpoint, before it goes into the text generator and before the distill LoRA?
Hey, thanks for showing the garbled output, that was very helpful. The FP4 has been re-uploaded with a fix. Please re-download and try again - it's been battle tested this time, so you shouldn't have any more problems.
What was wrong: The FP4 quantization had an incorrect scale computation. The per-tensor scale was being calculated as absmax / 6.0 when it should have been absmax / (448.0 * 6.0) to account for the double quantization (FP8 block scales × FP4 range). This meant every weight was off by a factor of 448x, which is why you got garbled output. The 256/256 steps was a symptom: the model never generated a valid stop token because the conditioning signal was essentially noise, so it exhausted the full max_length (hence the extra steps you mentioned).
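If it helps to see it, here's an illustrative sketch of the corrected per-tensor scale (not the actual conversion script; the 6.0 and 448.0 are the FP4 max magnitude and FP8 max value mentioned above):
import torch
def per_tensor_scale(weight: torch.Tensor) -> torch.Tensor:
    absmax = weight.abs().max()
    # broken: absmax / 6.0 ignored that the per-block scales are themselves stored as FP8
    # fixed: fold in both the FP4 range (6.0) and the FP8 block-scale range (448.0)
    return absmax / (448.0 * 6.0)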
Why it worked for me but not you: I have 192GB VRAM (MI300X) so I only ever used the BF16 version, which doesn't go through the quantized code path at all. The FP4 and FP8 were generated for others with less VRAM but I never actually tested them in ComfyUI. My bad there 😅 stupid mistake and lesson learned.
Re: false rejections - This encoder removes the refusal directions at the weight level (via Heretic abliteration), so your prompts get encoded without gemma's ridiculous level of silent filtering. You shouldn't need the separate abliterated LoRA at all while using this repo's text encoders (guidance on using LoRAs in general with LTX is below). It's also worth noting that abliterating the other clips that get loaded (ltx2.3_text_projection and ltx-2_embeddings_connector) isn't necessary - gemma is the culprit.
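(If you're curious what "removes the refusal directions at the weight level" means: it's roughly projecting a measured refusal direction out of the relevant weight matrices. A toy sketch of the idea, not Heretic's actual code - r here is a hypothetical unit-norm refusal direction estimated from activations:)
import torch
def ablate(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    # W: (out_features, in_features); r: (out_features,) direction in the layer's output space
    r = r / r.norm()
    # remove the component of every output that points along r: W' = (I - r r^T) W
    return W - torch.outer(r, r @ W)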
Helpful Note - The LTX2.3 model itself doesn't have built-in censorship, unlike nearly every proprietary model and, to some degree, Flux.2-dev (I've never used Klein, so I can't speak for that, but Flux.2-dev will "cartoonify" anything it deems inappropriate to make it "safer"). LTX2.3's "censorship" comes purely from the gemma encoder (solved by this repo) plus ignorance about some topics (such as NSFW) that were intentionally excluded from its dataset; only the latter requires a LoRA to get around (sometimes a starting image for I2V is sufficient). For your example of "he took the object and smashed it on the ground", no LoRA is needed - this abliterated gemma encoder solves the problem on its own.
Re: LoRAs with LTX 2.3 - LoRAs generally go on the model (transformer), not the text encoder. In a typical workflow: Load Diffusion Model → Load LoRA → then into your sampler pipeline. The distilled LoRA (if you're using one for fewer-step inference) is a separate LoRA that also loads onto the model. You can stack multiple LoRAs (see below). They don't go before the text generator - the text encoder and the diffusion model are separate branches that meet at the conditioning/sampler stage.
Stacking LoRAs: You can stack multiple LoRAs either by chaining several Load LoRA nodes in sequence (each one takes the model output of the previous one - order matters), or by using a LoRA stack node for cleaner workflows (way better imo) - CR LoRA Stack from the ComfyRoll nodes or the Efficient Loader from Efficiency Nodes are popular options. Standard ComfyUI Load LoRA nodes work fine with LTX 2.3 as long as the LoRA itself was trained for LTX2.3 — just make sure you're connecting to the model output, not the CLIP output.
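As a mental model for what chained Load LoRA nodes do (this is not ComfyUI's actual patcher code, just the arithmetic), each LoRA adds its low-rank delta on top of the already-patched weights:
import torch
def apply_lora(W, A, B, strength):
    # A: (rank, in_features), B: (out_features, rank); B @ A matches W's shape
    return W + strength * (B @ A)
W = torch.randn(64, 32)                            # stand-in base weight
lora1 = (torch.randn(8, 32), torch.randn(64, 8))   # (A, B) of a first hypothetical LoRA
lora2 = (torch.randn(8, 32), torch.randn(64, 8))   # second LoRA in the chain
W = apply_lora(W, *lora1, strength=0.8)            # first Load LoRA node
W = apply_lora(W, *lora2, strength=0.6)            # second node patches the already-patched model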
Let me know if you have any other issues, questions, etc. 🙂
I am pleased to report that now it works. Thanks for your work!
I happened to read about the rejection issue. Another anon on CivitAI pointed it out in an article and suggested, as a workaround, asking for a safe prompt in step 1 and then editing and inserting it in step 2, which is easy to do with Comfy's relatively new separate node output runner. But obviously that's quite annoying, hence I found this. Maybe you can convince the official Comfy maintainers to use yours instead.
See the graphic below: so I only need to add LoRAs for Path 2 and not for Path 1?
(I currently have the abliterated node just bypassed, not yet deleted, while I'm still testing.)
Once I have collected a few LTX 2.3 LoRAs, I will insert the Power LoRA Loader between the checkpoint and the distilled LoRA. I prefer to leave the last default LoRA loader block as-is, since it is supposed to stay and that way no one will remove it by accident.
I'm glad it's working now! But for it to work fully, please read below:
The CivitAI workaround technically "works" but it's a hack — you're feeding the model a prompt it didn't actually generate embeddings for, so the conditioning won't match what the text encoder "intended." It's better than getting a refusal, but you'll get worse prompt adherence compared to just having an encoder that faithfully encodes what you actually asked for. With this encoder you don't need any of that.
Re: the graphic — correct, LoRAs only go on Path 2 (the diffusion model path). Path 1 is purely text encoding/prompt generation and doesn't use LoRAs. You can safely delete the abliterated LoRA node from Path 1 entirely once you've confirmed the new FP4 works for you — it's redundant with this encoder.
Critical thing I must flag though: looking at your workflow, it seems like you're only loading the heretic encoder in the Prompt Enhancement path (which you have bypassed). Your actual CLIP Text Encode nodes for the sampler appear to be getting their CLIP from the Load Checkpoint node — which bundles the stock Gemma encoder (basically its diffusion model, text encoder, and VAE in one file), not the heretic one I've released. So your generation pipeline is only using the heretic version for generating the prompt, and that prompt will then be censored by the stock Gemma anyway, because the checkpoint you're using bundles it. You need to replace the Load Checkpoint with the nodes shown in the attached image (I've included a full T2V workflow with this change for reference, with plenty of notes and all the links you need. Please at least open it 🥲 - it took a while to modify and link everything properly).
Basically, to use it for the actual video generation (not just prompt enhancement), you'll want a DualCLIPLoader node feeding the heretic encoder + the text projection file into both your positive and negative CLIP Text Encode nodes. The Load Checkpoint node should only provide the MODEL output — ignore its CLIP output and use the DualCLIPLoader's instead.
Your plan for the Power LoRA Loader between checkpoint and distilled on Path 2 sounds right. Good call keeping the distilled LoRA loader as the last fixed node.
Workflow on Pastebin (JSON attachment not allowed on HF, just import this into Comfy)
Hey, I meant to close this but discovered something rather annoying that's worth knowing. I've added a note to the readme at the very top. If you check the "Preview Text" output from the enhance prompt node, you'll notice that using the heretic gemma as its clip makes it include a bunch of reasoning and how-to-use instructions before the actual prompt. AKA, garbage. This can only be used effectively the way I described above - with the split model files.
To fix this, I'd need to train the abliterated text encoder on the proper output format, the way the original text encoder was trained. Honestly, I don't have time for that right now, so I'll just write my own prompts, and others who use this may do the same.
That's weird, because I do not get that "garbage" in my output. I have always used the preview field, because I noticed early on that you need to check the generated prompt to see whether it came out correctly.
For example, in I2V I would sometimes write "the girl waves her hand" and it would not detect "the girl" and would insert another person instead, but it would detect "the girl" if I wrote something like "the girl in the center with the short blonde hair", and then it generates a correct prompt.
So whatever the issue is you are running into, it does not happen for me.
