Keys missing from connectors?
My AI slop analysis suggests there are keys in the distilled GGUF that should instead live in the connectors file.
I'm vibe coding LTX-Desktop to run on Linux with GGUFs and aggressive RAM offloading, so the conclusion is specific to that setup.
AI slop follows:
I audited the text files containing the keys, and they reveal the root cause of the problem: the ltx-2.3-22b-distilled_embeddings_connectors.safetensors file you downloaded (or generated) is incomplete.
If you look at models/ltx-2.3-22b-distilled_embeddings_connectors.safetensors.keys.txt, it contains exactly 4 tensors:
```
Number of tensors: 4
------------------------------
text_embedding_projection.audio_aggregate_embed.bias
text_embedding_projection.audio_aggregate_embed.weight
text_embedding_projection.video_aggregate_embed.bias
text_embedding_projection.video_aggregate_embed.weight
```
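If you want to reproduce these key dumps yourself, the safetensors header can be read with nothing but the stdlib: the file starts with an 8-byte little-endian header length, followed by a JSON header mapping tensor names to their metadata. A minimal sketch (the function name is mine):

```python
import json
import struct

def safetensors_keys(path):
    """List tensor names in a .safetensors file without loading any tensor data."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian u64 giving the JSON header length.
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header.
    return sorted(k for k in header if k != "__metadata__")
```

Running this over the connectors file should print exactly the 4 `text_embedding_projection.*` keys above.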
But LTX-2.3 uses a 1D Transformer for its embeddings connector, and those 1D transformer weights are completely missing from your connectors file!
Where are they? If you look at models/ltx-2.3-22b-distilled-Q6_K.gguf.keys.txt, you will see them trapped inside the main diffusion model:
```
audio_embeddings_connector.learnable_registers
audio_embeddings_connector.transformer_1d_blocks.0.attn1.k_norm.weight
...
video_embeddings_connector.learnable_registers
video_embeddings_connector.transformer_1d_blocks.0.attn1.k_norm.weight
...
```
Whoever created the standalone connectors.safetensors file forgot to extract the audio_embeddings_connector and video_embeddings_connector keys, leaving them behind in the main GGUF. Because they are missing, your text encoder builds an empty transformer, initializes it with NaNs, and poisons the video.
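The poisoning mechanism is worth spelling out: once a single weight is NaN, every dot product it participates in becomes NaN, and that propagates through the rest of the pipeline. A toy illustration:

```python
import math

# A single NaN weight (e.g. from an uninitialized transformer block)
# contaminates any dot product it participates in.
weights = [0.5, float("nan"), 0.25]
activations = [1.0, 2.0, 3.0]

out = sum(w * a for w, a in zip(weights, activations))
print(math.isnan(out))  # True: one bad weight poisons the whole output
```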
The Elegant Fix
Instead of hacking your backend to double-load a 16GB file, we should fix the broken .safetensors file.
Create a new file in your project root called fix_connectors.py and run it. This script uses your existing GGUFStateDictLoader to reach into the GGUF, pull out the trapped connector weights, and combine them with your 4 projection weights into a perfect, complete .safetensors file.
```python
import torch
import sys
from pathlib import Path

# Add backend and vendor to path so we can use your loader
sys.path.insert(0, str(Path("backend").resolve()))
sys.path.insert(0, str(Path("backend/vendor").resolve()))

from safetensors.torch import load_file, save_file
from ltx_core.loader.gguf_loader import GGUFStateDictLoader


def fix_connectors():
    models_dir = Path("models")
    broken_connectors_path = models_dir / "ltx-2.3-22b-distilled_embeddings_connectors.safetensors"
    gguf_path = models_dir / "ltx-2.3-22b-distilled-Q6_K.gguf"
    fixed_connectors_path = models_dir / "ltx-2.3-22b-distilled_embeddings_connectors_fixed.safetensors"

    print("1. Loading the 4 projection keys from the broken connectors file...")
    conn_sd = load_file(broken_connectors_path)

    print("2. Dequantizing the missing 1D Transformer blocks from the main GGUF...")
    # aggressive_vram_offload=False forces it to dequantize straight to RAM
    loader = GGUFStateDictLoader(target_dtype=torch.bfloat16, aggressive_vram_offload=False)
    gguf_sd = loader.load(str(gguf_path)).sd

    print("3. Merging the missing keys...")
    missing_count = 0
    for k, v in gguf_sd.items():
        if k.startswith("video_embeddings_connector.") or k.startswith("audio_embeddings_connector."):
            # We move them to CPU just to be safe before saving
            conn_sd[k] = v.cpu().contiguous()
            missing_count += 1
    print(f"Extracted {missing_count} missing keys.")

    print(f"4. Saving fixed file to {fixed_connectors_path.name}...")
    save_file(conn_sd, fixed_connectors_path)
    print("Done! You can now replace the old connectors file with this new one.")


if __name__ == "__main__":
    fix_connectors()
```
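After the script runs, it's worth sanity-checking the merged key set before swapping files. A hedged sketch of that check (pure key-prefix bookkeeping; the expected prefixes come straight from the listings above, and the function name is mine):

```python
# The three key groups a complete connectors file should contain,
# per the .keys.txt listings above.
REQUIRED_PREFIXES = (
    "text_embedding_projection.",
    "video_embeddings_connector.",
    "audio_embeddings_connector.",
)

def missing_prefix_groups(keys):
    """Return the required prefixes that have no matching key in `keys`."""
    return [p for p in REQUIRED_PREFIXES
            if not any(k.startswith(p) for k in keys)]
```

For example, read the fixed file's key list (via `safetensors.safe_open` or the header trick) and assert `missing_prefix_groups(keys) == []` before replacing the original file.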
There is a sample workflow embedded in this mp4: https://huggingface.co/unsloth/LTX-2.3-GGUF/blob/main/unsloth_flowers.mp4
I would start with a fresh Python venv and install ComfyUI and the custom nodes:

```shell
python3 -m venv .diffusion
source .diffusion/bin/activate
git clone https://github.com/Comfy-Org/ComfyUI.git
cd ComfyUI
pip install -r requirements.txt
pip install huggingface_hub
cd custom_nodes/
git clone https://github.com/city96/ComfyUI-GGUF.git
cd ComfyUI-GGUF/
pip install -r requirements.txt
cd ..
git clone https://github.com/kijai/ComfyUI-KJNodes.git
cd ComfyUI-KJNodes/
pip install -r requirements.txt
cd ../../models
```
Download the models once, in the models dir. `hf download` prints the cached file path, which gets symlinked into place:

```shell
ln -s "$(hf download unsloth/LTX-2.3-GGUF ltx-2.3-22b-dev-Q4_K_M.gguf --quiet)" unet/.
ln -s "$(hf download unsloth/LTX-2.3-GGUF vae/ltx-2.3-22b-dev_video_vae.safetensors --quiet)" vae/.
ln -s "$(hf download unsloth/LTX-2.3-GGUF vae/ltx-2.3-22b-dev_audio_vae.safetensors --quiet)" vae/.
ln -s "$(hf download unsloth/LTX-2.3-GGUF text_encoders/ltx-2.3-22b-dev_embeddings_connectors.safetensors --quiet)" text_encoders/.
ln -s "$(hf download Lightricks/LTX-2.3 ltx-2.3-22b-distilled-lora-384.safetensors --quiet)" loras/.
ln -s "$(hf download Lightricks/LTX-2.3 ltx-2.3-spatial-upscaler-x2-1.0.safetensors --quiet)" latent_upscale_models/.
ln -s "$(hf download unsloth/gemma-3-12b-it-qat-GGUF gemma-3-12b-it-qat-UD-Q4_K_XL.gguf --quiet)" text_encoders/.
ln -s "$(hf download unsloth/gemma-3-12b-it-qat-GGUF mmproj-BF16.gguf --quiet)" text_encoders/.
```
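One caveat with the `ln -s "$(...)"` pattern: if a download fails or prints nothing, you silently get a dangling symlink. A quick check that every link resolves, run from the models dir (the directory list matches the commands above):

```shell
# A dangling symlink means the corresponding download failed
# (or printed an empty path); [ -e ] follows the link.
for link in unet/* vae/* text_encoders/* loras/* latent_upscale_models/*; do
  [ -e "$link" ] || echo "BROKEN: $link"
done
```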
Run Comfy:

```shell
cd ..
python main.py
```
Then open the mp4 and the workflow will load. Try running that if possible.