KeyError: 'single_blocks.20.linear1.lora_A.weight'
Thanks for your work. However, I encountered a problem when using this weight. Could you kindly help me look into this issue? Any guidance would be greatly appreciated!
I'm trying to load the LoRA weights on my computer, but I get a KeyError: 'single_blocks.20.linear1.lora_A.weight'. It seems the LoRA state dict contains keys that aren't handled by the current _convert_non_diffusers_flux2_lora_to_diffusers utility. The model loads fine otherwise, but LoRA loading fails. How can this problem be solved?
Keyword arguments {'dtype': torch.bfloat16} are not expected by Flux2KleinPipeline and will be ignored.
Loading weights: 100%|█████████████████████████████████████████████████████████████████████████████| 398/398 [00:02<00:00, 144.05it/s, Materializing param=model.norm.weight]
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:11<00:00, 2.32s/it]
Traceback (most recent call last):
File "/home2/zhanght/Gen/flux-lora.py", line 8, in <module>
pipe.load_lora_weights("flux2_4b_koni_animestyle")
File "/home/zhanght/miniconda3/envs/diffusers/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 5508, in load_lora_weights
state_dict, metadata = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhanght/miniconda3/envs/diffusers/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/zhanght/miniconda3/envs/diffusers/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py", line 5477, in lora_state_dict
state_dict = _convert_non_diffusers_flux2_lora_to_diffusers(state_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/zhanght/miniconda3/envs/diffusers/lib/python3.12/site-packages/diffusers/loaders/lora_conversion_utils.py", line 2334, in _convert_non_diffusers_flux2_lora_to_diffusers
converted_state_dict[f"{attn_prefix}.to_qkv_mlp_proj.{lora_key}.weight"] = original_state_dict.pop(
^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'single_blocks.20.linear1.lora_A.weight'
My virtual environment setup is as follows:
Package Version
accelerate 1.12.0
anyio 4.12.1
brotlicffi 1.0.9.2
certifi 2026.1.4
cffi 1.17.1
charset-normalizer 3.4.4
click 8.3.1
diffusers 0.37.0.dev0
filelock 3.20.0
fsspec 2026.1.0
gmpy2 2.2.1
h11 0.16.0
hf-xet 1.2.0
httpcore 1.0.9
httpx 0.28.1
huggingface_hub 1.3.1
idna 3.11
importlib_metadata 8.7.1
Jinja2 3.1.6
MarkupSafe 3.0.2
mkl_fft 1.3.11
mkl_random 1.2.8
mkl-service 2.4.0
modelscope 1.33.0
mpmath 1.3.0
networkx 3.6.1
numpy 2.2.6
opencv-python 4.12.0.88
packaging 25.0
peft 0.18.1
pillow 12.1.0
pip 25.3
psutil 7.2.1
pycparser 2.23
PySocks 1.7.1
PyYAML 6.0.3
regex 2025.11.3
requests 2.32.5
safetensors 0.7.0
sdnq 0.1.3
sentencepiece 0.2.1
setuptools 80.9.0
shellingham 1.5.4
sympy 1.13.1
tokenizers 0.22.2
torch 2.5.1
torchaudio 2.5.1
torchvision 0.20.1
tqdm 4.67.1
transformers 5.0.0.dev0
triton 3.1.0
typer-slim 0.21.1
typing_extensions 4.15.0
urllib3 2.5.0
wheel 0.45.1
zipp 3.23.0
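Before converting anything, it may help to check which per-parameter suffixes the LoRA file actually contains, since the diffusers converter pops keys ending in lora_A.weight / lora_B.weight and raises KeyError when they are absent. A minimal sketch (the file name inside the checkpoint folder is a placeholder):

```python
def list_key_suffixes(keys):
    """Collect the trailing two components of each state-dict key
    (e.g. "lora_down.weight"), so they can be compared against the
    lora_A.weight / lora_B.weight naming the converter expects."""
    suffixes = set()
    for key in keys:
        suffixes.add(".".join(key.split(".")[-2:]))
    return sorted(suffixes)

# Usage sketch (hypothetical file name):
# from safetensors import safe_open
# with safe_open("flux2_4b_koni_animestyle/lora.safetensors", framework="pt") as f:
#     print(list_key_suffixes(f.keys()))
```

If the output shows suffixes such as lora_down.weight / lora_up.weight instead of lora_A.weight / lora_B.weight, that mismatch would explain the KeyError above.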
My answer is for reference only:
I looked at your log, and I think the failure happens at the state_dict = _convert_non_diffusers_flux2_lora_to_diffusers(state_dict) call. This LoRA has only been tested in ComfyUI, but it looks like you are using diffusers. You could try converting the LoRA with the script that diffusers provides: diffusers/scripts/convert_lora_safetensor_to_diffusers.py.
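If the conversion script does not cover the Flux2 key layout, another option is to remap the key names in the raw state dict yourself and pass the dict directly to load_lora_weights (which also accepts a state dict). The mapping below is only a guess at common non-diffusers variants (lora_down/lora_up naming, a diffusion_model. wrapper prefix); verify your file's actual keys before relying on it:

```python
def normalize_lora_keys(state_dict):
    """Rename common non-diffusers LoRA key variants toward the
    lora_A/lora_B naming that diffusers' Flux2 converter expects.
    Illustrative only: check the real key layout of your file first."""
    renamed = {}
    for key, value in state_dict.items():
        new_key = key
        # Strip wrapper prefixes that some trainers/ComfyUI exports add.
        for prefix in ("diffusion_model.", "lora_unet_"):
            if new_key.startswith(prefix):
                new_key = new_key[len(prefix):]
        # Map the older down/up naming onto the PEFT-style A/B naming.
        new_key = new_key.replace(".lora_down.weight", ".lora_A.weight")
        new_key = new_key.replace(".lora_up.weight", ".lora_B.weight")
        renamed[new_key] = value
    return renamed

# Usage sketch (paths are placeholders):
# from safetensors.torch import load_file
# sd = load_file("flux2_4b_koni_animestyle/lora.safetensors")
# pipe.load_lora_weights(normalize_lora_keys(sd))
```

Whether this is enough depends on what the file really contains; if the keys differ in more than the suffix naming, the converter in lora_conversion_utils.py would need to handle that layout upstream.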