Is it compatible with FLUX.2-klein-9b-kv?
Incompatible
The owner says it's incompatible, but somehow it works fine for me.
I tested KV when it first came out, and it was indeed incompatible, which is strange. Could it be that the initial workflow was incompatible?
I did some rather extensive testing of this LoRa when the KV models were released. Here's what I found:
The LoRa works perfectly when using the Flux.2 KV model as a regular model (without the Flux KV Cache node). It improves the KV model just as much as it does with the previous non-KV Flux.2 Klein 9b and base variants.
Important Context About My Testing Environment:
I need to share some details that definitely biased my testing quite a bit:
The Problem:
After updating ComfyUI to version 0.17.0 (which added KV cache support), I couldn't get ANY Flux model or Flux KV Cache node to work. Everything failed at the KSampler stage with errors around timestep_zero_index.
What I Discovered:
Some custom nodes I have installed are interfering with model patching—the mechanism the entire KV cache system relies on. The ComfyUI team's implementation is quite rigid (to put it politely), and the way they coded it breaks compatibility with a LOT of custom nodes that worked perfectly before. Annoying, but that's the way they do it these days.
My Workaround:
Instead of spending hours disabling and re-enabling 10-15 suspect custom nodes, I used Qwen 3.5 to help me "vibe code" a fix directly in \ComfyUI\comfy\ldm\flux\model.py. The modification adds kwargs that skip the timestep_zero_index validation when it would fail, allowing execution to continue even when Flux models or nodes don't provide timestep information downstream.
I had to repeat this for version 0.18.2 (with its new dynamic VRAM features), and I'll likely need to do it again with future updates. Not ideal—but it works for now. Hopefully, either the ComfyUI team makes their code less rigid or custom node authors achieve full compatibility soon.
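To make the idea concrete, here is a minimal sketch of that kind of guard. This is NOT the actual ComfyUI code: the function name, the skip_timestep_validation kwarg, and the surrounding logic are all hypothetical illustrations of "skip the validation when the index is missing, instead of raising at the KSampler stage."

```python
# Hypothetical sketch of the workaround described above. Not real
# ComfyUI code: the function, kwarg name, and validation logic are
# illustrative assumptions only.

def resolve_timestep_zero_index(transformer_options, **kwargs):
    """Return the timestep index if provided; otherwise skip the check.

    Mimics pre-0.17.0 behaviour: when upstream nodes don't supply
    timestep information, execution continues instead of failing.
    """
    idx = transformer_options.get("timestep_zero_index")
    if idx is not None:
        return idx
    # The patched behaviour: a missing index becomes a no-op.
    if kwargs.get("skip_timestep_validation", True):
        return None
    # The strict (post-0.17.0-style) behaviour: hard failure.
    raise ValueError("timestep_zero_index missing from transformer_options")
```

In a real patch the kwarg would be threaded through the model's forward call; the point here is only that a missing index turns into a no-op rather than a hard failure.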
A potentially useful speculation on why this might matter to the LoRa's author for future LoRa training:
My modifications to the \ComfyUI\comfy\ldm\flux\model.py effectively make my ComfyUI environment behave like a pre-0.17.0 version regarding timestep_zero_index handling—essentially skipping that validation entirely.
This could explain why I (and other people on older versions) see results with the KV model and this LoRa, while updated users — including the LoRa's author — do not.
My hypothesis:
The KV cache node or caching process itself might make LoRas like this one incompatible without additional workflow tweaks—or perhaps even specialized LoRa retraining. The new caching system changes how reference image knowledge/understanding/consistency is cached and locked in, potentially leaving no "wiggle room" for the LoRa to function properly.
This is just speculation, but it's worth investigating.
Bottom Line:
The Consistency LoRa definitely works when:
Using the KV model WITHOUT the Flux KV Cache node
In an environment without timestep_zero_index requirements (pre-0.17.0 or modified)
I can confirm it performs as expected—essentially fixing the "stupid Klein" by almost turning it into a free equivalent of NanoBanana Pro.
Big thanks to dx8152 for yet another great job!
The analysis is excellent. I also found many issues with ComfyUI. I've recently been working on LTX issues, and it runs much faster locally than ComfyUI. As for KV, I don't see much of a substantial improvement; I'll stick with 9B.
Imo, there are some improvements in the KV model over the original one, most noticeably visual: it tends to produce somewhat more natural-looking, less oversaturated, less oversharpened results, and, probably more importantly, its outputs have a lot less of the baked-in clarity the regular Flux2 Klein varieties suffer from.
Then again, most of the clarity/oversharpness issues of the original can be fixed using either of these custom nodes: https://github.com/lrzjason/Comfyui-LatentUtils or https://github.com/facok/ComfyUI-LCS. I haven't used the second one much yet, but LatentUtils is pure magic: it mellows F2K output down to a very realistic, non-overbaked look and eliminates that auric pre-migraine feel (people who suffer from it see it right away) or the drug-induced hyperawareness look regular F2K often produces.
So the nodes above basically make the pre-KV and post-KV models roughly equal in this regard, which might make the KV variant feel redundant—until you run more tests and realize that KV is also a bit smarter than regular F2K. Not by much, but still noticeably so, and in some cases quite noticeably. So I will definitely use it too, although I wish BFL would release a Base version with that same improved training as well.
As for the ComfyUI experience itself recently... it's a damned mess, at least for the last couple of months. But even though it's greatly annoying, it's still somewhat understandable given the amount of new features being implemented, plus all the wacky, hacky code and node-injection methods a lot of custom nodes used in the past. The fact that some of them (or their core components) haven't been updated in years certainly doesn't make the whole situation any better.