
Architecture 'qwen35' is not supported

#1
by Nerlight - opened

@Nerlight Looks like you're using an ancient version of LM Studio. Download the latest (v0.4.6, as of 05/03/2026) from https://lmstudio.ai/download

@YorkieOH10 I wish it were that simple, but it's not. The architecture name is still highlighted in white, indicating it's unrecognized. Same goes for the new Ministral series. Is waiting for a patch that adds support the only option now?

LM Studio Community org

@Nerlight both architectures are already supported in the latest stable builds.

As Yorkie said, first make sure you have the latest app version downloaded (v0.4.6, as of writing). Then, go to the Runtime Panel (Cmd/Ctrl + Shift + R) and make sure you have the latest Llama.cpp engine downloaded and selected for GGUF models (v2.5.1, as of writing).
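For context on what the error actually checks: the "Architecture ... is not supported" message comes from the `general.architecture` string stored in the GGUF file's metadata, which the loaded llama.cpp engine must recognize. The sketch below is a minimal illustration of the GGUF v3 key-value layout for reading that string yourself; it is not LM Studio's code, it only handles string-typed values, and `make_header` is a hypothetical helper that fabricates a tiny header purely for demonstration.

```python
import struct

GGUF_MAGIC = b"GGUF"
GGUF_TYPE_STRING = 8  # GGUF value-type id for strings

def read_architecture(blob: bytes) -> str:
    """Scan a GGUF header blob for the general.architecture metadata key.

    Minimal sketch: assumes little-endian GGUF v3 and that every metadata
    value encountered before the key is a string (real files mix types).
    """
    if blob[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    off = 4
    (version,) = struct.unpack_from("<I", blob, off); off += 4
    tensor_count, kv_count = struct.unpack_from("<QQ", blob, off); off += 16
    for _ in range(kv_count):
        (klen,) = struct.unpack_from("<Q", blob, off); off += 8
        key = blob[off:off + klen].decode(); off += klen
        (vtype,) = struct.unpack_from("<I", blob, off); off += 4
        if vtype != GGUF_TYPE_STRING:
            raise ValueError("sketch only handles string-typed values")
        (vlen,) = struct.unpack_from("<Q", blob, off); off += 8
        val = blob[off:off + vlen].decode(); off += vlen
        if key == "general.architecture":
            return val
    raise KeyError("general.architecture not found")

def make_header(arch: str) -> bytes:
    """Hypothetical demo helper: build a minimal GGUF v3 header
    containing a single string key-value pair."""
    key = b"general.architecture"
    val = arch.encode()
    return (GGUF_MAGIC
            + struct.pack("<IQQ", 3, 0, 1)          # version, tensors, kv count
            + struct.pack("<Q", len(key)) + key      # key
            + struct.pack("<I", GGUF_TYPE_STRING)    # value type
            + struct.pack("<Q", len(val)) + val)     # value

print(read_architecture(make_header("qwen3")))  # → qwen3
```

If the string a real file declares is something the installed engine predates (e.g. a newer model family), no app-side setting will help until the runtime is updated to a build that knows that architecture name.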

@will-lms I know the importance of keeping things updated; that solution would be too easy. Well, it seems to be a local problem specific to my client.

LM Studio Community org

Thank you for following up. Let's consolidate this discussion in this GitHub issue. I'll close this one out.

will-lms changed discussion status to closed
