Eval Requests: HauhauCS's Qwen3.5 27B, 35B and GPT OSS 20B models

#592
by eXezon - opened

Qwen3.5
HauhauCS/Qwen3.5-27B-Uncensored-HauhauCS-Aggressive
HauhauCS/Qwen3.5-35B-A3B-Uncensored-HauhauCS-Aggressive

GPT-OSS
HauhauCS/GPT-OSS-20B-Uncensored-HauhauCS-Balanced

The author claims these use a custom methodology to remove refusals. Since only the base Qwen3.5 models are currently on the leaderboard, it would be interesting to see how they perform, especially whether their NatInt improves because fewer safeguards make the reasoning less chaotic and less prone to overthinking.

I am also requesting the GPT-OSS 20B model so HauhauCS's version can be compared against the others.

eXezon changed discussion title from Eval Request: HauhauCS's Qwen3.5 27B, 35B and GPT OSS 20B models to Eval Requests: HauhauCS's Qwen3.5 27B, 35B and GPT OSS 20B models

I currently get vllm/transformers errors like "ValueError: GGUF model with architecture qwen35 is not supported yet." Since HauhauCS only uploads GGUFs, I'll have to wait to test those.
I'm also getting an error with the other HauhauCS gpt-oss GGUF... I might need to change my GGUF code to use llama.cpp if I want to test GGUFs reliably.
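Since the failure above comes from the loader rejecting the GGUF file's declared architecture, one cheap pre-check is to read `general.architecture` out of the GGUF header before handing the file to vllm/transformers at all. A minimal stdlib-only sketch (the `SUPPORTED` set and the synthetic in-memory header are illustrative assumptions, not any loader's real support list):

```python
import struct
from io import BytesIO

GGUF_MAGIC = b"GGUF"
GGUF_TYPE_STRING = 8  # GGUF metadata value type for UTF-8 strings

def read_string(f):
    # GGUF strings: uint64 little-endian length, then UTF-8 bytes
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n).decode("utf-8")

def gguf_architecture(f):
    """Return `general.architecture` from a GGUF stream.

    Minimal parser: it assumes the key appears among the metadata KVs
    and that every value before it is a string (true for typical
    llama.cpp exports, where it is the first key). Raises otherwise.
    """
    if f.read(4) != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version, tensor_count, kv_count = struct.unpack("<IQQ", f.read(20))
    for _ in range(kv_count):
        key = read_string(f)
        (vtype,) = struct.unpack("<I", f.read(4))
        if vtype != GGUF_TYPE_STRING:
            raise ValueError(f"unhandled value type {vtype} for key {key!r}")
        value = read_string(f)
        if key == "general.architecture":
            return value
    raise ValueError("general.architecture not found")

def make_header(arch):
    # Build a tiny synthetic GGUF header (magic, version 3, 0 tensors,
    # 1 metadata KV) purely so the sketch is runnable without a model file.
    def s(text):
        b = text.encode("utf-8")
        return struct.pack("<Q", len(b)) + b
    kv = s("general.architecture") + struct.pack("<I", GGUF_TYPE_STRING) + s(arch)
    return GGUF_MAGIC + struct.pack("<IQQ", 3, 0, 1) + kv

SUPPORTED = {"llama", "qwen2"}  # illustrative only; check your loader's real list

arch = gguf_architecture(BytesIO(make_header("qwen35")))
print(arch, arch in SUPPORTED)  # an unsupported arch can be skipped up front
```

Against a real file you would pass `open(path, "rb")` instead of the synthetic header; this only decides whether to attempt the load, it doesn't make an unsupported architecture loadable.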

Honestly, it's not entirely fair to avoid llama.cpp, as 99% of users with low VRAM use it instead of vLLM. That's the same reason they need GGUFs in the first place.

These models seem to be worth the effort.
