Discord server (pinned)
#13 opened 17 days ago by HauhauCS
Non-thinking mode in LM Studio
#15 opened 3 days ago by VeloPilot
Won't work with Ollama after the latest Ollama update on Windows and Arch Linux
#14 opened 16 days ago by anythingnow
IQ4_XS best quantized Qwen 3.5 27B for 16 GB VRAM
❤️ 2 · #12 opened 22 days ago by tuanzxcv
Any chance we get 122B?
2 replies · #8 opened about 1 month ago by 911ljt
How did you uncensor this model?
7 replies · #7 opened about 1 month ago by ghostwithahat
Amazing!
❤️ 2 · 2 replies · #6 opened about 1 month ago by Pink-Elephant
Thank you very much for your efforts. Can you support the MLX format, for use on Mac computers?
1 reply · #5 opened about 1 month ago by NTOrange
Flags for KoboldCpp to enable tool-calling?
#3 opened about 1 month ago by Braidwalkers
[Request] Availability of non-GGUF weights for vLLM/SGLang support
👀👍 8 · 5 replies · #2 opened about 2 months ago by Hyunwen
35B A3B?
2 replies · #1 opened about 2 months ago by E7Reine