noctrex
AI & ML interests
None yet
Recent Activity
updated a model 3 days ago
noctrex/Huihui-gemma-4-26B-A4B-it-abliterated-MXFP4_MOE-GGUF
published a model 3 days ago
noctrex/Huihui-gemma-4-26B-A4B-it-abliterated-MXFP4_MOE-GGUF
Organizations
None yet
MXFP4 of huihui-ai/Huihui-gemma-4-26B-A4B-it-abliterated
👍 1
4
#2 opened 3 days ago by ampersandru
Is the uploaded model MXFP4/Q8 or BF16?
2
#1 opened 3 days ago by ampersandru
Incorrect output during inference
1
#1 opened 11 days ago by Ike
Kind request for Qwen3.5-397B-A17B MXFP4 BF16
7
#2 opened about 2 months ago by dehnhaide
Comparing with Official GPTQ-Int4 quantized model?
1
#6 opened 15 days ago by haili-tian
Poor performance and pretty lobotomized
2
#1 opened 27 days ago by mancub
MXFP4 vs other 4-bit quant algos?
2
#3 opened about 1 month ago by dinerburger
New activity in noctrex/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled-MXFP4_MOE-GGUF about 1 month ago
can you please do an f16 version
1
#1 opened about 1 month ago by Shuasimodo
It would be neat to see a Heretical version of this.
1
#1 opened about 2 months ago by SabinStargem
AI Model Evaluation Report: MiroThinker-1.7-Mini (GGUF/Ollama)
1
#1 opened about 1 month ago by phanthai12
Would it make sense to get Qwen3-VL MXFP4 quants?
20
#2 opened 3 months ago by ampersandru
command to create GGUF MXFP4 mixed with BF16
1
#5 opened about 1 month ago by ghit72
It's really good.
👍 1
26
#3 opened about 2 months ago by Shuasimodo
Model performance
2
#1 opened about 2 months ago by spanspek
Increasing the precision of some of the weights when quantizing
👍 4
57
#2 opened about 2 months ago by Shuasimodo
Is there some helpful regex to offload all MoE layers to the CPU?
4
#7 opened about 2 months ago by hdnh2006
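For the thread above: a common approach with llama.cpp (an assumption here, not an answer quoted from the thread itself) is the `--override-tensor` / `-ot` flag with a regex that matches the `*_exps` expert tensors, keeping MoE experts in system RAM while everything else goes to the GPU. A minimal sketch of how such a regex selects GGUF tensor names (the tensor names below are illustrative, not dumped from a real model):

```python
import re

# Regex commonly passed to llama.cpp as:
#   --override-tensor "\.ffn_.*_exps\.=CPU"
# It matches the MoE expert feed-forward tensors only.
MOE_EXPERT_RE = re.compile(r"\.ffn_.*_exps\.")

# Illustrative GGUF-style tensor names for one transformer block.
tensors = [
    "blk.0.attn_q.weight",
    "blk.0.ffn_gate_exps.weight",
    "blk.0.ffn_down_exps.weight",
    "blk.0.ffn_up_exps.weight",
    "blk.0.ffn_norm.weight",
]

# Only the three *_exps expert tensors match and would stay on CPU.
offloaded = [t for t in tensors if MOE_EXPERT_RE.search(t)]
print(offloaded)
```

The attention and norm tensors deliberately fall through the regex, so they remain eligible for GPU offload.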
BF16 version?
1
#1 opened about 2 months ago by Kackliqur
"Use this model" wrong tag by default.
👍 2
1
#2 opened about 2 months ago by jorj2
Qwen3.5-27B?
1
#4 opened about 2 months ago by wzgrx