Has REAM been checked for its resilience to quantization?
#1
by TomLucidor - opened
Thanks for creating the model. I wonder whether REAM would hold up better than REAP under Q4 quantization.
We haven't checked that, but it's an interesting question. If you try it, we'd be curious to know the results!
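Not from the repo, but a minimal sketch of how such a comparison could be scored, assuming FP16 and Q4 perplexities are measured externally (e.g., with llama.cpp's `llama-perplexity` on GGUF exports of each checkpoint). All names and numbers below are illustrative placeholders, not measurements:

```python
# Minimal sketch: compare quantization resilience of two pruned checkpoints
# (e.g., a REAM-pruned vs. a REAP-pruned model) via relative perplexity
# degradation. The perplexity values would come from an external evaluation;
# the numbers below are placeholders purely for illustration.

def relative_degradation(ppl_fp16: float, ppl_q4: float) -> float:
    """Percent increase in perplexity after Q4 quantization (lower = more resilient)."""
    return 100.0 * (ppl_q4 - ppl_fp16) / ppl_fp16

def more_resilient(results: dict[str, tuple[float, float]]) -> str:
    """Return the model whose Q4 perplexity degrades least relative to FP16."""
    return min(results, key=lambda name: relative_degradation(*results[name]))

# Placeholder (FP16 ppl, Q4 ppl) pairs — not real measurements:
results = {
    "REAM-pruned": (6.10, 6.35),
    "REAP-pruned": (6.05, 6.50),
}
print(more_resilient(results))  # prints the name with the smaller relative increase
```

The point of using the relative (not absolute) perplexity increase is that the two pruned baselines start from different FP16 perplexities, so only the degradation ratio tells you which one is more robust to quantization.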
Shared these repos for others to test.
Will there be source code for how REAM is done, so that people can REAM Kimi-Linear, StepFun, Seed-OSS, both GPT-OSS models, DeepSeek, Qwen3, Nemotron-H, Ring/Ling/Ming-V2, the GLM-4.6/4.7 variants, MiniMax, and whatever Mistral has available?
Hi, we are considering releasing the code sometime soon since there is a lot of interest in the community. Stay tuned!
Bonus add: Qwen3.5 (specifically 27B and 35B-A3B) came up as well.