Aaron Newsome
aaron-newsome
AI & ML interests
None yet
Recent Activity
new activity 4 days ago: lukealonso/MiniMax-M2.7-NVFP4 · Thanks, thanks and more thanks. Many thanks.
new activity 4 days ago: lukealonso/MiniMax-M2.7-NVFP4 · Working configuration for Nvidia Blackwell
liked a model 14 days ago: google/gemma-4-31B-it

Organizations
None yet
Thanks, thanks and more thanks. Many thanks.
13 · #2 opened 5 days ago by aaron-newsome
Working configuration for Nvidia Blackwell
7 · #4 opened 5 days ago by luismiguelsaez
very fast!!!
🤗❤️ 1 · 5 · #2 opened about 2 months ago by rosspanda0
Core dumped for me,
7 · #3 opened 2 months ago by aaron-newsome
How to enable vision encoder?
1 · #10 opened about 2 months ago by stefan28123
Q3_K_XL works surprisingly fast for 3x3090 + 128 ram
🔥 8 · 7 · #4 opened 2 months ago by fizzacles
Chat template issues with newer llama.cpp?
#9 opened 2 months ago by aaron-newsome
Question : Real-world use cases for Step-3.5-Flash
8 · #24 opened 2 months ago by Geodd
Hot Damn This Model Cooks!
👍 6 · 12 · #5 opened 4 months ago by aaron-newsome
Jan 21: All GLM-4.7-Flash quants reuploaded - much better outputs!
🔥❤️ 7 · 29 · #10 opened 3 months ago by danielhanchen
This model is slow and ugly.
➕ 2 · 3 · #14 opened 3 months ago by sccssc
IQuestLab is more like IFakeEvals...
🚀🔥 3 · 2 · #5 opened 4 months ago by coolpoodle
Never mind the benchmarks, MiniMax M2.1 outshines GLM 4.7
🤝 2 · 4 · #11 opened 3 months ago by aaron-newsome
Report: getting 20 t/s with UD-Q4_K_XL and 72 VRAM
🔥 1 · 10 · #2 opened 4 months ago by SlavikF
UD-Q5_K_XL seemingly broken
6 · #2 opened 4 months ago by Nimbz
mmproj?
10 · #1 opened 4 months ago by aaron-newsome
Can't get MiniMax-M2-UD-Q8_K_XL started with llama-cli (llama.cpp)
1 · #8 opened 4 months ago by alexmv2025
Should it run in 24GB VRAM?
👍 1 · 2 · #2 opened 5 months ago by dkackman
thinking disables tools
5 · #6 opened 5 months ago by ktsaou
Q4_K_XL seems corrupted
3 · #3 opened 6 months ago by aaron-newsome