Wukong
wukongai
AI & ML interests
None yet
Recent Activity
new activity about 3 hours ago
Qwen/Qwen3.6-35B-A3B: Congratulations!!!
new activity about 1 month ago
YuanLabAI/Yuan3.0-Flash: The test results look great! Where can I try out this model?
new activity 7 months ago
openai/gpt-oss-120b: Request: FP8 / BF16 version of model?
Organizations
None yet
Congratulations!!!
🔥🤝 5
1
#1 opened about 13 hours ago by cai2023
The test results look great! Where can I try out this model?
#3 opened about 1 month ago by wukongai
Request: FP8 / BF16 version of model?
👍 3
2
#53 opened 8 months ago by Epliz
Awesome! Please be sure to train an 80B A3B coder model for the next version!
🔥 11
#26 opened 7 months ago by wukongai
32B coding model, please
👍 9
3
#4 opened over 1 year ago by gopi87
DeepSeek-Coder-V2.5-Lite
➕🔥 18
13
#3 opened over 1 year ago by smcleod
We need Llama Athene 3.1 70B
5
#5 opened over 1 year ago by gopi87
Good work! llama.cpp support, please.
2
#3 opened almost 2 years ago by wukongai
Help, please: should deepseek2.experts_weight_scale=int:16 be deepseek2.expert_weights_scale=float:16?
4
#1 opened almost 2 years ago by wukongai
Perfect work! GGUF, please.
3
#1 opened almost 2 years ago by wukongai
Fine-tune for Qwen1.5
👍 8
2
#14 opened about 2 years ago by TNTOutburst
Gemma GGUF does not work!
3
#3 opened about 2 years ago by cohesionet
Thank you very much for your excellent work!
6
#1 opened about 2 years ago by wukongai
Q6 and Q8 must be better!
5
#1 opened about 2 years ago by wukongai
GGUF weights?
👍 1
5
#2 opened about 2 years ago by wukongai
Prompt format
7
#2 opened over 2 years ago by KnutJaegersberg
GGUF?
👍 4
7
#1 opened over 2 years ago by ugriffo
[Bug Report] <0x0A> is output instead of a newline
6
#1 opened over 2 years ago by Maxxim69
Help, please: running with llama.cpp fails to load the model (llama_init_from_gpt_params).
3
#2 opened over 2 years ago by wukongai
Help, please: running with llama.cpp fails with "unexpectedly reached end of file".
2
#2 opened over 2 years ago by wukongai