ghostplant
AI & ML interests
None yet
Recent Activity
new activity about 17 hours ago
lukealonso/GLM-5.1-NVFP4: Tested Docker working for Ampere/Hopper/Blackwell/MI300
new activity 2 days ago
zai-org/GLM-5: Greater than Opus, GLM-5.x is amazing! We've started supporting this model on A100/MI300.
liked a model 2 days ago
zai-org/GLM-5
Organizations
None yet
Tested Docker working for Ampere/Hopper/Blackwell/MI300
#7 opened about 17 hours ago
by
ghostplant
Greater than Opus, GLM-5.x is amazing! We've started supporting this model on A100/MI300.
1
#74 opened 2 days ago
by
ghostplant
Expecting New NVFP4 for GLM-5.1 ASAP, Thanks!
4
#6 opened 4 days ago
by
ghostplant
Install & run moonshotai/Kimi-K2-Thinking easily using llmpm
1
#58 opened about 1 month ago
by
sarthak-saxena
Please support nvidia/GLM-5-NVFP4
#2 opened 18 days ago
by
ghostplant
For Pre-Blackwell Architectures (A100/H100)
#4 opened about 1 month ago
by
ghostplant
DSA Question
1
#33 opened 4 months ago
by
ghostplant
For those who need a simplified execution on NVIDIA GPU
🔥 1
#21 opened 4 months ago
by
ghostplant
Question about long-context evaluation in DeepSeek-V3.2-Exp
1
#15 opened 7 months ago
by
fcMpKYz6Avp5QK
Can gpt-oss support local vLLM deployment on an A100 GPU?
10
#73 opened 8 months ago
by
Cola-any
Running gpt-oss Without FlashAttention 3 – Any Alternatives to Ollama?
3
#72 opened 8 months ago
by
shinho0902
Run GPT-OSS-120B with just a single A100 (80GB)
2
#80 opened 8 months ago
by
ghostplant
How is Qwen3's inv_freq computed from scratch?
#13 opened 9 months ago
by
ghostplant
Run a 1T-param model on A100/H100 (80G) x8 using FP4
🚀🔥 5
7
#9 opened 9 months ago
by
ghostplant
Quantize deepseek-r1-0528, please
👍 2
3
#14 opened 11 months ago
by
aabbccddwasd
Just deployed the full DeepSeek R1 0528 version; has inference performance really improved this much? Isn't the architecture unchanged?
12
#75 opened 11 months ago
by
jakyer
How to run the 0528 version on GPUs that don't support FP8
4
#64 opened 11 months ago
by
Micdiane
What output does everyone get for this question?
6
#49 opened 11 months ago
by
ghostplant
Does R1 support long context (> 4K)?
#172 opened about 1 year ago
by
ghostplant