Active filters: dllm
inclusionAI/LLaDA2.0-flash
Text Generation • 103B • Updated • 910 downloads • 68 likes
mlx-community/LLaDA2.0-mini-4bit
Text Generation • 16B • Updated • 42 downloads • 1 like
mlx-community/LLaDA2.0-flash-4bit
Text Generation • 103B • Updated • 38 downloads • 3 likes
mlx-community/LLaDA2.0-mini-6bit
Text Generation • 16B • Updated • 9 downloads
mlx-community/LLaDA2.0-mini-8bit
Text Generation • 16B • Updated • 14 downloads
mlx-community/LLaDA2.0-flash-6bit
Text Generation • 103B • Updated • 15 downloads
mlx-community/LLaDA2.0-flash-8bit
Text Generation • 103B • Updated • 25 downloads • 1 like
dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1
Text Generation • 0.8B • Updated • 3.21k downloads • 10 likes
TevunahAi/Dream-v0-Instruct-7B-FP8
8B • Updated • 8 downloads
TevunahAi/Dream-Coder-v0-Instruct-7B-FP8
8B • Updated • 9 downloads
inclusionAI/LLaDA2.0-flash-CAP
Text Generation • Updated • 39 downloads • 10 likes
inclusionAI/LLaDA2.0-mini-CAP
Text Generation • 16B • Updated • 5.19k downloads • 10 likes
(model name missing)
Text Generation • 8B • Updated • 434 downloads • 14 likes
darwinkernelpanic/DiffReaper-3
Text Generation • Updated
inclusionAI/LLaDA2.1-flash
Text Generation • 103B • Updated • 223k downloads • 82 likes
servantofares/LLaDA2.1-flash
Text Generation • 103B • Updated • 12 downloads
(model name missing)
Text Generation • Updated • 182 downloads
Akicou/LLaDA2.1-mini-256k-dynamic-ntk
16B • Updated • 56 downloads
(model name missing)
Text Generation • 0.8B • Updated • 586 downloads
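The listing above is the Hub's search results for the `dllm` tag; the same query can be reproduced programmatically. A minimal sketch, assuming the `huggingface_hub` package is installed (the function name `dllm_model_ids` is my own, not part of any library):

```python
# Minimal sketch: fetch repo ids of Hub models carrying the "dllm" tag,
# sorted by download count, mirroring the filtered listing above.
# Assumes the `huggingface_hub` package; requires network access when called.
from huggingface_hub import HfApi


def dllm_model_ids(limit: int = 20) -> list[str]:
    """Return repo ids of models tagged `dllm`, most-downloaded first."""
    api = HfApi()
    models = api.list_models(filter="dllm", sort="downloads", limit=limit)
    return [m.id for m in models]
```

Calling `dllm_model_ids(5)` would return the five most-downloaded `dllm`-tagged repos at query time; counts like those above change continuously, so results will drift from this snapshot.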