Active filters: Distill
stepenZEN/DeepSeek-R1-Distill-Llama-70B-bitsandbytes-4bit • 72B • Updated • 2
prithivMLmods/QwQ-R1-Distill-1.5B-CoT • Text Generation • 2B • Updated • 11 • 4
mradermacher/QwQ-R1-Distill-1.5B-CoT-GGUF • 2B • Updated • 131 • 1
mradermacher/QwQ-R1-Distill-1.5B-CoT-i1-GGUF • 2B • Updated • 128
stepenZEN/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo • Text Generation • 2B • Updated • 6 • 3
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-GGUF • 2B • Updated • 219 • 5
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-i1-GGUF • 2B • Updated • 409
adriey/QwQ-R1-Distill-1.5B-CoT-Q8_0-GGUF • Text Generation • 2B • Updated • 4
RDson/LIMO-R1-Distill-Qwen-7B • 8B • Updated • 4
mradermacher/LIMO-R1-Distill-Qwen-7B-GGUF • 8B • Updated • 87
prithivMLmods/Delta-Pavonis-Qwen-14B • Text Generation • 15B • Updated • 8 • 3
mradermacher/Delta-Pavonis-Qwen-14B-GGUF • 15B • Updated • 17 • 1
mradermacher/Delta-Pavonis-Qwen-14B-i1-GGUF • 15B • Updated • 65 • 1
prithivMLmods/Octantis-QwenR1-1.5B-Q8_0-GGUF • Text Generation • 2B • Updated • 3
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic • Text Generation • 4B • Updated • 22 • 4
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-GGUF • 4B • Updated • 154 • 1
mradermacher/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-GGUF • 4B • Updated • 211 • 2
mradermacher/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-i1-GGUF • 4B • Updated • 256 • 1
ChiKoi7/GPT-5-Distill-llama3.2-3B-Instruct-Heretic • 3B • Updated • 7 • 1
ChiKoi7/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-GGUF • 3B • Updated • 41
mradermacher/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-GGUF • 3B • Updated • 91 • 1
mradermacher/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-i1-GGUF • 3B • Updated • 309 • 2
DavidAU/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL • Text Generation • 8B • Updated • 133 • 6
mradermacher/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-GGUF • 8B • Updated • 2.85k • 5
mradermacher/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-i1-GGUF • 8B • Updated • 320 • 1
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-2Bit • Text Generation • 0.7B • Updated • 96
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-3Bit • Text Generation • 1.0B • Updated • 66
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-4Bit • Text Generation • 1B • Updated • 89
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-5Bit • Text Generation • 1B • Updated • 66
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-6Bit • Text Generation • 8B • Updated • 50
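Hugging Face list views abbreviate large counts (e.g. the 2.85k shown above). A minimal sketch of a helper that converts those abbreviated strings back to integers, assuming only plain numbers and k/M suffixes appear (which holds for every count in this listing); the function name `parse_count` is illustrative, not part of any Hugging Face API:

```python
def parse_count(text: str) -> int:
    """Convert an abbreviated count string like '2.85k' or '131' to an int.

    Assumes only bare numbers and k (thousand) / M (million) suffixes,
    as seen in Hugging Face model-list views.
    """
    text = text.strip().lower()
    multipliers = {"k": 1_000, "m": 1_000_000}
    if text and text[-1] in multipliers:
        # round() guards against float artifacts (e.g. 2.85 * 1000 != 2850.0 exactly)
        return int(round(float(text[:-1]) * multipliers[text[-1]]))
    return int(text)

print(parse_count("2.85k"))  # → 2850
print(parse_count("409"))    # → 409
```

Normalizing the counts this way makes the entries sortable by download volume.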