Active filters: 8-bit
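Every model below matches the 8-bit filter, meaning its weights are stored at roughly 8 bits per parameter (GGUF Q8, GPTQ 8-bit, exl2 8.0bpw, or w8a16). As a minimal, library-agnostic sketch of the core idea behind 8-bit absmax weight quantization (pure Python, illustrative only — not any listed model's actual scheme):

```python
def quantize_absmax(weights):
    """Map floats to int8 range so the largest magnitude hits 127."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quants]

weights = [0.12, -0.5, 0.33, 1.27]
quants, scale = quantize_absmax(weights)   # each value fits in int8
restored = dequantize(quants, scale)       # close to the original weights
```

Real formats (GPTQ, exl2, GGUF's Q8_0) layer per-group scales and error-aware rounding on top of this basic round-trip.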
bullerwins/L3-Aethora-15B-V2-exl2_8.0bpw • Text Generation • Updated • 5 downloads • 1 like
Peaky8linders/isafpr-mistral-lora
MaziyarPanahi/calme-2.1-qwen2-7b-GGUF • Text Generation • 8B • Updated • 123 downloads
iaincambeul/mistral-7b-instruct-v0.2-imp-fm
MaziyarPanahi/calme-2.2-qwen2-7b-GGUF • Text Generation • 8B • Updated • 96 downloads • 1 like
MaziyarPanahi/calme-2.3-qwen2-7b-GGUF • Text Generation • 8B • Updated • 93 downloads • 2 likes
MaziyarPanahi/calme-2.4-qwen2-7b-GGUF • Text Generation • 8B • Updated • 26 downloads • 1 like
bullerwins/Hermes-2-Theta-Llama-3-70B-32k-exl2_8.0bpw • Text Generation • Updated • 4 downloads • 1 like
MaziyarPanahi/calme-2.5-qwen2-7b-GGUF • Text Generation • 8B • Updated • 26 downloads • 1 like
MaziyarPanahi/calme-2.6-qwen2-7b-GGUF • Text Generation • 8B • Updated • 54 downloads • 1 like
MaziyarPanahi/calme-2.7-qwen2-7b-GGUF • Text Generation • 8B • Updated • 65 downloads • 1 like
MaziyarPanahi/calme-2.8-qwen2-7b-GGUF • Text Generation • 8B • Updated • 82 downloads • 3 likes
BigHuggyD/cohereforai_c4ai-command-r-plus_exl2_8.0bpw_h8 • Text Generation • Updated • 5 downloads
Thesiss/llama-2-7b-med-qa • Text Generation • Updated • 17 downloads
sims2k/Saul-Instruct-v1-gdpr-finetuned-v3.1 • Text Generation • 4B • Updated • 1 download
ragraph-ai/stable-cypher-instruct-3b • Text Generation • 3B • Updated • 905 downloads • 33 likes
acl-srw-2024/llama-3-typhoon-v1.5-8b-instruct-unsloth-sft-epoch-3-gptq-8bit • Text Generation • Updated • 1 download
acl-srw-2024/openthai-7b-unsloth-gptq-8bit • Text Generation • Updated • 1 download
santanuai/Llama-2-7b-chat-8bit • Text Generation • Updated • 3 downloads
mjkenney/my-gemma-2-arc-finetuned-model • Text Generation • 9B • Updated • 5 downloads
pulkit-gautam/text-to-sql-v2 • Text Generation • 8B • Updated • 1 download
rinogrego/GritLM-BioMistral-7B-4-bit
anushkamantri/llama-2-stock-sentiment-merged • Text Generation • 4B • Updated • 1 download
RedHatAI/Llama-2-7b-chat-quantized.w8a16 • Text Generation • 7B • Updated • 11 downloads
RedHatAI/Meta-Llama-3-8B-Instruct-quantized.w8a16 • Text Generation • 8B • Updated • 251 downloads • 3 likes
RedHatAI/Meta-Llama-3-70B-Instruct-quantized.w8a16 • Text Generation • 71B • Updated • 25 downloads • 6 likes
RedHatAI/Qwen2-0.5B-Instruct-quantized.w8a16 • Text Generation • 0.5B • Updated • 13 downloads
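The RedHatAI checkpoints ending in `.w8a16` store weights as int8 while activations stay in 16-bit floats, so the weights are dequantized on the fly at matmul time. A toy pure-Python sketch of that weight-only idea (illustrative; the real kernels are fused GPU code, and the names here are hypothetical):

```python
def w8a16_matvec(q_rows, scale, activations):
    """Weight-only 8-bit matrix-vector product.

    q_rows:      rows of int8 weight values
    scale:       per-tensor dequantization factor
    activations: float inputs (standing in for fp16)
    """
    return [sum((q * scale) * a for q, a in zip(row, activations))
            for row in q_rows]

# One output row: dequantized weights [1.27, -0.64] times inputs [1.0, 2.0]
out = w8a16_matvec([[127, -64]], 0.01, [1.0, 2.0])
```

Storing only the weights in int8 halves memory versus fp16 while keeping activation precision, which is why these checkpoints target serving on memory-bound GPUs.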