vision_models: all the image-related models
- microsoft/Phi-3-vision-128k-instruct (Text Generation, updated Dec 10, 2025, 94.8k, 970)
Quantize-on-huggingface
- GGUF My Repo 🦙 (Space, Running on A10G, 1.92k): Quantize Hugging Face models to GGUF and publish the repo
mistral: all the Mistral models go here
- TheBloke/OpenHermes-2.5-Mistral-7B-GGUF (7B, updated Nov 2, 2023, 11.1k, 275)
- TheBloke/CapybaraHermes-2.5-Mistral-7B-GGUF (7B, updated Jan 31, 2024, 2.19k, 125)
- TheBloke/Mistral-7B-Instruct-v0.2-AWQ (Text Generation, 7B, updated Dec 11, 2023, 420k, 52)
- mistralai/Mistral-7B-Instruct-v0.3 (7B, updated Dec 3, 2025, 2.23M, 2.51k)
Find n GPUs to run an LLM
- Can You Run It? LLM version 🚀 (Space, Running, Featured, 1.04k): Calculate GPU needs for running LLMs on your hardware
- GGUF VRAM Calculator 📉 (Space, Runtime error, 25)
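The core arithmetic behind calculators like these can be sketched as follows. This is a rough back-of-the-envelope estimate (weights only, with a guessed overhead factor for KV cache and activations), not the actual formula either Space uses:

```python
def estimate_vram_gb(n_params_billions: float,
                     bits_per_param: float,
                     overhead: float = 1.2) -> float:
    """Rough inference VRAM estimate in GB.

    Weight memory = parameter count * bytes per parameter;
    `overhead` is an assumed fudge factor for KV cache and
    activations, not a measured value.
    """
    weight_gb = n_params_billions * bits_per_param / 8
    return weight_gb * overhead

# A 7B model in fp16 (16 bits/param): 7 * 2 GB * 1.2 ≈ 16.8 GB
print(round(estimate_vram_gb(7, 16), 1))  # → 16.8
# The same model quantized to ~4-bit GGUF: 7 * 0.5 GB * 1.2 ≈ 4.2 GB
print(round(estimate_vram_gb(7, 4), 1))   # → 4.2
```

This is why a 7B model that overflows a 16 GB card in fp16 fits comfortably after 4-bit quantization, e.g. via the GGUF My Repo Space above.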