Recent Activity
kaitchup/Qwen2.5-7B-Instruct-AutoRoundGPTQ-4bit • Text Generation • 8B • Updated
kaitchup/Qwen2.5-7B-Instruct-AutoRoundGPTQ-8bit • Text Generation • 8B • Updated • 1
kaitchup/Llama-3.1-8B-Instruct-AutoRoundGPTQ-2bit • Text Generation • 8B • Updated • 1
kaitchup/Llama-3.1-8B-Instruct-AutoRoundGPTQ-3bit • Text Generation • 8B • Updated
kaitchup/Llama-3.1-8B-Instruct-AutoRoundGPTQ-4bit • Text Generation • 8B • Updated • 1
kaitchup/Llama-3.1-8B-Instruct-AutoRoundGPTQ-8bit • Text Generation • 8B • Updated • 1
kaitchup/Llama-3.2-3B-Instruct-AutoRoundGPTQ-2bit • Text Generation • 3B • Updated • 1
kaitchup/Llama-3.2-3B-Instruct-AutoRoundGPTQ-3bit • Text Generation • 3B • Updated • 1
kaitchup/Llama-3.2-3B-Instruct-AutoRoundGPTQ-4bit • Text Generation • 3B • Updated • 1
kaitchup/Llama-3.2-3B-Instruct-AutoRoundGPTQ-8bit • Text Generation • 3B • Updated • 1
kaitchup/Llama-3.1-8B-Instruct-gptqmodel-4bit • Text Generation • 8B • Updated • 3
kaitchup/Llama-3.1-8B-Instruct-gptqmodel-8bit • Text Generation • 8B • Updated
kaitchup/Llama-3.2-3B-Instruct-gptqmodel-2bit • Text Generation • 3B • Updated • 1
kaitchup/Llama-3.2-3B-Instruct-gptqmodel-4bit • Text Generation • 3B • Updated • 3
kaitchup/Llama-3.2-3B-Instruct-gptqmodel-8bit • Text Generation • 3B • Updated • 1
kaitchup/Llama-3.1-8B-Instruct-gptqmodel-2bit • Text Generation • 8B • Updated • 6
kaitchup/Mistral-Small-24B-Base-2501-AutoRound-GPTQ-4bit • 24B • Updated • 1
kaitchup/Qwen2.5-72B-Instruct-AWQ-4bit • Text Generation • 73B • Updated • 3
kaitchup/Mistral-Small-24B-Instruct-2501-AutoRound-GPTQ-4bit • 24B • Updated • 3 • 1
kaitchup/Qwen2.5-32B-Instruct-AWQ-4bit • Text Generation • 33B • Updated • 18
kaitchup/Qwen2.5-32B-Instruct-gptqmodel-2bit • Text Generation • 33B • Updated
kaitchup/Qwen2.5-32B-Instruct-gptqmodel-4bit • Text Generation • 33B • Updated
kaitchup/Qwen2.5-32B-Instruct-gptqmodel-8bit • Text Generation • 33B • Updated
kaitchup/Qwen2.5-14B-Instruct-gptqmodel-8bit • Text Generation • 15B • Updated
kaitchup/Qwen2.5-14B-Instruct-gptqmodel-4bit • Text Generation • 15B • Updated
kaitchup/Qwen2.5-14B-Instruct-gptqmodel-2bit • Text Generation • 15B • Updated • 1
kaitchup/Qwen2.5-7B-Instruct-gptqmodel-2bit • Text Generation • 8B • Updated • 2
kaitchup/Qwen2.5-7B-Instruct-gptqmodel-4bit • Text Generation • 8B • Updated • 1.66k
kaitchup/Qwen2.5-1.5B-Instruct-gptqmodel-2bit • Text Generation • 2B • Updated • 4
kaitchup/Qwen2.5-1.5B-Instruct-gptqmodel-4bit • Text Generation • 2B • Updated • 1
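The repo ids above follow one consistent naming scheme: `kaitchup/<base model>-<quantization method>-<bits>bit`. A minimal sketch of that scheme, with a helper function that is hypothetical (not part of any library) and a hedged note on how such GPTQ checkpoints are typically loaded:

```python
def quantized_repo_id(base: str, method: str, bits: int) -> str:
    """Build a kaitchup/ repo id following the naming scheme used in the list above."""
    return f"kaitchup/{base}-{method}-{bits}bit"

repo = quantized_repo_id("Llama-3.1-8B-Instruct", "AutoRoundGPTQ", 4)
print(repo)  # kaitchup/Llama-3.1-8B-Instruct-AutoRoundGPTQ-4bit

# Loading such a quantized checkpoint typically looks like this with
# transformers (assumes a GPU plus a GPTQ backend installed, so it is
# left commented out rather than executed here):
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
#   tokenizer = AutoTokenizer.from_pretrained(repo)
```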