Llama-160M-Chat – Q4_K_M GGUF
Quantized by the Temple AI Nervous System (TAINS) forge.
- Base model: Felladrin/Llama-160M-Chat-v1
- Parameters: 160M
- Quantization: Q4_K_M
- Neural tier: spinal
- File size: 98.3MB
Converted with llama.cpp's convert_hf_to_gguf.py, then quantized with llama-quantize.
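The conversion pipeline above can be sketched as follows. This is a minimal example, assuming llama.cpp is cloned and built and the base model has been downloaded locally; all paths and output filenames are illustrative, not the exact ones used by the forge.

```shell
# Convert the Hugging Face checkpoint to an F16 GGUF
# (assumes ./Llama-160M-Chat-v1 is a local copy of Felladrin/Llama-160M-Chat-v1)
python llama.cpp/convert_hf_to_gguf.py ./Llama-160M-Chat-v1 \
  --outfile llama-160m-chat-f16.gguf --outtype f16

# Quantize the F16 GGUF down to Q4_K_M
./llama.cpp/build/bin/llama-quantize \
  llama-160m-chat-f16.gguf llama-160m-chat.Q4_K_M.gguf Q4_K_M

# Smoke-test the quantized model with llama-cli
./llama.cpp/build/bin/llama-cli -m llama-160m-chat.Q4_K_M.gguf -p "Hello" -n 32
```

Q4_K_M keeps most attention and feed-forward weights at 4 bits with a K-quant block layout, which is why the 160M-parameter model fits in roughly 98 MB.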