# GLM-4.7-Flash-REAP-19-GGUF
This model was converted to GGUF format from Akicou/GLM-4.7-Flash-REAP-19 using GGUF Forge.
## Quants
The following quants are available: Q2_K, Q3_K_S, Q3_K_M, Q3_K_L, Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0.
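If you want to try one of these quants locally, the sketch below downloads a single GGUF file and runs it with llama-cpp-python. The repository id and the exact `.gguf` filename are assumptions based on this card; check the "Files and versions" tab for the real names and pick whichever quant fits your hardware.

```python
# Minimal sketch: fetch one quant from this repo and run it with llama-cpp-python.
# The repo id and filename below are assumptions; adjust them to the actual files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Akicou/GLM-4.7-Flash-REAP-19-GGUF",   # assumed repo id
    filename="GLM-4.7-Flash-REAP-19-Q4_K_M.gguf",  # assumed filename
)

# Load the quantized model and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Explain what a GGUF quant is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```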
## Conversion Stats
| Metric | Value |
|---|---|
| Job ID | 1c0b6379-c8ef-4e79-b139-9ccf73ddb153 |
| GGUF Forge Version | v5.5 |
| Total Time | 48.1 min |
| Avg Time per Quant | 4.5 min |
### Step Breakdown
- Download: 6.3 min
- FP16 Conversion: 4.0 min
- Quantization: 37.8 min
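These three steps correspond to the standard llama.cpp conversion workflow that GGUF Forge automates. The sketch below shows a rough manual equivalent; the paths, output filenames, and quant selection are illustrative assumptions, not the service's actual pipeline.

```python
# Rough sketch of the three steps above using stock llama.cpp tooling.
# GGUF Forge automates this; paths and filenames here are illustrative assumptions.
import subprocess
from huggingface_hub import snapshot_download

# 1. Download: fetch the original checkpoint from the Hub.
src_dir = snapshot_download("Akicou/GLM-4.7-Flash-REAP-19")

# 2. FP16 conversion: produce an unquantized GGUF with llama.cpp's converter script.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", src_dir,
     "--outtype", "f16", "--outfile", "model-f16.gguf"],
    check=True,
)

# 3. Quantization: run llama-quantize once per target quant type.
for quant in ["Q2_K", "Q4_K_M", "Q8_0"]:
    subprocess.run(
        ["./llama-quantize", "model-f16.gguf", f"model-{quant}.gguf", quant],
        check=True,
    )
```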
## Convert Your Own Models
Want to convert more models to GGUF?
gguforge.com is a free hosted GGUF conversion service. Log in with HuggingFace and request conversions instantly!
## Links
- Free Hosted Service: gguforge.com
- Self-host GGUF Forge: GitHub
- llama.cpp (quantization engine): GitHub
- Community & Support: Discord
Converted automatically by GGUF Forge v5.5