QuantFactory/L3.1-8B-Niitama-v1.1-GGUF
This is a quantized version of Sao10K/L3.1-8B-Niitama-v1.1, created using llama.cpp.
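For reference, GGUF quantizations like these are typically produced with llama.cpp's conversion and quantization tools. A minimal sketch, assuming a locally downloaded copy of the original checkpoint (all paths and filenames below are illustrative, not the actual artifacts in this repo):

```shell
# Convert the original Hugging Face checkpoint to a full-precision GGUF file.
# convert_hf_to_gguf.py ships with the llama.cpp repository.
python convert_hf_to_gguf.py ./L3.1-8B-Niitama-v1.1 --outfile niitama-f16.gguf

# Quantize the f16 file down to a smaller format, e.g. 4-bit Q4_K_M.
./llama-quantize niitama-f16.gguf niitama-Q4_K_M.gguf Q4_K_M
```

The same f16 source file can be re-quantized to each of the bit widths listed further down this card.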
Original Model Card
Here's the subjectively superior L3 version: L3-8B-Niitama-v1
An experimental model using experimental methods.
More detail on it:
Tamamo and Niitama are made from the same data. Literally. The only thing that's changed is how they're shuffled and formatted. Yet, I get wildly different results.
Interesting, eh?
Feels kinda not as good as the L3 version, but it's aight.
Have a good day.
Hardware compatibility
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
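Lower-bit files are smaller and faster to run but lose more quality; 4-bit and 5-bit K-quants are a common middle ground for 8B models. A hedged example of running one of these files locally with llama.cpp's `llama-cli` (the exact GGUF filename here is an assumption, not a file listed in this card):

```shell
# Run a 4-bit quant interactively; adjust -m to the file you downloaded.
./llama-cli -m L3.1-8B-Niitama-v1.1.Q4_K_M.gguf \
  -p "Hello" -n 128 --temp 0.8
```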
