Some of my other models worth making GGUFs
The first is the most direct; the last is the least direct.
- MihaiPopa-1/SmolLM2-135M-Math
- MihaiPopa-1/M2M100-418M-Klingon
- MihaiPopa-1/granite-3.3-2b-instruct-heretic-safety-defiltered (Granite 3.3 but decensored using Heretic)
- And possibly MihaiPopa-1/ReCasePunct-1-Flash and MihaiPopa-1/ReCasePunct-1.1-Flash-Lite
For the last two, please create a config.json, otherwise I can't queue them.
The others are queued.
You can check progress at http://hf.tst.eu/status.html or regularly check the model
summary pages at https://hf.tst.eu/model#SmolLM2-135M-Math-GGUF
https://hf.tst.eu/model#M2M100-418M-Klingon-GGUF
https://hf.tst.eu/model#granite-3.3-2b-instruct-heretic-safety-defiltered-GGUF
for quants to appear.
"for last two please create config.json otherwise cant queue"
What's config.json??
Oh, you can copy config.json directly from AlBERT 2 Base and AlBERT 2 Large
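For context, config.json is the small metadata file that tells downstream tools which architecture a model uses; without it, automated conversion can't identify the model. A rough sketch of what an ALBERT-style one might look like (the field values here are illustrative placeholders, not copied from any of the repos above):

```json
{
  "architectures": ["AlbertForMaskedLM"],
  "model_type": "albert",
  "vocab_size": 30000,
  "embedding_size": 128,
  "hidden_size": 768,
  "num_hidden_layers": 12,
  "num_attention_heads": 12
}
```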
Lastly:
MihaiPopa-1/LFM2-350M-heretic-safety-defiltered and all the other variants (High Reasoning and xHigh Reasoning; these refer to KL divergence scores, i.e. how much damage was done to the original model).
And MihaiPopa-1/Stentor-30M-Instruct-heretic-safety-defiltered (this might be underkill in terms of size, but it's worth it).
"Oh, you can copy config.json directly from AlBERT 2 Base and AlBERT 2 Large"
Could you do that, please? The system is mostly automated, and it would be quite hard for me to do.
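Creating the file yourself is straightforward; a minimal sketch, assuming an ALBERT-style architecture (the field values and repo id below are placeholders, to be replaced with the real ones copied from the base architecture's repo):

```python
import json

# Hypothetical sketch: write a minimal config.json so the GGUF conversion
# queue can identify the model. All values are placeholders -- copy the real
# ones from the base architecture's config.json instead.
config = {
    "architectures": ["AlbertForMaskedLM"],  # assumed architecture
    "model_type": "albert",
    "vocab_size": 30000,
    "hidden_size": 768,
    "num_hidden_layers": 12,
    "num_attention_heads": 12,
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)

# To publish it to the model repo, one could then use huggingface_hub, e.g.:
# from huggingface_hub import upload_file
# upload_file(path_or_fileobj="config.json",
#             path_in_repo="config.json",
#             repo_id="MihaiPopa-1/ReCasePunct-1-Flash")
```

The upload step is left commented out since it needs write access to the repo; once config.json is present, the model should queue normally.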
2 new models queued:
You can check progress at http://hf.tst.eu/status.html or regularly check the model
summary pages at https://hf.tst.eu/model#LFM2-350M-heretic-safety-defiltered-GGUF
https://hf.tst.eu/model#Stentor-30M-Instruct-heretic-safety-defiltered-GGUF
for quants to appear.