V2 suggestions

deleted

Hey @Flexan

First of all, thank you for the additional GGUF quants for my model. Seeing that you thought my model was worth the effort of quantizing by hand means a lot. Thanks!

Since you've given TinyRP-0.8B a chance, which was more of a proof-of-concept model trained with a large LoRA (hence 0.6B -> 0.8B), I'd love for you to try out the newer version, TinyRP2, which is a full-parameter finetune trained on more, higher-quality data (~0.15B tokens). Training and iterating on TinyRP2 took slightly less than a week (compared to two days for the entire TinyRP1 series), and having tried it out myself, I can recommend it.

Of course, that's all just a suggestion, and it's entirely up to you whether to try it. No pressure.

Have a great day =)

Owner

Sure thing! Thank you for the suggestion. I'll try it out ^-^

deleted changed discussion status to closed
