Oh hey, that's me!

#1
by Olafangensan - opened

This was my first time playing with an LLM like this, complete with many, MANY dumb errors. It turns out it was more complex than just getting the right instance on vast.ai and pressing 'run.' It's been a while since I spent an entire night on a project, going to sleep at 8am.

The shorter thinking traces in my version are actually just a byproduct of why I tried this process in the first place. I wanted to stop the model from doing that god-awful 'policy checking' in its traces (GPT OSS flashbacks). It seems the 'heretification' worked to reduce that behavior, which naturally shortened the traces in the process.

Anyway, thanks for using my work!

Owner

Excellent work.

Trial and error, error and error, and more trials... yep...
That's the life.
