Blood, sweat, tears, and exceptional results!
This is absolutely OUTSTANDING Benny, thank you so much for your hard work bringing this to the community. Such great results, and on the F16 vs Q8 question it's got to be the most minuscule prompt adherence issue, maybe a fingernail 0.2 degrees west. You're not missing tricks. One small Q running local: which folder in models does this need to go in when using it with the Z-Engineer node? I keep getting this question pop up. Thanks again mate!
Thank YOU so much! I'm so glad people are liking these models. Just a little passion project of mine, crazy to me that other people dig it.
To run this locally as a prompt enhancer you MUST host it yourself using LM Studio, Ollama, or another OpenAI-compatible endpoint - so the model goes in your LM Studio or Ollama models folder, not inside ComfyUI!
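For anyone wondering what "OpenAI-compatible endpoint" means in practice, here's a minimal sketch of the kind of request a prompt-enhancer node sends to a local server. The URL uses Ollama's default port (LM Studio defaults to 1234), and the model name and prompt text are placeholders, not the actual model file:

```python
import json

# Hypothetical request body for a locally hosted OpenAI-compatible server.
# Ollama exposes this API at http://localhost:11434/v1 by default;
# LM Studio uses http://localhost:1234/v1.
BASE_URL = "http://localhost:11434/v1/chat/completions"

payload = {
    # Placeholder: must match the model name your local server loaded
    "model": "your-local-model",
    "messages": [
        {"role": "system", "content": "You are a prompt enhancer."},
        {"role": "user", "content": "a cat in a garden"},
    ],
}

# The node POSTs this JSON to BASE_URL; the server replies in the
# standard chat-completions format.
body = json.dumps(payload)
```

The key point: the ComfyUI node is just an HTTP client, so the model weights live wherever your local server (LM Studio, Ollama, etc.) keeps its models, not in ComfyUI's `models` folder.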