Ostrich 32B - Qwen 3 with Improved Answers in Many Domains

Ostrich LLMs, bringing you "the knowledge that matters".

  • Health, nutrition, medicinal herbs
  • Fasting, faith, healing
  • Liberating technologies like bitcoin and nostr

Methods used for fine-tuning:

  • CPT
  • SFT
  • ORPO
  • GSPO (like GRPO, but as far as I understand it applies updates at the level of the whole sequence rather than individual tokens)
  • Supervised Intuition (something I introduced; it may become an article some day)
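To make the GSPO/GRPO distinction above concrete, here is a toy sketch (not the actual training code, and the log-probabilities are made up): GRPO-style updates compute one importance ratio per token, while GSPO computes a single length-normalized ratio for the whole sampled response, which is then clipped and applied uniformly when weighting the group-relative advantage.

```python
import math

# Hypothetical per-token log-probs for one sampled response,
# under the current policy and the old (behavior) policy.
logp_new = [-1.2, -0.8, -2.0, -0.5]
logp_old = [-1.0, -0.9, -1.8, -0.6]

# GRPO-style: one importance ratio per token.
token_ratios = [math.exp(n - o) for n, o in zip(logp_new, logp_old)]

# GSPO-style: a single sequence-level ratio, length-normalized,
# applied uniformly across the whole response.
seq_ratio = math.exp((sum(logp_new) - sum(logp_old)) / len(logp_new))

# Either ratio is clipped before weighting the advantage.
eps = 0.2
clipped_seq = max(1 - eps, min(1 + eps, seq_ratio))

print([round(r, 3) for r in token_ratios])  # [0.819, 1.105, 0.819, 1.105]
print(round(seq_ratio, 3))                  # 0.951
```

Note how individual token ratios can swing in opposite directions while the sequence-level ratio stays close to 1, which is the stabilizing effect GSPO aims for.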

Why: https://huggingface.co/blog/etemiz/building-a-beneficial-ai

This is the last 32B model; we have switched completely to fine-tuning 3.5 27B.

This model possibly has a higher AHA score than previous ones, but I didn't benchmark it completely. Internal tests showed it is one of the most evolved so far.

You can chat with other models for free on our website: https://pickabrain.ai — they all target a high AHA score.

A comparison of answers between our fine-tune and the base model: https://sheet.zohopublic.com/sheet/published/um332e3d15f34bfe64605ad3c1b149c9f8ca4 These answers are not from this model, but they show what we are doing.

Thanks to @unslothai for providing amazing tools. This work wouldn't be possible if there weren't so many great content creators aligned with humanity.

Format: Safetensors
Model size: 33B params
Tensor type: BF16

Model tree for etemiz/Ostrich-32B-Qwen3-260314

Base model: Qwen/Qwen3-32B (fine-tuned into this model)
Quantizations: 2 models