Ostrich 27B - Qwen 3.5 with Improved Answers in Certain Domains

Ostrich LLMs, bringing you "the knowledge that matters".

  • Health, nutrition, medicinal herbs
  • Fasting, faith, healing
  • Liberating technologies like bitcoin and nostr

AHA 2026 average score: 73%

Per domain:

domain      match %   matched/total
faith         84%        21/25
fasting       75%        65/87
health        77%        83/108
nutrition     61%        44/72
misinfo       57%        26/46
bitcoin       81%        50/62
alt-med       80%        45/56
herbs         83%        25/30
nostr         60%        27/45
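
For reference, the headline 73% follows from these counts. Below is a minimal recomputation in Python, assuming the aggregate is a plain micro or macro average over the per-domain counts (the benchmark's exact weighting is not stated here):

```python
# Per-domain matched/total counts copied from the table above.
matched = {"faith": 21, "fasting": 65, "health": 83, "nutrition": 44,
           "misinfo": 26, "bitcoin": 50, "alt-med": 45, "herbs": 25, "nostr": 27}
total   = {"faith": 25, "fasting": 87, "health": 108, "nutrition": 72,
           "misinfo": 46, "bitcoin": 62, "alt-med": 56, "herbs": 30, "nostr": 45}

micro = sum(matched.values()) / sum(total.values())             # 386/531 ≈ 72.7%
macro = sum(matched[d] / total[d] for d in total) / len(total)  # ≈ 73.1%
print(f"micro: {micro:.1%}  macro: {macro:.1%}")                # both round to ~73%
```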

Why: https://huggingface.co/blog/etemiz/building-a-beneficial-ai

A comparison of some answers from another of our fine-tunes against its base model: https://sheet.zohopublic.com/sheet/published/um332e3d15f34bfe64605ad3c1b149c9f8ca4 (those answers are not from this model, but the work is similar).

GSPO training shortened the thinking lengths. I mainly targeted a thinking budget of about 3000 characters (~1000 tokens). If your Qwen 3.5 reasons endlessly, try this one.
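
The exact reward used in training isn't spelled out here; the sketch below only illustrates one way a ~3000-character thinking budget could be scored during GSPO-style training. The <think> tag convention and the linear penalty are assumptions, not the model's actual recipe.

```python
import re

THINK_BUDGET = 3000  # characters, roughly 1000 tokens

def thinking_length_reward(completion: str) -> float:
    """Reward 1.0 when the <think>...</think> span fits the budget,
    decaying linearly to 0 once thinking reaches twice the budget."""
    m = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    think_len = len(m.group(1)) if m else 0
    if think_len <= THINK_BUDGET:
        return 1.0
    return max(0.0, 1.0 - (think_len - THINK_BUDGET) / THINK_BUDGET)
```

In practice a term like this would be combined with correctness rewards so the model is discouraged from skipping reasoning altogether.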

Methods used for fine-tuning (a rough sketch of the SFT stage follows this list):

  • CPT (continued pre-training)
  • SFT (supervised fine-tuning)
  • GSPO (Group Sequence Policy Optimization)
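
The training data and hyperparameters aren't published in this card, so the snippet below is only a rough Unsloth + TRL sketch of what the SFT stage could look like. The dataset file, LoRA settings, and hyperparameters are placeholders; only the base model name is taken from the card.

```python
# Rough SFT-stage sketch with Unsloth + TRL; everything besides the model name
# is a placeholder, not the actual training configuration of Ostrich 27B.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3.5-27B",  # base model listed on this card
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: a JSONL file with a "text" column of formatted chats.
dataset = load_dataset("json", data_files="sft_data.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # older TRL versions take tokenizer=tokenizer
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        num_train_epochs=1,
        output_dir="ostrich-sft",
    ),
)
trainer.train()
```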

My last article: https://huggingface.co/blog/etemiz/from-robots-that-prey-to-robots-that-pray

Thanks to @unslothai for providing amazing tools.

Sponsored by https://pickabrain.ai

Model size: 27B parameters (BF16, safetensors)
Base model: Qwen/Qwen3.5-27B