With the release of Gemma 4, I launched a new Space called MEDPAI — a medical imaging analysis tool that combines object detection with multimodal AI. Here's how it works:
1. Upload a CT scan or X-ray
2. Computer vision models detect and annotate findings
3. Gemma 4 33B generates a report or answers your questions about the image
Currently available detectors: dental analysis and bone fracture detection. More models are in the pipeline — follow the Space to stay updated! alibidaran/MEDPAI
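To make the hand-off between steps 2 and 3 concrete, here's a minimal sketch of the glue between a detector and the report model: summarizing detections as text context for the LLM. The names (`Detection`, `build_report_prompt`) are illustrative, not the Space's actual code.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "fracture" or "caries"
    confidence: float  # detector score in [0, 1]
    box: tuple         # (x1, y1, x2, y2) pixel coordinates

def build_report_prompt(detections, question=None):
    """Turn detector output into text context for the report model."""
    lines = ["Findings detected on the image:"]
    for i, d in enumerate(detections, 1):
        lines.append(f"{i}. {d.label} (confidence {d.confidence:.0%}) at {d.box}")
    # Default task is a report; a user question overrides it (step 3 above).
    task = question or "Write a concise radiology-style report of these findings."
    return "\n".join(lines) + "\n\n" + task
```

In the actual Space the annotated image is also passed to the model; this sketch only shows the textual side of the prompt.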
I fine-tuned Qwen2.5 with GRPO to actually think before it answers — not just pattern-match.
Most LLMs mimic reasoning. This one builds a real cognitive path:
📌 Plan → understand the task
🔍 Monitor → reason step by step
✅ Evaluate → verify before answering
Every response follows a strict structured protocol:

<think>
  <planning> ...
  <monitoring> ...
  <evaluation> ...
</think>

Then a clean, reasoning-free <output>.
The model self-checks its own structure. If a section is missing or malformed → the response is invalid.
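A structural check like this is what a format reward for GRPO typically looks like: all three sections present, in order, inside <think>, with <output> after it. The tag names come from the post; the function itself is a sketch, not the training code.

```python
import re

REQUIRED_SECTIONS = ["<planning>", "<monitoring>", "<evaluation>"]

def is_valid_response(text: str) -> bool:
    """Return True only if the response follows the strict protocol."""
    m = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if m is None:
        return False  # no think block at all
    inner = m.group(1)
    positions = [inner.find(tag) for tag in REQUIRED_SECTIONS]
    # Every section must be present, and in the prescribed order.
    if any(p == -1 for p in positions) or positions != sorted(positions):
        return False
    # The reasoning-free answer must come after the think block.
    return "<output>" in text[m.end():]
```

During RL training, a check like this would be scored (e.g. 1.0 for valid, 0.0 for malformed) so the policy learns the structure rather than having it prompted in.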
This isn't chain-of-thought slapped on top. The reasoning protocol is baked in via RL.