# 🩺 HuatuoGPT-3-32B

## Introduction
HuatuoGPT-3 is an open-source medical LLM trained with SeedRL, an RL-only domain adaptation paradigm that transforms a base model into a medical expert in a single RL stage.
For more information, visit our GitHub repository: https://github.com/FreedomIntelligence/HuatuoGPT-3
HuatuoGPT-3-32B runs in thinking mode by default: the output contains a `<think>...</think>` reasoning block, followed by the final response after the closing `</think>` tag.
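For example, the final answer can be separated from the reasoning block by splitting the decoded output on the closing tag. A minimal sketch, using a stand-in string in place of real model output:

```python
# Minimal sketch: splitting a thinking-mode response into its reasoning
# block and final answer. `raw` is a stand-in for decoded model output.
raw = "<think>Fever, cough, and dyspnea suggest a respiratory infection.</think>Pneumonia should be considered first."

think_part, _, answer = raw.partition("</think>")
reasoning = think_part.removeprefix("<think>").strip()
final_answer = answer.strip()

print(final_answer)  # → Pneumonia should be considered first.
```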
## Model Info
| Model | Description | Backbone | Link |
|---|---|---|---|
| HuatuoGPT-3-32B | 32B medical LLM trained with SeedRL | Qwen3-32B | HF Link |
| HuatuoGPT-3-8B | 8B medical LLM trained with SeedRL | Qwen3-8B-Base | HF Link |
| HuatuoGPT-3-7B-Pangu | 7B medical LLM trained with SeedRL | openPangu-Embedded-7B | HF Link |
## Usage
You can use HuatuoGPT-3-32B in the same way as Qwen3-32B. You can deploy it with tools like vLLM or SGLang, or run inference directly with 🤗 Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "FreedomIntelligence/HuatuoGPT-3-32B"

# Load the tokenizer and model (weights are placed automatically across devices).
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "user", "content": "A patient has fever, cough, and shortness of breath. What should be considered first?"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=4096)
# Decode only the newly generated tokens, skipping the echoed prompt.
generated = outputs[0][inputs.input_ids.shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```
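For serving, a deployment sketch assuming a recent vLLM release that provides the `vllm serve` entry point (it exposes an OpenAI-compatible API); the flag values here are illustrative and should be adjusted to your GPU count and context needs:

```shell
# Hypothetical deployment sketch; flags depend on your vLLM version and hardware.
vllm serve FreedomIntelligence/HuatuoGPT-3-32B \
    --tensor-parallel-size 2 \
    --max-model-len 32768
```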
## 📖 Citation
```bibtex
@article{huatuogpt3,
  title   = {HuatuoGPT-3: RL-Only Domain Adaptation from Base Models via Off-Policy Seeding},
  author  = {Coming soon},
  journal = {arXiv preprint},
  year    = {2026}
}
```