# qwen3-tts-1.7b-base-fp16
FP16 conversion of Qwen/Qwen3-TTS-12Hz-1.7B-Base.
## Changes from original
- All bfloat16/float32 weights cast to float16
- All `config.json` files updated: `torch_dtype: "float16"`
- Everything else (tokenizer, architecture) unchanged
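The cast described above can be sketched as follows. This is a toy in-memory example, not the exact conversion script: the real conversion iterates over the checkpoint's weight shards, but the per-tensor logic is the same.

```python
import torch

def cast_to_fp16(state_dict):
    """Cast floating-point tensors to float16; leave non-float tensors alone."""
    return {
        name: t.to(torch.float16) if t.is_floating_point() else t
        for name, t in state_dict.items()
    }

# Example: a mixed bfloat16/float32 state dict, as in the original checkpoint
weights = {
    "layer.weight": torch.randn(4, 4, dtype=torch.bfloat16),
    "norm.weight": torch.randn(4, dtype=torch.float32),
}
fp16 = cast_to_fp16(weights)
assert all(t.dtype == torch.float16 for t in fp16.values())
```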
## Why?
GTX 1650 and other Turing GPUs don't support bfloat16 natively. Pre-converted fp16 weights load directly without runtime casting.
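If you want to pick a dtype at load time rather than hard-coding it, PyTorch can report whether the current GPU supports bfloat16 natively (it returns `False` on Turing cards like the GTX 1650):

```python
import torch

# Fall back to float16 on GPUs without native bfloat16 support;
# this repo's pre-converted fp16 weights then load without casting.
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    dtype = torch.bfloat16
else:
    dtype = torch.float16
```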
## Usage
```python
import torch
from qwen_tts import Qwen3TTSModel

model = Qwen3TTSModel.from_pretrained(
    "owlninjam/qwen3-tts-1.7b-base-fp16", dtype=torch.float16
)
```