Pruned RNN-T for fast, memory-efficient ASR training
This model is fine-tuned on the 960-hour LibriSpeech corpus, starting from a pretrained HuBERT-Large checkpoint (https://arxiv.org/abs/2106.07447) released by fairseq. It is trained with the pruned RNN-T loss (https://arxiv.org/abs/2206.13236) and achieves WERs of 1.93/3.93 on test-clean/test-other.
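To illustrate the idea behind the pruned RNN-T loss, here is a minimal pure-Python sketch (not the paper's or k2's actual implementation). The standard RNN-T loss sums over all alignment paths in a T × (U+1) lattice; the pruned variant restricts each frame t to a small band of `s` consecutive label positions starting at `lo[t]`, so the expensive joiner only needs to be evaluated inside the band. The function names, the toy log-probabilities, and the hand-picked prune ranges below are all illustrative assumptions; a real system derives the ranges from the gradients of a cheap "simple" joiner, as described in the paper.

```python
import math

NEG_INF = -math.inf

def logadd(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == NEG_INF:
        return b
    if b == NEG_INF:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def rnnt_nll(log_emit, log_blank, T, U):
    """Full-lattice RNN-T forward pass (toy, batch size 1).
    log_emit[t][u]  : log-prob of emitting the next label from state (t, u)
    log_blank[t][u] : log-prob of blank (advance one frame) from state (t, u)
    Returns -log P(y | x)."""
    alpha = [[NEG_INF] * (U + 1) for _ in range(T)]
    alpha[0][0] = 0.0
    for t in range(T):
        for u in range(U + 1):
            if t > 0:  # blank transition from (t-1, u)
                alpha[t][u] = logadd(alpha[t][u], alpha[t - 1][u] + log_blank[t - 1][u])
            if u > 0:  # emit transition from (t, u-1)
                alpha[t][u] = logadd(alpha[t][u], alpha[t][u - 1] + log_emit[t][u - 1])
    return -(alpha[T - 1][U] + log_blank[T - 1][U])

def pruned_rnnt_nll(log_emit, log_blank, T, U, lo, s):
    """Pruned variant: frame t only visits states u in [lo[t], lo[t]+s).
    Assumes the band contains the start state (0, 0) and end state (T-1, U).
    Summing fewer paths can only lower the likelihood, so the returned
    NLL is an upper bound on the full-lattice NLL."""
    alpha = [[NEG_INF] * (U + 1) for _ in range(T)]
    if lo[0] <= 0 < lo[0] + s:
        alpha[0][0] = 0.0
    for t in range(T):
        for u in range(lo[t], min(lo[t] + s, U + 1)):
            if t > 0:
                alpha[t][u] = logadd(alpha[t][u], alpha[t - 1][u] + log_blank[t - 1][u])
            if u > 0:
                alpha[t][u] = logadd(alpha[t][u], alpha[t][u - 1] + log_emit[t][u - 1])
    return -(alpha[T - 1][U] + log_blank[T - 1][U])
```

With a band wide enough to cover the whole lattice (`s = U + 1`, `lo[t] = 0`), the pruned loss reproduces the full loss exactly; with a narrower band it drops paths far from the chosen alignment, trading a slightly looser bound for O(T·s) instead of O(T·U) joiner evaluations, which is where the speed and memory savings in training come from.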