SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition
Abstract
In the English speech-to-text (STT) machine learning task, acoustic models are conventionally trained on uncased Latin characters, and any necessary orthography (such as capitalization, punctuation, and denormalization of non-standard words) is imputed by separate post-processing models. This adds complexity and limits performance, as many formatting tasks benefit from semantic information present in the acoustic signal but absent in transcription. Here we propose a new STT task: end-to-end neural transcription with fully formatted text for target labels. We present baseline Conformer-based models trained on a corpus of 5,000 hours of professionally transcribed earnings calls, achieving a CER of 1.7. As a contribution to the STT research community, we release the corpus free for non-commercial use at https://datasets.kensho.com/datasets/scribe.
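The reported CER (character error rate) is the character-level Levenshtein edit distance between the hypothesis and the fully formatted reference, divided by the reference length. The sketch below is an illustration of that metric, not the authors' evaluation code; the example strings are invented to show how formatted targets penalize casing, punctuation, and denormalization mistakes that an uncased-character metric would ignore.

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance / len(reference)."""
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m if m else 0.0

# Hypothetical example: an uncased, unpunctuated hypothesis scored
# against a fully formatted reference incurs errors on every
# formatting decision, not just on the spoken words.
ref = "Revenue grew 5% in Q3."
hyp = "revenue grew five percent in q3"
print(f"CER: {cer(ref, hyp):.3f}")
```

Training directly on formatted targets lets the acoustic model learn these orthographic decisions from the audio itself, rather than delegating them to post-processing.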