arXiv:2210.00705

SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model

Published on Oct 3, 2022
Authors: Yi-Jen Shih, Hsuan-Fu Wang, Heng-Jui Chang, Layne Berry, Hung-yi Lee, David Harwath

Abstract

Data-driven speech processing models usually perform well with a large amount of text supervision, but collecting transcribed speech data is costly. Therefore, we propose SpeechCLIP, a novel framework bridging speech and text through images to enhance speech models without transcriptions. We leverage state-of-the-art pre-trained HuBERT and CLIP, aligning them via paired images and spoken captions with minimal fine-tuning. SpeechCLIP outperforms prior state-of-the-art on image-speech retrieval and performs zero-shot speech-text retrieval without direct supervision from transcriptions. Moreover, SpeechCLIP can directly retrieve semantically related keywords from speech.

AI-generated summary

SpeechCLIP enhances speech processing models by aligning HuBERT and CLIP through images and spoken captions, achieving state-of-the-art performance in image-speech retrieval and zero-shot speech-text retrieval.
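
To make the alignment step concrete, here is a minimal PyTorch sketch of the idea, not the authors' implementation. The overall recipe follows the abstract: a frozen HuBERT provides frame features, a small trainable head pools and projects them into the embedding space of a frozen CLIP image encoder, and training uses a CLIP-style symmetric contrastive loss over paired images and spoken captions. The pooling scheme (a learnable CLS-style query), all dimensions and hyperparameters, and every function name below are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeechEncoder(nn.Module):
    # Pools frozen HuBERT frame features into one CLIP-sized embedding.
    def __init__(self, hubert_dim=768, clip_dim=512):
        super().__init__()
        # Learnable CLS-style query prepended to the frame sequence
        # (illustrative choice; exact architecture is an assumption).
        self.cls = nn.Parameter(torch.randn(1, 1, hubert_dim))
        layer = nn.TransformerEncoderLayer(
            d_model=hubert_dim, nhead=8, batch_first=True)
        self.pool = nn.TransformerEncoder(layer, num_layers=1)
        self.proj = nn.Linear(hubert_dim, clip_dim)

    def forward(self, hubert_feats):
        # hubert_feats: (batch, frames, hubert_dim), precomputed with a
        # frozen HuBERT; only this small pooling head is trained.
        cls = self.cls.expand(hubert_feats.size(0), -1, -1)
        x = torch.cat([cls, hubert_feats], dim=1)
        x = self.pool(x)[:, 0]                      # keep the CLS position
        return F.normalize(self.proj(x), dim=-1)    # unit-norm, CLIP-style

def clip_style_loss(speech_emb, image_emb, temperature=0.07):
    # Symmetric InfoNCE over a batch: the i-th speech clip should match
    # the i-th image (the one its spoken caption describes) and repel the
    # rest. image_emb comes from the frozen CLIP image encoder.
    logits = speech_emb @ image_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

@torch.no_grad()
def zero_shot_text_retrieval(speech_emb, clip_text_emb):
    # Speech was aligned to CLIP's image space, and CLIP already aligns
    # images with text, so ranking candidate sentences by similarity
    # needs no transcribed speech at training time.
    return (speech_emb @ clip_text_emb.t()).argmax(dim=-1)

Because the speech embeddings land in CLIP's joint space, text retrieval comes for free through CLIP's text encoder, which is what makes zero-shot speech-text retrieval possible without any transcription supervision.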

Get this paper in your agent:

hf papers read 2210.00705

Don't have the latest CLI? Install it with:

curl -LsSf https://hf.co/cli/install.sh | bash
