Audio-Visual Efficient Conformer for Robust Speech Recognition

Maxime Burchi, Radu Timofte
Computer Vision Lab, CAIDAS, IFI, University of Würzburg, Germany
{maxime.burchi,radu.timofte}@uni-wuerzburg.de

Abstract

End-to-end Automatic Speech Recognition (ASR) systems based on neural networks have seen large improvements in recent years. The availability of large-scale hand-labeled datasets and sufficient computing resources made it possible to train powerful deep neural networks, reaching very low Word Error Rate (WER) on academic benchmarks. However, despite impressive performance on clean audio samples, a drop of performance is often observed on noisy speech.
In this work, we propose to improve the noise robustness of the recently proposed Efficient Conformer Connectionist Temporal Classification (CTC)-based architecture by processing both audio and visual modalities. We improve previous lip reading methods using an Efficient Conformer back-end on top of a ResNet-18 visual front-end and by adding intermediate CTC losses between blocks. We condition intermediate block features on early predictions using Inter CTC residual modules to relax the conditional independence assumption of CTC-based models. We also replace the Efficient Conformer grouped attention by a more efficient and simpler attention mechanism that we call patch attention. We experiment with the publicly available Lip Reading Sentences 2 (LRS2) and Lip Reading Sentences 3 (LRS3) datasets. Our experiments show that using audio and visual modalities allows speech to be better recognized in the presence of environmental noise and significantly accelerates training, reaching lower WER with 4 times fewer training steps.
Our Audio-Visual Efficient Conformer (AVEC) model achieves state-of-the-art performance, reaching WER of 2.3% and 1.8% on the LRS2 and LRS3 test sets. Code and pretrained models are available at https://github.com/burchim/AVEC.

1. Introduction

End-to-end Automatic Speech Recognition based on deep neural networks has become the standard of state-of-the-art approaches in recent years [25, 47, 18, 16, 17, 31, 7].
Figure 1: Audio-Visual Efficient Conformer architecture. The model is trained end-to-end using CTC loss and takes raw audio waveforms and lip movements from the speaker as inputs. (Diagram: the audio branch (STFT + Conv2d front-end, then Audio Conformer Stages 1-3) and the visual branch (Conv3d + ResNet-18 front-end, then Visual Conformer Stages 1-2) are merged by an Audio-Visual Fusion Module followed by an Audio-Visual Conformer stage; frame rates progress from 20 ms to 40 ms to 80 ms, with the CTC loss applied at the 80 ms rate.)

The availability of large-scale hand-labeled datasets and sufficient computing resources made it possible to train powerful deep neural networks for ASR, reaching very low WER on academic benchmarks like LibriSpeech [34].
arXiv:2301.01456v1 [cs.CV] 4 Jan 2023

Neural architectures like Recurrent Neural Networks (RNN) [15, 19], Convolutional Neural Networks (CNN) [10, 28] and Transformers [12, 23] have successfully been trained from raw audio waveforms and mel-spectrogram audio features to transcribe speech to text. Recently, Gulati et al. [16] proposed a convolution-augmented transformer architecture (Conformer) to model both local and global dependencies using convolution and attention to reach better speech recognition performance. Concurrently, Nozaki et al. [33] improved CTC-based speech recognition by conditioning intermediate encoder block features on early predictions using intermediate CTC losses [14]. Burchi et al.
[7] also proposed an Efficient Conformer architecture using grouped attention for speech recognition, lowering the amount of computation while achieving better performance. Inspired by computer vision backbones, the Efficient Conformer encoder is composed of multiple stages where each stage comprises a number of Conformer blocks to progressively downsample and project the audio sequence to wider feature dimensions. Yet, even if these audio-only approaches are breaking the state of the art, one major pitfall for using them in the real world is the rapid deterioration of performance in the presence of ambient noise. In parallel to that, Audio-Visual Speech Recognition (AVSR) has recently attracted a lot of research attention due to its ability to use image processing techniques to aid speech recognition systems. Preceding works have shown that including the visual modality of lip movements could improve the robustness of ASR systems with respect to noise while reaching better recognition performance [41, 42, 36, 1, 45, 29]. Xu et al.
[45] proposed a two-stage approach to first separate the target voice from background noise using the speaker's lip movements and then transcribe the filtered audio signal with the help of lip movements. Petridis et al. [36] use a hybrid architecture, training an LSTM-based sequence-to-sequence (S2S) model with an auxiliary CTC loss using an early fusion strategy to reach better performance. Ma et al. [29] use Conformer back-end networks with ResNet-18 [20] front-end networks to improve recognition performance. Other works focus on Visual Speech Recognition (VSR), only using lip movements to transcribe spoken language into text [4, 9, 48, 3, 49, 37, 30]. An important line of research is the use of cross-modal distillation. Afouras et al. [3] and Zhao et al.
[49] proposed to improve lip reading performance by distilling from an ASR model trained on a large-scale audio-only corpus, while Ma et al. [30] use prediction-based auxiliary tasks. Prajwal et al. [37] also proposed to use sub-word units instead of characters to transcribe sequences, greatly reducing running time and memory requirements while also providing a language prior, reducing the language modelling burden of the model. In this work, we focus on the design of a noise-robust speech recognition architecture processing both audio and visual modalities. We use the recently proposed CTC-based Efficient Conformer architecture [7] and show that including the visual modality of lip movements can successfully improve noise robustness while significantly accelerating training. Our Audio-Visual Efficient Conformer (AVEC) reaches lower WER using 4 times fewer training steps than its audio-only counterpart.
Moreover, we are the first work to apply intermediate CTC losses between blocks [27, 33] to improve visual speech recognition performance. We show that conditioning intermediate features on early predictions using Inter CTC residual modules allows us to close the gap in WER between autoregressive and non-autoregressive AVSR systems based on S2S. This also helps to counter a common failure case, which is that audio-visual models tend to ignore the visual modality. In this way, we force pre-fusion layers to learn spatiotemporal features. Finally, we replace the Efficient Conformer grouped attention by a more efficient and simpler attention mechanism that we call patch attention. Patch attention reaches similar performance to grouped attention while having a lower complexity.

The contributions of this work are as follows:

- We improve the noise robustness of the recently proposed Efficient Conformer architecture by processing both audio and visual modalities.
- We condition intermediate Conformer block features on early predictions using Inter CTC residual modules to relax the conditional independence assumption of CTC models. This allows us to close the gap in WER between autoregressive and non-autoregressive methods based on S2S.

- We propose to replace the Efficient Conformer grouped attention by a more efficient and simpler attention mechanism that we call patch attention. Patch attention reaches similar performance to grouped attention with a lower complexity.

- We experiment on the publicly available LRS2 and LRS3 datasets and reach state-of-the-art results using audio and visual modalities.

2. Method

In this section, we describe our proposed Audio-Visual Efficient Conformer network. The model is composed of 4 main components: an audio encoder, a visual encoder, an audio-visual fusion module and an audio-visual encoder.
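As a concrete illustration of the Inter CTC residual modules mentioned in the contributions, the self-conditioning idea of [27, 33] can be sketched in a few lines of PyTorch: an intermediate CTC head predicts from the block features, and the softmaxed prediction is projected back and added to the features. This is a minimal sketch under our own naming, not the released AVEC code; the exact normalization and projection layout of the authors' module may differ.

```python
import torch
import torch.nn as nn

class InterCTCResidual(nn.Module):
    """Sketch of an Inter CTC residual module: an intermediate CTC head
    predicts from the block features, and the softmaxed prediction is
    mapped back and added to the features, so later blocks are
    conditioned on this early prediction."""

    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.to_vocab = nn.Linear(d_model, vocab_size)  # intermediate CTC head
        self.to_model = nn.Linear(vocab_size, d_model)  # map prediction back

    def forward(self, x: torch.Tensor):
        # x: (batch, time, d_model) features from a Conformer block
        inter_logits = self.to_vocab(self.norm(x))      # fed to an auxiliary CTC loss
        x = x + self.to_model(inter_logits.softmax(dim=-1))
        return x, inter_logits

module = InterCTCResidual(d_model=360, vocab_size=256)
feats = torch.randn(2, 50, 360)            # dummy block features
out, inter_logits = module(feats)          # out keeps the input feature shape
```

During training, `inter_logits` would receive its own CTC loss in addition to the output CTC layer, while `out` is passed on to the remaining blocks.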
The audio and visual encoders are separated into modality-specific front-end networks to transform each input modality into temporal sequences and Efficient Conformer back-end networks to model local and global temporal relationships. The full model is trained end-to-end using intermediate CTC losses between Conformer blocks in addition to the output CTC layer. The complete architecture of the model is shown in Figure 1.

2.1. Model Architecture

Audio front-end. The audio front-end network first transforms raw audio waveforms into mel-spectrograms using a short-time Fourier transform computed over windows of 20 ms with a step size of 10 ms. 80-dimensional mel-scale log filter banks are applied to the resulting frequency features.
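As an illustration of this step, the following PyTorch sketch computes a log mel-spectrogram with the STFT hyper-parameters listed in Table 1 (400-sample window, 160-sample hop, 512 FFTs, 80 mels), assuming 16 kHz input audio. This is a simplified stand-in, not the authors' implementation; the triangular mel filterbank construction is the standard textbook one.

```python
import math
import torch

def mel_filterbank(n_mels=80, n_fft=512, sr=16000):
    # Standard triangular mel filterbank on linearly spaced mel points.
    hz2mel = lambda f: 2595.0 * math.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = [mel2hz(i * hz2mel(sr / 2) / (n_mels + 1)) for i in range(n_mels + 2)]
    bins = [int((n_fft + 1) * f / sr) for f in mel_pts]
    fb = torch.zeros(n_mels, n_fft // 2 + 1)
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for b in range(left, center):
            fb[i, b] = (b - left) / max(center - left, 1)
        for b in range(center, right):
            fb[i, b] = (right - b) / max(right - center, 1)
    return fb

# STFT with the Table 1 hyper-parameters: 400-sample window,
# 160-sample hop, 512 FFTs -> (257, Ta // 160 + 1) magnitude frames.
wave = torch.randn(16000)  # 1 s of dummy audio at an assumed 16 kHz
spec = torch.stft(wave, n_fft=512, win_length=400, hop_length=160,
                  window=torch.hann_window(400), center=True,
                  return_complex=True).abs()
log_mel = torch.log(mel_filterbank() @ spec + 1e-6)  # (80, Ta // 160 + 1)
```

For 1 s of audio this yields 257 frequency bins over 16000 // 160 + 1 = 101 frames before the mel projection, matching the shapes in Table 1.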
The mel-spectrograms are processed by a 2D convolution stem to extract local temporal-frequency features, resulting in a 20 ms frame rate signal. The audio front-end architecture is shown in Table 1.

Table 1: Audio Front-end architecture, 1.2 million parameters. Ta denotes the input audio sample length.

Stage             | Layers                                             | Output Shape
Fourier Transform | STFT: 400 window length, 160 hop length, 512 FFTs  | (257, Ta//160 + 1)
Mel Scale         | Mel Scale: 80 mels                                 | (80, Ta//160 + 1)
Stem              | Conv2d: 3 x 3, 180 filters, 2 x 2 stride           | (180, 40, Ta//320 + 1)
Proj              | Linear: 180 units                                  | (Ta//320 + 1, 180)

Visual front-end. The visual front-end network [29] transforms input video frames into temporal sequences. A 3D convolution stem with kernel size 5 x 7 x 7 is first applied to the video.
Each video frame is then processed independently using a 2D ResNet-18 [20] with an output spatial average pooling. Temporal features are then projected to the back-end network input dimension using a linear layer. The visual front-end architecture is shown in Table 2.

Table 2: Visual Front-end architecture, 11.3 million parameters. Tv denotes the number of input video frames.
Stage | Layers                                                        | Output Shape
Stem  | Conv3d: 5 x 7 x 7, 64 filters, 1 x 2 x 2 stride;              |
      | MaxPool3d: 1 x 3 x 3, 1 x 2 x 2 stride                        | (64, Tv, 22, 22)
Res 1 | 2 x [Conv2d: 3 x 3, 64 filters; Conv2d: 3 x 3, 64 filters]    | (Tv, 64, 22, 22)
Res 2 | 2 x [Conv2d: 3 x 3, 128 filters; Conv2d: 3 x 3, 128 filters]  | (Tv, 128, 11, 11)
Res 3 | 2 x [Conv2d: 3 x 3, 256 filters; Conv2d: 3 x 3, 256 filters]  | (Tv, 256, 6, 6)
Res 4 | 2 x [Conv2d: 3 x 3, 512 filters; Conv2d: 3 x 3, 512 filters]  | (Tv, 512, 3, 3)
Pool  | Global Average Pooling                                        | (Tv, 512)
Proj  | Linear: 256 units                                             | (Tv, 256)

Back-end networks. The back-end networks use an Efficient Conformer architecture. The Efficient Conformer encoder was proposed in [7]; it is composed of several stages where each stage comprises a number of Conformer blocks [16] using grouped attention with relative positional encodings. The temporal sequence is progressively downsampled using strided convolutions and projected to wider feature dimensions, lowering the amount of computation while achieving better performance. We use 3 stages in the audio back-end network to downsample the audio signal to an 80-millisecond frame rate. Only 2 stages are necessary to downsample the visual signal to the same frame rate.
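The progressive downsampling idea can be illustrated with plain strided convolutions standing in for the Conformer stage transitions (a simplification for brevity; the actual model downsamples inside Conformer convolution modules). Two stride-2 transitions take the audio branch from a 20 ms to an 80 ms frame rate while widening the feature dimension from 180 to 360, as in Table 3.

```python
import torch
import torch.nn as nn

class DownsamplingBackend(nn.Module):
    """Sketch of progressive downsampling between back-end stages.
    Feature widths follow the audio back-end of Table 3 (180 -> 256 -> 360);
    each stride-2 transition halves the frame rate."""

    def __init__(self, dims=(180, 256, 360)):
        super().__init__()
        self.transitions = nn.ModuleList(
            nn.Conv1d(d_in, d_out, kernel_size=3, stride=2, padding=1)
            for d_in, d_out in zip(dims[:-1], dims[1:])
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, features, time) at a 20 ms frame rate
        for transition in self.transitions:
            x = transition(x)
        return x  # two stride-2 transitions: 20 ms -> 40 ms -> 80 ms

backend = DownsamplingBackend()
audio = torch.randn(1, 180, 100)  # 100 frames of 20 ms = 2 s of speech
out = backend(audio)              # 25 frames of 80 ms, 360 features
```

The visual branch, starting at the 40 ms frame rate of 25 fps video, needs only a single such halving to reach the shared 80 ms rate, which is why it uses one fewer stage.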
Table 3 shows the hyper-parameters of each back-end network.

Table 3: Back-end networks hyper-parameters. InterCTC Blocks indicates the Conformer blocks having a post Inter CTC residual module.

Network              Visual Back-end   Audio Back-end   Audio-Visual Encoder
Num Params (M)       13.6              17.9             15.9
Num Stages           2                 3                1
Blocks per Stage     6, 1              5, 6, 1          5
Total Num Blocks     7                 12               5
Stage Feature Dim    256, 360          180, 256, 360    360
Conv Kernel Size     15                15               15
Stage Patch Size     1, 1              3, 1, 1          1
InterCTC Blocks      3, 6              8, 11            2

Audio-visual fusion module. Similar to [36, 29], we use an early fusion strategy to learn audio-visual features and reduce model complexity. The acoustic and visual features from the back-end networks are concatenated and fed into a joint feed-forward network.
The concatenated features of size 2 × dmodel are first expanded using a linear layer with output size dff = 4 × dmodel, passed through a Swish activation function [38] and projected back to the original feature dimension dmodel.

Audio-visual encoder. The audio-visual encoder is a single-stage back-end network composed of 5 Conformer blocks without downsampling. The encoder outputs are then projected to a CTC layer to maximize the sum of probabilities of correct target alignments.

2.2. Patch Attention

The Efficient Conformer [7] proposed to replace Multi-Head Self-Attention (MHSA) [44] in earlier encoder layers with grouped attention.
Grouped MHSA reduces attention complexity by grouping neighbouring temporal elements along the feature dimension before applying scaled dot-product attention. Since attention has a quadratic computational complexity with respect to the sequence length, this caused the network to have an asymmetric complexity, with earlier attention layers requiring more FLOPs than later layers operating on shorter sequences. In this work, we propose to replace grouped attention with a simpler and more efficient attention mechanism that we call patch attention (Figure 2). Similar to the pooling attention proposed by the Multiscale Vision Transformer (MViT) [13] for video and image recognition, patch attention performs an average pooling on the input sequence before projecting the queries, keys and values.

Table 4: Attention variants complexities including query, key, value and output linear projections. n and d are the sequence length and feature dimension respectively.

Attention Variant   Hyper Parameter   Full Attention Complexity
Regular             -                 O(n · d² + n² · d)
Grouped             Group Size (g)    O(n · d² + (n/g)² · d · g)
Patch               Patch Size (k)    O(n/k · d² + (n/k)² · d)

Figure 2: Patch Multi-Head Self-Attention. The input sequence is downsampled using an average pooling before applying multi-head self-attention. The output sequence is then upsampled via nearest neighbor upsampling, reducing attention complexity from O(n² · d) to O((n/k)² · d), where k defines the pooling / upsampling kernel size. Patch attention is equivalent to regular attention when k = 1.

X = AvgPooling1d(Xin)    (1)
with Q, K, V = XW^Q, XW^K, XW^V    (2)

where W^Q, W^K, W^V ∈ R^(d×d) are the query, key and value linear projection parameter matrices. MHSA with relative sinusoidal positional encoding is then performed at lower resolution as:

MHSA(X) = Concat(O_1, ..., O_H) W^O    (3)
with O_h = softmax((Q_h K_h^T + S_h^rel) / √d_h) V_h    (4)
where S^rel ∈ R^(n×n) is a relative position score matrix satisfying S^rel[i, j] = Q_i E_(j−i)^T. E is the linear projection of a standard sinusoidal positional encoding matrix with positions ranging from −(nmax − 1) to (nmax − 1). The attention output sequence is then projected and up-sampled back to the initial resolution using nearest neighbor up-sampling.

Xout = UpsampleNearest1d(MHSA(X))    (5)

Consequently, each temporal element of the same patch produces the same attention output. Local temporal relationships are only modeled in the convolution modules, while global relationships are modeled by patch attention. We use 1-dimensional patches in this work, but patch attention could also be generalized to image and video data using 2D and 3D patches. We leave this to future work. The computational complexity of each attention variant is shown in Table 4. Patch attention further reduces complexity compared to grouped attention by decreasing the amount of computation needed by the query, key, value and output fully connected layers while keeping the feature dimension unchanged. Similar to previous work [7], we only use patch attention in the first audio back-end stage to reduce complexity while maintaining model recognition performance.

Figure 3: Audio-only back-end modules FLOPs (Billion), comparing attention, grouped attention (g=3), patch attention (k=3) and feed-forward modules across Stage 1 (d=180, n=500), Stage 2 (d=256, n=250) and Stage 3 (d=360, n=125).

Figure 3 shows the amount of FLOPs for each attention module variant with respect to encoded sequence length n and model feature dimension d.
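The patch attention mechanism above can be sketched in a few lines of PyTorch. This is an illustrative reconstruction, not the paper's implementation: it uses standard multi-head self-attention and omits the relative positional score S^rel of Eq. (4); the module and argument names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAttention(nn.Module):
    """Average-pool the sequence, attend at low resolution, upsample back."""
    def __init__(self, d_model: int, num_heads: int, patch_size: int):
        super().__init__()
        self.patch_size = patch_size
        # Relative positional scores (Eq. 4) are omitted in this sketch.
        self.mhsa = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        n = x.shape[1]
        # Eq. (1): AvgPooling1d over time, (B, T, D) -> (B, T/k, D)
        xp = F.avg_pool1d(x.transpose(1, 2), self.patch_size).transpose(1, 2)
        # Eqs. (2)-(4): MHSA at the reduced resolution
        out, _ = self.mhsa(xp, xp, xp)
        # Eq. (5): nearest neighbor upsampling back to the input resolution
        out = F.interpolate(out.transpose(1, 2), size=n, mode="nearest")
        return out.transpose(1, 2)

attn = PatchAttention(d_model=64, num_heads=4, patch_size=3)
y = attn(torch.randn(2, 9, 64))
print(y.shape)  # torch.Size([2, 9, 64])
```

Because the output is upsampled with nearest neighbor interpolation, the k temporal elements of each patch share the same attention output, matching the description above; attention itself only costs O((n/k)² · d).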
Using patch or grouped attention variants instead of regular MHSA greatly reduces the amount of FLOPs in the first audio back-end stage.

2.3. Intermediate CTC Predictions

Inspired by [27] and [33], who proposed to add intermediate CTC losses between encoder blocks to improve CTC-based speech recognition performance, we add Inter CTC residual modules (Figure 4) in the encoder networks. We condition intermediate block features of the audio, visual and audio-visual encoders on early predictions to relax the conditional independence assumption of CTC models. During both training and inference, each intermediate prediction is summed with the input of the next layer to help recognition. We use the same method as proposed in [33], except that we do not share layer parameters between losses.
The l-th block output X_l^out is passed through a feed-forward network with a residual connection and a softmax activation function:

Z_l = Softmax(Linear(X_l^out))    (6)
X_(l+1)^in = X_l^out + Linear(Z_l)    (7)

where Z_l ∈ R^(T×V) is a probability distribution over the output vocabulary. The intermediate CTC loss is then computed using the target sequence y as:

L_l^inter = −log P(y|Z_l)    (8)
with P(y|Z_l) = Σ_(π ∈ B_CTC^(−1)(y)) Π_(t=1..T) Z_(t,π_t)    (9)

Figure 4: Inter CTC residual module. Intermediate predictions are summed with the input of the next Conformer block to condition the prediction of the final block on them. Intermediate CTC losses are added to the output CTC loss for the computation of the final loss.

where π ∈ V^T are paths of tokens and B_CTC is a many-to-one map that simply removes all blanks and repeated labels from the paths. The total training objective is defined as follows:

L = (1 − λ) L_CTC + λ L_inter    (10)
with L_inter = (1/K) Σ_(k ∈ interblocks) L_k^inter    (11)

where interblocks is the set of blocks having a post Inter CTC residual module (Figure 4).
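The Inter CTC residual module of Eqs. (6)-(9) can be sketched in PyTorch as follows. This is an illustrative reconstruction under stated assumptions (layer names are invented; nn.CTCLoss stands in for Eqs. (8)-(9), with blank index 0):

```python
import torch
import torch.nn as nn

class InterCTCResidual(nn.Module):
    """Eqs. (6)-(7): predict, then feed the prediction back into the trunk."""
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.to_vocab = nn.Linear(d_model, vocab_size)   # Linear of Eq. (6)
        self.to_model = nn.Linear(vocab_size, d_model)   # Linear of Eq. (7)
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)

    def forward(self, x_out, targets, input_lens, target_lens):
        z = self.to_vocab(x_out).softmax(dim=-1)         # Z_l, shape (B, T, V)
        x_next = x_out + self.to_model(z)                # input of next block
        # nn.CTCLoss expects log-probabilities of shape (T, B, V).
        loss = self.ctc(z.clamp_min(1e-8).log().transpose(0, 1),
                        targets, input_lens, target_lens)
        return x_next, loss

mod = InterCTCResidual(d_model=64, vocab_size=32)
x = torch.randn(2, 50, 64)
targets = torch.randint(1, 32, (2, 10))
x_next, loss = mod(x, targets, torch.full((2,), 50), torch.full((2,), 10))
print(x_next.shape, loss.item())
```

The total objective of Eqs. (10)-(11) would then average such intermediate losses and mix them with the output CTC loss, e.g. `0.5 * ctc_out + 0.5 * inter_losses.mean()` for λ = 0.5.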
Similar to [33], we use Inter CTC residual modules every 3 Conformer blocks, with λ set to 0.5 in all experiments.

3. Experiments

3.1. Datasets

We use 3 publicly available AVSR datasets in this work. The Lip Reading in the Wild (LRW) [8] dataset is used for visual pre-training, while the Lip Reading Sentences 2 (LRS2) [1] and Lip Reading Sentences 3 (LRS3) [2] datasets are used for training and evaluation.

LRW dataset. LRW is an audio-visual word recognition dataset consisting of short video segments containing a single word out of a vocabulary of 500.
The dataset comprises 488,766 training samples with at least 800 utterances per class, and validation and test sets of 25,000 samples each, containing 50 utterances per class.

LRS2 & LRS3 datasets. The LRS2 dataset is composed of 224.1 hours with 144,482 video clips from BBC television, whereas the LRS3 dataset consists of 438.9 hours with 151,819 video clips extracted from TED and TEDx talks. Both datasets include corresponding subtitles with word alignment boundaries and are composed of a pre-train split, a train-val split and a test split. LRS2 has 96,318 utterances for pre-training (195 hours), 45,839 for training (28 hours), 1,082 for validation (0.6 hours), and 1,243 for testing (0.5 hours).
LRS3 has 118,516 utterances in the pre-training set (408 hours), 31,982 utterances in the training-validation set (30 hours) and 1,321 utterances in the test set (0.9 hours). All videos contain a single speaker, have a 224 × 224 pixel resolution and are sampled at 25 fps with 16 kHz audio.

3.2. Implementation Details

Pre-processing. Similar to [29], we remove differences related to rotation and scale by cropping the lip regions using bounding boxes of 96 × 96 pixels to facilitate recognition. The RetinaFace [11] face detector and Face Alignment Network (FAN) [6] are used to detect 68 facial landmarks. The cropped images are then converted to gray-scale and normalised between −1 and 1.
Facial landmarks for the LRW, LRS2 and LRS3 datasets are obtained from previous work [30] and reused for pre-processing to allow a clean comparison between methods. A byte-pair encoding tokenizer is built from the LRS2&3 pre-train and train-val split transcripts using SentencePiece [26]. We use a vocabulary size of 256 including the CTC blank token, following preceding works on CTC-based speech recognition [31, 7].

Data augmentation. SpecAugment [35] is applied to the audio mel-spectrograms during training to prevent overfitting, using two frequency masks with mask size parameter F = 27 and five time masks with adaptive size pS = 0.05. Similarly to [30], we mask videos on the time axis using one mask per second, with the maximum mask duration set to 0.4 seconds. Random cropping with size 88 × 88 and horizontal flipping are also performed for each video during training.
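The adaptive video time masking described above (one mask per second, maximum duration 0.4 s at 25 fps) might be sketched as follows; the exact sampling policy and fill value are assumptions:

```python
import numpy as np

def mask_video_time(frames: np.ndarray, fps: int = 25,
                    max_mask_s: float = 0.4, rng=None) -> np.ndarray:
    """Zero out one random temporal span per second (SpecAugment-style)."""
    rng = rng or np.random.default_rng()
    frames = frames.copy()
    t = len(frames)
    num_masks = max(1, t // fps)         # one mask per second of video
    max_len = int(max_mask_s * fps)      # 0.4 s -> 10 frames at 25 fps
    for _ in range(num_masks):
        length = int(rng.integers(0, max_len + 1))
        start = int(rng.integers(0, max(1, t - length)))
        frames[start:start + length] = 0.0
    return frames

video = np.ones((75, 88, 88))            # 3 seconds of 88x88 lip crops
masked = mask_video_time(video)
print(masked.shape)
```

Frequency and time masking on the mel-spectrograms follows the same pattern along the frequency and time axes respectively.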
We also follow Prajwal et al. [37] in using a central crop with horizontal flipping at test time for visual-only experiments.

Training Setup. We first pre-train the visual encoder on the LRW dataset [8] using a cross-entropy loss to recognize the word being spoken. The visual encoder is pre-trained for 30 epochs, and the front-end weights are then used as initialization for training. Audio and visual encoders are trained on the LRS2&3 datasets using a Noam schedule [44] with 10k warmup steps and a peak learning rate of 1e-3. We use the Adam optimizer [24] with β1 = 0.9, β2 = 0.98. L2 regularization with a weight of 1e-6 is also added to all trainable weights of the model.
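The Noam schedule with 10k warmup steps and a 1e-3 peak can be written as a small function. This sketch uses a peak-normalized form, which is an equivalent reparameterization of the original d_model^(−0.5) scaling:

```python
import math

def noam_lr(step: int, peak_lr: float = 1e-3, warmup: int = 10_000) -> float:
    """Linear warmup to peak_lr, then inverse-square-root decay."""
    step = max(step, 1)
    return peak_lr * min(step / warmup, math.sqrt(warmup / step))

print(noam_lr(10_000))  # 0.001 (peak at the end of warmup)
print(noam_lr(40_000))  # 0.0005 (halved after 4x the warmup steps)
```

The learning rate rises linearly over the first 10k steps and then decays proportionally to 1/√step.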
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' We train all models with a global batch size of 256 on 4 GPUs, using a batch size of 16 per GPU with 4 accumulated steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' Nvidia A100 40GB GPUs are used for visual-only and audio-visual experiments while RTX 2080 Ti are used for audio-only experiments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' The audio-only models are trained for 200 epochs while visual- only and audio-visual models are trained for 100 and 70 epochs respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' Note that we only keep videos shorter than 400 frames (16 seconds) during training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' Finally, we average models weights over the last 10 epoch checkpoints using Stochastic Weight Averaging [22] before evaluation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' Table 5: Comparison of WER (%) on LRS2 / LRS3 test sets with recently published methods using publicly and non-publicly available datasets for Audio-Only (AO), Visual-Only (VO) and Audio-Visual (AV) models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' Method Model Criterion Training Datasets Total Hours test WER AO VO AV (↓) Using Publicly Available Datasets (↓) Petridis et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' [36] CTC+S2S LRW, LRS2 381 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 / - 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='5 / - 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='0 / - Zhang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' [48] S2S LRW, LRS2&3 788 / 790 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 / 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='1 Afouras et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' [3] CTC VoxCeleb2clean, LRS2&3 1,032 / 808 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 / 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 Xu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' [45] S2S LRW, LRS3 595 / 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 / 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 / 6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 Yu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' [46] LF-MMI LRS2 224 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 / - 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 / - 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 / - Ma et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' [29] CTC+S2S LRW, LRS2&3 381 / 595 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 / 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 / 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 / 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 Prajwal et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' [37] S2S LRS2&3 698 28.' 
Method                    Criterion   Training Datasets         Hours    Audio-only    Visual-only    Audio-visual
…                                                                                      …9 / 40.6
Ma et al. [30]            CTC+S2S     LRW, LRS2&3                  818                 27.3 / 34.7
Ours                      CTC         LRW, LRS2&3                  818    2.8 / 2.1    32.6 / 39.2    2.5 / 1.9
+ Neural LM               CTC         LRW, LRS2&3                  818    2.4 / 2.0    29.8 / 37.5    2.3 / 1.8
(↓) Using Non-Publicly Available Datasets (↓)
Afouras et al. [1]        S2S         MVLRS, LRS2&3              1,395    9.7 / 8.3    48.3 / 58.9    8.5 / 7.2
Zhao et al. [49]          S2S         MVLRS, LRS2                  954                 65.3 / -
Shillingford et al. [40]  CTC         LSVSR                      3,886                 - / 55.1
Makino et al. [32]        Transducer  YouTube-31k               31,000    - / 4.8      - / 33.6       - / 4.5
Serdyuk et al. [39]       Transducer  YouTube-90k               91,000                 - / 25.9       - / 2.3
Prajwal et al. [37]       S2S         MVLRS, TEDxext, LRS2&3     2,676                 22.6 / 30.7
Ma et al. [30]            CTC+S2S     LRW, AVSpeech, LRS2&3      1,459                 25.5 / 31.5

Language Models. Similarly to [28], we experiment with an N-gram [21] statistical language model (LM) and a Transformer neural language model. A 6-gram LM is used to generate a list of hypotheses with beam search, and an external Transformer LM is used to rescore the final list. The 6-gram LM is trained on the LRS2&3 pre-train and train-val transcriptions. Concerning the neural LM, we pre-train a 12-layer GPT-3 Small [5] on the LibriSpeech LM corpus for 0.5M steps using a batch size of 0.1M tokens and finetune it for 10 epochs on the LRS2&3 transcriptions.
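The two-pass decoding described above (beam search with a 6-gram LM producing an n-best list, then rescoring with an external Transformer LM) can be sketched as follows. The function name, interpolation weight, length bonus, and the toy LM stand-in below are illustrative assumptions, not the paper's exact recipe.

```python
def rescore_nbest(hypotheses, lm_score_fn, lm_weight=0.5, length_bonus=0.0):
    """Rescore an n-best list of (text, acoustic_log_prob) pairs with an
    external LM: score = acoustic + lm_weight * LM + length_bonus * #words."""
    rescored = []
    for text, ac_logp in hypotheses:
        score = ac_logp + lm_weight * lm_score_fn(text) + length_bonus * len(text.split())
        rescored.append((text, score))
    # Best hypothesis first.
    return sorted(rescored, key=lambda x: x[1], reverse=True)

# Toy LM standing in for a Transformer LM: rewards the more fluent candidate.
toy_lm = lambda t: -0.1 * len(t.split()) + (2.0 if "the" in t else 0.0)

nbest = [("cat sat", -3.0), ("the cat sat", -3.2)]
best = rescore_nbest(nbest, toy_lm, lm_weight=1.0)[0][0]
```

With the toy LM, the lower-acoustic-score but more fluent hypothesis wins after rescoring, which is the intended effect of the second pass.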
3.3. Results

Table 5 compares WERs of our Audio-Visual Efficient Conformer with state-of-the-art methods on the LRS2 and LRS3 test sets. Our Audio-Visual Efficient Conformer achieves state-of-the-art performance with WERs of 2.3% / 1.8%. On the visual-only track, our CTC model competes with the most recent autoregressive methods trained with the S2S criterion. We were able to recover similar results but still lag behind Ma et al. [30], which uses auxiliary losses with pre-trained audio-only and visual-only networks. We found our audio-visual network to converge faster than audio-only experiments, reaching better performance using 4 times fewer training steps.
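The WER figures compared throughout are word-level Levenshtein distances (substitutions, insertions, deletions) normalized by the number of reference words. A minimal self-contained implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / len(ref)
```

On the Figure 5 example, the final block output differs from the reference only by the deleted word "a", i.e. one error over 18 reference words.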
The intermediate CTC losses of the visual encoder could reach lower levels than in visual-only experiments, showing that optimizing audio-visual layers can help pre-fusion layers learn better representations.

3.4. Ablation Studies

We propose a detailed ablation study to better understand the improvements in complexity and WER brought by each architectural modification. We report the number of operations measured in FLOPs (number of multiply-and-add operations) required for the network to process a ten-second audio/video clip. Inverse Real Time Factor (Inv RTF) is also measured on the LRS3 test set by decoding with a batch size of 1 on a single Intel Core i7-12700 CPU thread. All ablations were performed by training audio-only models for 200 epochs and visual-only / audio-visual models for 50 epochs.

Efficient Conformer Visual Back-end.
We improve the recently proposed visual Conformer encoder [29] using an Efficient Conformer back-end network. The use of byte-pair encodings for tokenization instead of characters allows us to further downsample temporal sequences without impacting the computation of the CTC loss. Table 6 shows that using an Efficient Conformer back-end network for our visual-only model leads to better performance while reducing model complexity and training time. The number of model parameters is also slightly decreased.

Table 6: Ablation study on visual back-end network.

Visual Back-end   #Params (Million)   LRS2 test   LRS3 test   #FLOPs (Billion)   Inv RTF
Conformer               43.0            39.53       47.14          87.94           5.17
Eff Conf                40.4            37.39       44.96          84.52           5.26

Reference: the authors looked at papers written over a 10 year period and hundreds had to be thrown out
Outputs:
Block 3:  the otho looing pa people we over s any your per and conndries that aboutent threghow
Block 6:  the autthherss looking paperss we overai year paiod and hundreds that about thrououtow
Block 9:  the authors looked at papers witen over ainght year period and hundreds that to been throw out
Block 12: the authors looked at papers written over 10 year period and hundreds had to be thrown out

Figure 5: Output example of our Visual-only model using greedy search decoding on the LRS3 test set with intermediate CTC prediction every 3 blocks. The sentence is almost correctly transcribed except for the missing 'a' before '10 year'.

Inter CTC residual modules. Similar to [33], we experiment with adding Inter CTC residual modules between blocks to relax the conditional independence assumption of CTC.
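One way to realize such a module, loosely following the self-conditioned CTC idea: project intermediate features to vocabulary posteriors (used for the Inter CTC loss), then project the posteriors back to the model dimension and add them residually so that later blocks condition on early predictions. The NumPy sketch below is a simplified stand-in (random weights, no normalization layers), not the paper's exact module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class InterCTCResidual:
    """Intermediate CTC head whose frame-level posteriors are projected back
    and added to the features, letting later blocks condition on them."""
    def __init__(self, d_model, vocab_size, seed=0):
        rng = np.random.default_rng(seed)
        self.w_out = rng.standard_normal((d_model, vocab_size)) * 0.02  # features -> vocab logits
        self.w_in = rng.standard_normal((vocab_size, d_model)) * 0.02   # posteriors -> features

    def __call__(self, x):
        logits = x @ self.w_out         # (T, vocab): intermediate CTC prediction
        posteriors = softmax(logits)    # fed to the Inter CTC loss and to conditioning
        x = x + posteriors @ self.w_in  # residual conditioning of following blocks
        return x, logits

module = InterCTCResidual(d_model=8, vocab_size=5)
feats = np.zeros((10, 8))
out, logits = module(feats)
```

In training, each intermediate `logits` tensor would receive its own CTC loss, summed with the final-layer loss.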
Table 7 shows that using intermediate CTC losses every 3 Conformer blocks greatly helps to reduce WER, except in the audio-only setting, where it does not improve performance. Figure 5 gives an example of intermediate block predictions decoded using greedy search without an external language model on the LRS3 test set. We can see that the output is refined in the encoder layers by conditioning on the intermediate predictions of previous layers. Since our model refines the output over the frame-level predictions, it can correct insertion and deletion errors in addition to substitution errors. We further study the impact of Inter CTC on multi-modal learning by measuring the performance of our audio-visual model when one of the two modalities is masked. As pointed out by preceding works [8, 1, 32], networks with multi-modal inputs can often be dominated by one of the modes. In our case, speech recognition is a significantly easier problem than lip reading, which can cause the model to ignore visual information.
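The greedy search used for these intermediate predictions reduces, once the per-frame argmax token IDs are taken, to collapsing consecutive repeats and dropping blanks. A minimal sketch (blank index 0 assumed):

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Greedy CTC decoding: collapse consecutive repeated labels,
    then remove blank tokens."""
    out, prev = [], None
    for t in frame_ids:
        if t != prev and t != blank:
            out.append(t)
        prev = t
    return out
```

For example, the frame sequence [0, 1, 1, 0, 2, 2, 2, 0, 1] decodes to the label sequence [1, 2, 1].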
Table 8 shows that Inter CTC can help to counter this problem by forcing pre-fusion layers to transcribe the input signal.

Table 7: Ablation study on Inter CTC residual modules.

Model          Back-end      #Params (Million)   LRS2 test   LRS3 test   #FLOPs (Billion)   Inv RTF
Audio-only     Eff Conf            31.5             2.83        2.13           7.54           51.98
               + Inter CTC         32.1             2.84        2.11           7.67           50.30
Visual-only    Eff Conf            40.4            37.39       44.96          84.52            5.26
               + Inter CTC         40.9            33.82       40.63          84.60            5.26
Audio-visual   Eff Conf            60.9             2.87        2.54          90.53            4.84
               + Inter CTC         61.7             2.58        1.99          90.66            4.82

Table 8: Impact of Inter CTC on audio-visual model WER (%) for LRS2 / LRS3 test sets in a masked modality setting.

Inter CTC   Masked Video    Masked Audio     No Mask
No           4.48 / 3.22    52.77 / 59.10    2.87 / 2.54
Yes          3.39 / 2.38    37.62 / 46.55    2.58 / 1.99

Patch multi-head self-attention. We experiment with replacing grouped attention by patch attention in the first audio encoder stage. Our objective is to increase model efficiency and simplicity without harming performance.
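A plausible reading of patch attention can be sketched under simplifying assumptions (no learned query/key/value projections, a single head): average-pool the sequence into non-overlapping patches of size p, run standard self-attention over the T/p pooled positions, then replicate each output p times. This shrinks the quadratic attention term from O(T²d) to roughly O((T/p)²d).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patch_self_attention(x, p=3):
    """Simplified patch attention: pool into patches of size p, attend over
    the shortened sequence, then replicate outputs back to length T."""
    T, d = x.shape
    assert T % p == 0, "pad the sequence to a multiple of the patch size"
    patches = x.reshape(T // p, p, d).mean(axis=1)  # (T/p, d) pooled sequence
    scores = patches @ patches.T / np.sqrt(d)       # (T/p, T/p) attention scores
    out = softmax(scores) @ patches                 # (T/p, d) attended patches
    return np.repeat(out, p, axis=0)                # back to (T, d)

x = np.random.default_rng(0).standard_normal((12, 4))
y = patch_self_attention(x, p=3)
```

With p = 3 and a ten-second clip, the attention score matrix is 9 times smaller than with regular attention over the full sequence, consistent with the FLOPs reduction reported in Table 9.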
Grouped attention was proposed in [7] to reduce attention complexity for long sequences in the first encoder stage. Table 9 shows the impact of each attention variant on our audio-only model's performance and complexity. We start with an Efficient Conformer (M) [7] and replace the attention mechanism. We find that grouped attention can be replaced by patch attention without a loss of performance using a patch size of 3 in the first back-end stage.

Table 9: Ablation study on audio back-end attention.

Attention Type   Group / Patch Size   LRS2 test   LRS3 test   #FLOPs (Billion)   Inv RTF
Regular                                  2.85        2.12           8.66           49.86
Grouped              3, 1, 1             2.82        2.13           8.06           50.27
Patch                3, 1, 1             2.83        2.13           7.54           51.98

3.5. Noise Robustness

We measure model noise robustness using various types of noise and compare our Audio-Visual Efficient Conformer with recently published methods. Figure 6 shows the WER evolution of audio-only (AO), visual-only (VO) and audio-visual (AV) models with respect to multiple Signal-to-Noise Ratios (SNR) using white noise and babble noise from the NoiseX corpus [43].
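Mixing noise into speech at a target SNR, as in these evaluations, amounts to scaling the noise so that 10·log10(P_speech/P_noise) equals the desired value. A common recipe (the waveforms below are synthetic stand-ins for speech and NoiseX noise):

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then mix: SNR(dB) = 10 * log10(P_speech / P_noise)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = np.sin(np.linspace(0, 100, 16000))  # stand-in for a speech waveform
noise = rng.standard_normal(16000)           # stand-in for babble/white noise
noisy = add_noise_at_snr(speech, noise, snr_db=0)
```

Sweeping `snr_db` over the values on the x-axis of Figure 6 reproduces the evaluation conditions; at 0 dB the speech and the added noise carry equal power.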
We find that processing both audio and visual modalities can significantly improve speech recognition robustness with respect to babble noise. Moreover, we also experiment with adding babble noise during training, as done in previous works [36, 29], and find that it further improves noise robustness at test time.

Robustness to various types of noise. We gather various types of recorded audio noise, including sounds and music. In Table 10, we observe that the Audio-Visual Efficient Conformer consistently achieves better performance than its audio-only counterpart in the presence of various noise types. This confirms our hypothesis that the audio-visual model is able to use the visual modality to aid speech recognition when audio noise is present in the input.
Figure 6: LRS2 and LRS3 test WER (%) as a function of SNR (dB), for (a) babble noise and (b) white noise. * indicates experiments trained with babble noise. We measure noise robustness by evaluating our models in the presence of babble and white noise.

Table 10: LRS3 test WER (%) as a function of SNR (dB).

Noise    Mode     -5       0       5      10     15     20
babble   AO     75.9    32.4     9.3     4.1    2.7    2.3
         AV     33.5    14.8     5.4     3.0    2.3    2.0
         AV*    11.2     4.9     3.1     2.5    2.2    2.0
white    AO     77.6    34.0    15.5     7.3    4.1    2.8
         AV     28.9    14.7     5.5     3.0    2.3    2.0
         AV*    17.4     8.9     3.6     2.8    2.3    2.0
birds    AO     51.8    23.9    10.9     5.9    3.7    2.8
         AV     21.6    11.5     6.2     4.1    2.9    2.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='4 AV* 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='4 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='4 chainsaw AO 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='5 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 AV 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 17.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='6 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='6 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 AV* 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='0 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='4 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 jazz AO 25.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='1 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='1 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='6 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 AV 13.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='0 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='4 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='0 AV* 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='6 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='4 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='0 street raining AO 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='4 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='6 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='0 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='5 AV 27.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='12 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='1 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 AV* 15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 6.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 washing dishes AO 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='5 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='5 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='0 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 AV 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='5 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='1 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='6 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='8 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 AV* 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='6 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 train AO 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='6 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='0 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='0 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='9 2.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='5 AV 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='1 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='1 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='7 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='0 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='4 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 AV* 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='5 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='5 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='6 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content='2 Comparison with other methods.' 
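The evaluation above requires mixing a noise recording into the clean waveform at a target SNR. A minimal sketch of such a mixing step (a hypothetical helper, not the authors' code; the scaling simply follows the standard power-ratio definition of SNR):

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so that the clean-to-noise power ratio equals `snr_db`, then add it."""
    noise = np.resize(noise, clean.shape)  # tile/trim noise to match the clean length
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # SNR(dB) = 10 * log10(p_clean / (scale^2 * p_noise))  =>  solve for scale
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of a 440 Hz tone
noise = rng.standard_normal(16000)                          # synthetic white noise
mixed = mix_at_snr(clean, noise, snr_db=0.0)
```

At 0 dB the scaled noise carries the same power as the speech; the hardest column of the tables, -5 dB, corresponds to noise about three times more powerful than the speech.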
Comparison with other methods. We compare our method with results provided by Ma et al. [29] and Petridis et al. [36] on the LRS2 test set. Table 11 shows that our audio-visual model achieves lower WER in the presence of babble noise, reaching a WER of 9.7% at -5 dB SNR against 16.3% for Ma et al. [29].

Table 11: Comparison with Ma et al. [29]. LRS2 test WER (%) as a function of SNR (dB) using babble noise.

Method           Mode    -5      0      5     10     15     20
Ma et al. [29]   VO    37.9   37.9   37.9   37.9   37.9   37.9
                 AO*   28.8    9.8    7.0    5.2    4.5    4.2
                 AV*   16.3    7.5    6.1    4.7    4.4    4.2
Ours             VO    32.6   32.6   32.6   32.6   32.6   32.6
                 AO    70.5   27.0    8.6    4.7    3.4    3.1
                 AV    25.0   11.2    5.1    3.2    2.8    2.6
                 AV*    9.7    5.0    3.4    2.9    2.8    2.6

Table 12: Comparison with Petridis et al. [36]. LRS2 test WER (%) as a function of SNR (dB) using white noise.
Method                Mode    -5      0      5     10     15     20
Petridis et al. [36]  VO    63.5   63.5   63.5   63.5   63.5   63.5
                      AO*   85.0   45.4   19.6   11.7    9.4    8.4
                      AV*   55.0   26.1   13.2    9.4    8.0    7.3
Ours                  VO    32.6   32.6   32.6   32.6   32.6   32.6
                      AO    73.1   32.3   14.3    7.2    4.4    3.5
                      AV    22.5   11.5    6.2    4.1    3.2    2.9
                      AV*   14.4    8.0    5.1    3.9    3.1    2.9

4. Conclusion

In this paper, we proposed to improve the noise robustness of the recently proposed Efficient Conformer CTC-based architecture by processing both audio and visual modalities. We showed that incorporating multi-scale CTC losses between blocks could help to improve recognition performance, reaching comparable results to most recent autoregressive lip reading methods. We also proposed patch attention, a simpler and more efficient attention mechanism to replace grouped attention in the first audio encoder stage. Our Audio-Visual Efficient Conformer achieves state-of-the-art performance of 2.3% and 1.8% WER on the LRS2 and LRS3 test sets.
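The intermediate CTC conditioning summarized above can be illustrated with a small sketch: an intermediate block output is projected to frame-wise CTC posteriors, which are projected back to the model dimension and added to the features, so that later blocks are conditioned on early predictions. All names, weights and dimensions here are illustrative toy values, not the paper's configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def inter_ctc_residual(x, w_out, w_in):
    """x: (T, d) intermediate features; w_out: (d, V) to CTC logits; w_in: (V, d) back."""
    logits = x @ w_out                # frame-wise CTC logits (T, V), used for the auxiliary loss
    probs = softmax(logits, axis=-1)  # early predictions used as conditioning
    return x + probs @ w_in, logits   # residual conditioning for the following blocks

rng = np.random.default_rng(0)
T, d, V = 50, 8, 32                   # frames, model dim, vocab size (toy)
x = rng.standard_normal((T, d))
w_out = rng.standard_normal((d, V)) * 0.1
w_in = rng.standard_normal((V, d)) * 0.1
y, logits = inter_ctc_residual(x, w_out, w_in)
```

The residual addition keeps the feature dimension unchanged, so such a module can be inserted between encoder blocks without altering the rest of the architecture.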
In the future, we would like to explore other techniques to further improve the noise robustness of our model and close the gap with recent lip reading methods. This includes adding various audio noises during training and using cross-modal distillation with pre-trained models. We also wish to reduce the complexity of the visual front-end network without harming recognition performance, and to experiment with the RNN-Transducer learning objective for streaming applications.

Acknowledgments

This work was partly supported by the Alexander von Humboldt Foundation (AvH).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' Hearing lips: Improving lip reading by dis- tilling speech recognizers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'} +page_content=' In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 34, pages 6917– 6924, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQffvxD/content/2301.01456v1.pdf'}