Streaming Transformer Transducer Based Speech Recognition Using Non-Causal Convolution

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)


This paper improves the streaming transformer transducer for speech recognition using non-causal convolution. Many prior works apply causal convolution to improve the streaming transformer, ignoring the lookahead context. We propose to use non-causal convolution to process the center block and the lookahead context separately. This method leverages the lookahead context in convolution while maintaining similar training and decoding efficiency. At similar latency, non-causal convolution with lookahead context gives better accuracy than causal convolution, especially for open-domain dictation. In addition, this paper applies talking-head attention and a novel history context compression scheme to further improve performance. Talking-head attention improves multi-head self-attention by transferring information among the heads. The history context compression method compactly incorporates longer history context. On our in-house data, the proposed methods improve a small Emformer baseline with lookahead context by 5.1%, 14.5%, and 8.4% relative WERR on open-domain dictation, assistant general scenarios, and assistant calling scenarios, respectively.
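The distinction between causal and non-causal convolution comes down to how the input is padded: a causal kernel pads only on the left, so each output frame depends solely on current and past frames, while a non-causal kernel splits the padding so each output frame can also attend to a few future (lookahead) frames. The following is a minimal NumPy sketch of this padding difference, not the paper's implementation; the function name and toy data are illustrative assumptions.

```python
import numpy as np

def conv1d(x, w, pad_left, pad_right):
    """1-D convolution with explicit asymmetric zero padding.

    pad_left / pad_right control how much past vs. future context
    each output frame sees; the output has the same length as x.
    """
    xp = np.pad(x, (pad_left, pad_right))
    k = len(w)
    return np.array([np.dot(xp[i:i + k], w) for i in range(len(x))])

x = np.arange(6, dtype=float)  # toy frame sequence [0, 1, 2, 3, 4, 5]
w = np.ones(3) / 3.0           # size-3 averaging kernel

# Causal: all padding on the left; output at t uses x[t-2..t].
causal = conv1d(x, w, pad_left=2, pad_right=0)

# Non-causal: split padding; output at t uses x[t-1..t+1],
# i.e. one frame of lookahead context contributes.
non_causal = conv1d(x, w, pad_left=1, pad_right=1)

print(causal)      # each value averages the current and past frames
print(non_causal)  # each value also folds in one future frame
```

In a streaming setup, the lookahead frames are already buffered for block processing (as in Emformer), so a non-causal kernel sized to fit within that buffer adds accuracy without increasing latency.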
