GCT: Gated Contextual Transformer For Sequential Audio Tagging

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

Abstract

Audio tagging aims to assign predefined tags to audio clips to indicate the class information of audio events. Sequential audio tagging (SAT) aims to detect both the classes of audio events and the order in which they occur within the audio clip. Most existing methods for SAT are based on connectionist temporal classification (CTC). However, CTC cannot effectively capture connections between events due to the conditional independence assumption between outputs at different times. The contextual Transformer (cTransformer) addresses this issue by exploiting contextual information in SAT. Nevertheless, cTransformer remains limited in how it exploits contextual information, as it uses only forward information during inference. This paper proposes a gated contextual Transformer (GCT) with forward-backward inference (FBI). In addition, a gated contextual multi-layer perceptron (GCMLP) block is proposed in GCT to structurally improve on cTransformer. Experiments on two real-life audio datasets with manually annotated sequential labels show that the proposed GCT with GCMLP and FBI outperforms the CTC-based methods and cTransformer.
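The abstract does not spell out how the forward and backward contextual information are combined, but a gated fusion of the two directions is one plausible reading. The sketch below is an illustrative assumption, not the authors' implementation: the module name `GatedContextFusion`, the dimensionality `d_model`, and the convex-combination gating are all hypothetical choices for how forward and backward decoder states might be merged.

```python
# Hypothetical sketch of gated fusion of forward and backward contextual
# features; names and design are assumptions, not the paper's actual GCMLP.
import torch
import torch.nn as nn


class GatedContextFusion(nn.Module):
    """Fuse forward and backward context vectors with a learned gate."""

    def __init__(self, d_model: int = 256):
        super().__init__()
        # Small MLP mapping the concatenated contexts to per-dimension gates.
        self.gate = nn.Sequential(
            nn.Linear(2 * d_model, d_model),
            nn.Sigmoid(),
        )

    def forward(self, fwd: torch.Tensor, bwd: torch.Tensor) -> torch.Tensor:
        # fwd, bwd: (batch, seq_len, d_model) states from the forward
        # (left-to-right) and backward (right-to-left) passes.
        g = self.gate(torch.cat([fwd, bwd], dim=-1))
        # Convex combination per dimension, controlled by the learned gate.
        return g * fwd + (1.0 - g) * bwd


if __name__ == "__main__":
    fusion = GatedContextFusion(d_model=256)
    fwd = torch.randn(4, 10, 256)
    bwd = torch.randn(4, 10, 256)
    print(fusion(fwd, bwd).shape)  # torch.Size([4, 10, 256])
```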
