Keyword Transformer: A Self-Attention Model for Keyword Spotting

Axel Berg, Mark O'Connor, Miguel Tairum Cruz

Research output: Chapter in Book/Report/Conference proceeding › Paper in conference proceeding › peer-review

Abstract

The Transformer architecture has been successful across many domains, including natural language processing, computer vision and speech recognition. In keyword spotting, self-attention has primarily been used on top of convolutional or recurrent encoders. We investigate a range of ways to adapt the Transformer architecture to keyword spotting and introduce the Keyword Transformer (KWT), a fully self-attentional architecture that exceeds state-of-the-art performance across multiple tasks without any pre-training or additional data. Surprisingly, this simple architecture outperforms more complex models that mix convolutional, recurrent and attentive layers. KWT can be used as a drop-in replacement for these models, setting two new benchmark records on the Google Speech Commands dataset with 98.6% and 97.7% accuracy on the 12 and 35-command tasks respectively.
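As an illustration of the kind of fully self-attentional classifier described in the abstract, the sketch below treats each MFCC time frame of a one-second clip as a token, prepends a learnable class token, and classifies from its output. The specific values (40 MFCC coefficients, 98 frames, embedding dimension 192, 12 layers, 3 heads) are assumptions for illustration and not the authors' exact configuration; this is a minimal PyTorch sketch, not the published KWT implementation.

# Minimal sketch of a fully self-attentional keyword-spotting classifier.
# All hyperparameters below are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class KeywordTransformerSketch(nn.Module):
    def __init__(self, n_mfcc=40, n_frames=98, dim=192, depth=12, heads=3, num_classes=35):
        super().__init__()
        # Project each MFCC time frame to a token embedding.
        self.frame_proj = nn.Linear(n_mfcc, dim)
        # Learnable class token and positional embeddings (frames + class token).
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_frames + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, mfcc):
        # mfcc: (batch, n_frames, n_mfcc)
        x = self.frame_proj(mfcc)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        # Classify from the class-token representation.
        return self.head(x[:, 0])

# Example: a batch of 8 one-second clips as 98x40 MFCC features.
model = KeywordTransformerSketch()
logits = model(torch.randn(8, 98, 40))
print(logits.shape)  # torch.Size([8, 35])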
Original language: English
Title of host publication: Proc. Interspeech 2021
Publisher: ISCA
Pages: 4249-4253
Number of pages: 5
DOIs
Publication status: Published - 2021 Aug 30
Event: Interspeech 2021 - Brno, Czech Republic
Duration: 2021 Aug 30 - 2021 Sept 3

Publication series

Name: Interspeech
Publisher: ISCA

Conference

Conference: Interspeech 2021
Country/Territory: Czech Republic
City: Brno
Period: 2021/08/30 - 2021/09/03

Subject classification (UKÄ)

  • Signal Processing
  • Mathematical Sciences

Free keywords

  • keyword spotting
  • machine learning
