Language-Agnostic Age and Gender Classification of Voice using Self-supervised Pre-Training

Fredrik Lastow, Edwin Ekberg, Pierre Nugues

Research output: Chapter in Book/Report/Conference proceeding › Paper in conference proceeding › peer-review

Abstract

Extracting speaker-dependent paralinguistic information from a person's voice provides an opportunity for adaptive, speaker-aware behaviour in speech processing applications. For instance, in audio-based conversational applications, adapting responses to the attributes of the correspondent is an integral part of making the conversations effective. Two speaker attributes that humans can estimate quite well, based solely on hearing a person speak, are the gender and age of that person. However, in the field of speech processing, age and gender classification are relatively unexplored tasks, especially in a multilingual setting. In most cases, hand-crafted features, such as MFCCs, have been used with some success. Recently, however, large transformer networks using self-supervised pre-training have shown promise in creating general speech embeddings for various speech processing tasks. We present a baseline for gender and age detection, in both monolingual and multilingual settings, for multiple state-of-the-art speech processing models fine-tuned for age classification. We created four different datasets with data extracted from the Common Voice project to compare monolingual and multilingual performances. For gender classification, we reached a macro average F1 score of 96% in both monolingual and multilingual settings. For age classification, using 10-year-wide classes, we obtained a macro average mean absolute class error (MACE) of 0.68 and 0.86 on the monolingual and multilingual datasets, respectively. For the English TIMIT dataset, we improve upon the previous state of the art for both age regression and gender classification: our fine-tuned WavLM model reaches a mean absolute error (MAE) of 4.11 years for males and 4.44 for females in age estimation, and our fine-tuned UniSpeech-SAT model reaches an accuracy of 99.8% for gender classification. All the models were deemed fast enough on a GPU to be used in real-time settings, and accurate enough, using only a small amount of speech, to be applicable in multilingual speech processing applications.
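To make the setup concrete, the sketch below shows one way to fine-tune a self-supervised speech model such as WavLM for speaker-attribute classification with the Hugging Face Transformers library, together with a simple mean absolute class error (MACE) computation. This is not the authors' code: the checkpoint name, label set, and the exact MACE definition (mean absolute difference between predicted and true class indices) are assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the paper's implementation) of using a
# self-supervised speech model for gender classification and of computing
# a mean-absolute-class-error style metric for age classes.
import torch
from transformers import AutoFeatureExtractor, WavLMForSequenceClassification

model_name = "microsoft/wavlm-base-plus"   # assumed pre-trained checkpoint
labels = ["female", "male"]                # assumed label set

feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = WavLMForSequenceClassification.from_pretrained(
    model_name,
    num_labels=len(labels),                # new classification head to fine-tune
)

# Forward pass on one 16 kHz waveform (random data stands in for real speech).
waveform = torch.randn(16000 * 3)          # ~3 seconds of audio
inputs = feature_extractor(
    waveform.numpy(), sampling_rate=16000, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_label = labels[int(logits.argmax(dim=-1))]

def mean_absolute_class_error(pred_classes, true_classes):
    """Assumed MACE: mean |predicted class index - true class index|.
    With 10-year-wide age classes, an error of 1.0 means being one decade off."""
    return sum(abs(p - t) for p, t in zip(pred_classes, true_classes)) / len(true_classes)
```

In practice the classification head would be trained on labelled Common Voice or TIMIT utterances (e.g. with the Transformers `Trainer`), and a regression head would replace the classifier for age estimation in years.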

Original language: English
Title of host publication: 34th Workshop of the Swedish Artificial Intelligence Society, SAIS 2022
Publisher: IEEE - Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665471268
DOIs
Publication status: Published - 2022
Event: 34th Workshop of the Swedish Artificial Intelligence Society, SAIS 2022 - Stockholm, Sweden
Duration: 2022 Jun 13 – 2022 Jun 14

Publication series

Name: 34th Workshop of the Swedish Artificial Intelligence Society, SAIS 2022

Conference

Conference: 34th Workshop of the Swedish Artificial Intelligence Society, SAIS 2022
Country/Territory: Sweden
City: Stockholm
Period: 2022/06/13 – 2022/06/14

Subject classification (UKÄ)

  • Natural Language Processing
