An evaluation and analysis of fine-tuned representations for code-switched low-resource speech recognition

Tolúlọpẹ́ Ògúnrẹ̀mí, a PhD student at Stanford University, will present her work.

Recognising code-switched speech (speech that alternates between two or more languages or language varieties within a conversation) is an important technical and social problem for modern society. The majority of current speech recognisers are trained monolingually and therefore perform poorly on such utterances. Deep Neural Network (DNN) architectures allow models to learn representations that are shared across languages, and provide an opportunity to leverage those representations to better handle code-switching. In the two studies contained in this work, we show that multilingual fine-tuning of self-supervised speech representations can handle code-switching in a zero-resource scenario and, through analysis of the latent representations, that code-switching is encoded in the model. We find that monolingual data is enough for character-level decoding in the code-switched scenario, and that the learned representations are not similar to word vectors.
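For readers unfamiliar with the recipe the abstract refers to, the sketch below shows the general pattern of fine-tuning a multilingual self-supervised speech model with a character-level CTC head, using the Hugging Face Transformers library. It is a minimal illustration, not the speaker's exact setup: the XLSR-53 checkpoint, the toy character vocabulary, and the dummy training step are all assumptions made for the example.

```python
# Minimal sketch: attach a character-level CTC head to a multilingual
# self-supervised checkpoint and run one fine-tuning step.
# All specifics here (checkpoint, vocabulary, data) are illustrative.
import json
import torch
from transformers import (Wav2Vec2ForCTC, Wav2Vec2CTCTokenizer,
                          Wav2Vec2FeatureExtractor, Wav2Vec2Processor)

# Hypothetical character set; in practice it would cover the
# orthographies of both languages in the code-switched pair.
chars = list("abcdefghijklmnopqrstuvwxyz'")
vocab = {c: i for i, c in enumerate(chars)}
vocab["|"] = len(vocab)        # word delimiter
vocab["<unk>"] = len(vocab)
vocab["<pad>"] = len(vocab)    # also serves as the CTC blank token
with open("vocab.json", "w") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="<unk>",
                                 pad_token="<pad>", word_delimiter_token="|")
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0,
    do_normalize=True, return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor,
                              tokenizer=tokenizer)

# XLSR-53 is a multilingual self-supervised checkpoint; a fresh
# character-level CTC head is initialised on top of it.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    vocab_size=len(vocab),
    pad_token_id=tokenizer.pad_token_id,
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()  # keep the convolutional front-end fixed

# One toy training step on a dummy utterance/transcript pair; real
# fine-tuning would iterate over monolingual data from each language.
audio = torch.randn(16000)      # 1 s of fake 16 kHz audio
inputs = processor(audio.numpy(), sampling_rate=16000, return_tensors="pt")
labels = processor.tokenizer("hello world", return_tensors="pt").input_ids
loss = model(input_values=inputs.input_values, labels=labels).loss
loss.backward()
```

Because decoding happens at the character level over a shared vocabulary, a model fine-tuned this way can in principle emit text from either language at any point in the utterance, which is what makes the zero-resource code-switched scenario possible.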

When: 4/7/2022

Where: Sala conferenze on the 3rd floor
