ASR-based Features for Emotion Recognition: A Transfer Learning Approach
Abstract
During the last decade, applications of signal processing have improved drastically with deep learning. However, areas of affective computing such as emotional speech synthesis or emotion recognition from spoken language remain challenging. In this paper, we investigate the use of a neural Automatic Speech Recognition (ASR) system as a feature extractor for emotion recognition. We show that these features outperform the eGeMAPS feature set in predicting the valence and arousal emotional dimensions, which means that the audio-to-text mapping learned by the ASR system contains information related to the emotional dimensions in spontaneous speech. We also examine the relationship between the first layers (closer to speech) and the last layers (closer to text) of the ASR system and valence/arousal.
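A minimal sketch of the idea described above: take the hidden activations of a pretrained CTC-based ASR model as utterance-level features and regress valence/arousal from them. The specific ASR model (facebook/wav2vec2-base-960h), the layer index, the mean-pooling step, and the Ridge regressor are illustrative assumptions, not the system or setup used in the paper.

```python
# Hedged sketch: ASR hidden-layer activations as features for dimensional
# emotion regression. Model choice, layer index, pooling, and regressor are
# placeholders, not the paper's configuration.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from sklearn.linear_model import Ridge

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
asr = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
asr.eval()

def asr_features(waveform_16k, layer=6):
    """Mean-pool one ASR layer's hidden states into a fixed-size utterance vector."""
    inputs = processor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        out = asr(inputs.input_values, output_hidden_states=True)
    # hidden_states[0] is the convolutional front-end output (closer to speech);
    # higher indices sit closer to the text output of the ASR.
    return out.hidden_states[layer].mean(dim=1).squeeze(0).numpy()

# train_waves / train_valence stand in for an annotated emotional speech corpus.
# X = [asr_features(w) for w in train_waves]
# valence_model = Ridge(alpha=1.0).fit(X, train_valence)
```

Sweeping the `layer` argument from low to high indices mirrors the paper's question of whether layers closer to speech or closer to text carry more valence/arousal information.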
- Publication: arXiv e-prints
- Pub Date: May 2018
- arXiv: arXiv:1805.09197
- Bibcode: 2018arXiv180509197T
- Keywords:
  - Electrical Engineering and Systems Science - Audio and Speech Processing
  - Computer Science - Artificial Intelligence
  - Computer Science - Computation and Language
  - Computer Science - Sound
- E-Print: Accepted to be published in the First Workshop on Computational Modeling of Human Multimodal Language - ACL 2018