Please use this identifier to cite or link to this item:
https://hdl.handle.net/10646/4366
Title: Ensemble learning of hybrid acoustic features for speech emotion recognition
Authors: Zvarevashe, Kudakwashe; Olugbara, Oludayo
Keywords: emotion recognition; ensemble algorithm; feature extraction; machine learning; supervised learning
Issue Date: Mar-2020
Publisher: MDPI
Citation: Zvarevashe, K. and Olugbara, O. (2020). Ensemble learning of hybrid acoustic features for speech emotion recognition. Algorithms, 13(70). https://doi.org/10.3390/a13030070
Abstract: Automatic recognition of emotion is important for facilitating seamless interactivity between a human being and an intelligent robot towards the full realization of a smart society. Signal processing and machine learning methods are widely applied to recognize human emotions based on features extracted from facial images, video files or speech signals. However, these features have not been able to recognize the fear emotion with the same level of precision as other emotions. The authors propose the agglutination of prosodic and spectral features from a group of carefully selected features to realize hybrid acoustic features for improving the task of emotion recognition. Experiments were performed to test the effectiveness of the proposed features, which were extracted from speech files of two public databases and used to train five popular ensemble learning algorithms. Results show that random decision forest ensemble learning of the proposed hybrid acoustic features is highly effective for speech emotion recognition.
URI: https://hdl.handle.net/10646/4366
ISSN: 1999-4893
Appears in Collections: Department of Analytics and Informatics Staff Publications
Files in This Item:
File | Description | Size | Format
---|---|---|---
Zvarevashe_Ensemble_learning_of_hybrid_acoustic_features.pdf | | 964.59 kB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
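As a rough illustration of the pipeline described in the abstract above, the sketch below concatenates prosodic and spectral descriptors into a single hybrid feature vector per speech file and trains a random decision forest ensemble. This is a minimal sketch, not the authors' implementation: the specific descriptors (YIN pitch, RMS energy, MFCCs, spectral centroid and roll-off), the use of librosa and scikit-learn, the file paths, labels and hyperparameters are all assumptions made here for illustration; the exact hybrid acoustic feature set and experimental setup are given in the published article.

```python
# Illustrative sketch only: hybrid prosodic + spectral features fed to a
# random forest, in the spirit of the abstract above. Feature choices,
# corpus layout and hyperparameters are assumptions, not the authors' setup.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier


def hybrid_acoustic_features(path, sr=16000):
    """Concatenate simple prosodic (pitch, energy) and spectral (MFCC,
    centroid, roll-off) statistics for one speech file."""
    y, sr = librosa.load(path, sr=sr)

    # Prosodic cues: fundamental-frequency contour and short-time energy.
    f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                     fmax=librosa.note_to_hz("C7"), sr=sr)
    energy = librosa.feature.rms(y=y)[0]

    # Spectral cues: MFCCs, spectral centroid and spectral roll-off.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)[0]

    # Summarise every frame-level contour with its mean and standard deviation.
    parts = [f0, energy, centroid, rolloff] + list(mfcc)
    return np.array([s for p in parts for s in (np.mean(p), np.std(p))])


# Hypothetical corpus of (file path, emotion label) pairs; replace with
# real paths and labels from a public emotional speech database.
corpus = [("audio/happy_01.wav", "happy"),
          ("audio/fear_01.wav", "fear")]

X = np.vstack([hybrid_acoustic_features(path) for path, _ in corpus])
y = np.array([label for _, label in corpus])

# Random decision forest ensemble over the hybrid feature vectors.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X, y)  # with a real corpus, evaluate via cross-validation instead
```

A random forest is used here because the abstract reports random decision forest ensemble learning as the most effective of the five ensembles tested; with a real corpus one would evaluate the classifier with cross-validation on the extracted feature matrix rather than a single fit.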