dc.contributor.author | Zvarevashe, Kudakwashe | |
dc.contributor.author | Olugbara, Oludayo | |
dc.date.accessioned | 2022-01-20T09:28:23Z | |
dc.date.available | 2022-01-20T09:28:23Z | |
dc.date.issued | 2020-03 | |
dc.identifier.citation | Zvarevashe, K. and Olugbara, O. (2020). Ensemble learning of hybrid acoustic features for speech emotion recognition. Algorithms, 13(3), 70. https://doi.org/10.3390/a13030070 | en_ZW |
dc.identifier.issn | 1999-4893 | |
dc.identifier.uri | https://hdl.handle.net/10646/4366 | |
dc.description.abstract | Automatic recognition of emotion is important for facilitating seamless interaction between
a human being and an intelligent robot, towards the full realization of a smart society. Signal
processing and machine learning methods are widely applied to recognize human emotions from
features extracted from facial images, video files or speech signals. However, these features have not
been able to recognize the fear emotion with the same precision as other emotions. The authors
propose the agglutination of prosodic and spectral features, drawn from a group of carefully selected
features, to realize hybrid acoustic features that improve the task of emotion recognition. Experiments
tested the effectiveness of the proposed features, which were extracted from speech files of two public
databases and used to train five popular ensemble learning algorithms. Results show that random
decision forest ensemble learning of the proposed hybrid acoustic features is highly effective for
speech emotion recognition. | en_ZW |
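The abstract describes concatenating ("agglutinating") prosodic and spectral descriptors into a single hybrid acoustic feature vector before training an ensemble classifier. A minimal NumPy sketch of that idea, using simple stand-in descriptors (short-time energy, zero-crossing rate, spectral centroid) rather than the paper's actual selected feature set:

```python
import numpy as np

def frame_signal(x, frame_len=512, hop=256):
    """Split a 1-D signal into overlapping frames (rows)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def hybrid_features(x, sr=16000):
    """Concatenate prosodic and spectral descriptors into one vector.
    Illustrative proxies only -- not the feature set used in the paper."""
    frames = frame_signal(x)
    # Prosodic proxies: short-time energy and zero-crossing rate per frame
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    # Spectral proxy: spectral centroid per frame
    mag = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    centroid = (mag @ freqs) / (mag.sum(axis=1) + 1e-12)
    # "Agglutinate": summarise each per-frame track by mean and std,
    # then concatenate into a single fixed-length feature vector
    tracks = [energy, zcr, centroid]
    return np.array([f(t) for t in tracks for f in (np.mean, np.std)])

# Example: a synthetic 1-second, 16 kHz utterance (220 Hz tone plus noise)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000) \
    + 0.05 * rng.standard_normal(16000)
feats = hybrid_features(x)
print(feats.shape)  # one hybrid vector per utterance: (6,)
```

Vectors like this, computed per utterance, would then be fed to an ensemble learner such as a random decision forest (e.g. scikit-learn's `RandomForestClassifier`) with emotion labels as targets, which is the classification setup the abstract reports as most effective.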
dc.publisher | MDPI | en_ZW |
dc.subject | emotion recognition | en_ZW |
dc.subject | ensemble algorithm | en_ZW |
dc.subject | feature extraction | en_ZW |
dc.subject | machine learning | en_ZW |
dc.subject | supervised learning | en_ZW |
dc.title | Ensemble learning of hybrid acoustic features for speech emotion recognition | en_ZW |
dc.type | Article | en_ZW |