Show simple item record

dc.contributor.author: Zvarevashe, Kudakwashe
dc.contributor.author: Olugbara, Oludayo
dc.date.accessioned: 2022-01-20T09:28:23Z
dc.date.available: 2022-01-20T09:28:23Z
dc.date.issued: 2020-03
dc.identifier.citation: Zvarevashe, K. and Olugbara, O. (2020). Ensemble learning of hybrid acoustic features for speech emotion recognition. Algorithms, 13 (70). https://doi.org/10.3390/a13030070 [en_ZW]
dc.identifier.issn: 1999-4893
dc.identifier.uri: https://hdl.handle.net/10646/4366
dc.description.abstract: Automatic recognition of emotion is important for facilitating seamless interactivity between a human being and an intelligent robot towards the full realization of a smart society. Signal processing and machine learning methods are widely applied to recognize human emotions from features extracted from facial images, video files or speech signals. However, these features have not recognized the fear emotion with the same level of precision as other emotions. The authors propose the agglutination of prosodic and spectral features from a group of carefully selected features to realize hybrid acoustic features for improving the task of emotion recognition. Experiments were performed to test the effectiveness of the proposed features, which were extracted from speech files of two public databases and used to train five popular ensemble learning algorithms. Results show that random decision forest ensemble learning of the proposed hybrid acoustic features is highly effective for speech emotion recognition. [en_ZW]
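The pipeline the abstract describes — fusing prosodic and spectral feature groups and training a random decision forest on the result — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature values, the 4-class label set, and the feature dimensions are all placeholder assumptions, with random numbers standing in for descriptors extracted from real speech files.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder stand-ins for per-utterance acoustic descriptors; in the paper
# these would be prosodic (e.g. pitch, energy) and spectral (e.g. MFCC)
# features extracted from speech files in the two public databases.
rng = np.random.default_rng(0)
n = 200
prosodic = rng.normal(size=(n, 8))     # assumed: 8 prosodic statistics
spectral = rng.normal(size=(n, 13))    # assumed: 13 spectral coefficients
labels = rng.integers(0, 4, size=n)    # assumed: 4 emotion classes

# "Agglutination" of the feature groups into hybrid acoustic features:
# here modelled as simple column-wise concatenation.
hybrid = np.hstack([prosodic, spectral])
print(hybrid.shape)  # (200, 21)

# Random decision forest, one of the five ensemble learners tested.
X_train, X_test, y_train, y_test = train_test_split(
    hybrid, labels, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
```

On real data the random features above would be replaced by descriptors computed from the speech signal, and accuracy would be compared across the five ensemble algorithms as in the experiments.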
dc.publisher: MDPI [en_ZW]
dc.subject: emotion recognition [en_ZW]
dc.subject: ensemble algorithm [en_ZW]
dc.subject: feature extraction [en_ZW]
dc.subject: machine learning [en_ZW]
dc.subject: supervised learning [en_ZW]
dc.title: Ensemble learning of hybrid acoustic features for speech emotion recognition [en_ZW]
dc.type: Article [en_ZW]


