Authors: Manolov, A., Boumbarov, O. L., Manolova, A. H., Poulkov, V. K., Tonchev, K.
Title: Feature selection in affective speech classification
Keywords: affective computing, emotion recognition, feature selection, human-computer interaction, neural nets, optimisation, regression analysis, signal classification, speech recognition

Abstract: The growing role of spoken-language interfaces in human-computer interaction applications has opened a new area of research: recognizing the emotional state of the speaker from the speech signal. This paper proposes a text-independent method for emotion classification of speech signals, used for recognizing the emotional state of the speaker. Different feature selection criteria, namely the Mutual Information Maximization (MIM) feature scoring criterion and its derivatives, are explored and analyzed to measure how useful a feature or feature subset may be when used in a classifier. The proposed method represents the speech signals with several groups of low-level features, such as energy, zero-crossing rate, Mel-scale frequency bands, fundamental frequency (pitch), and their delta and delta-delta regression coefficients, together with statistical functionals such as regression coefficients, extrema, and moments, and employs a Neural Network classifier for classification.
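To illustrate the MIM criterion mentioned in the abstract, the following is a minimal sketch, not the authors' implementation: each candidate feature is scored independently by its histogram-based mutual information with the emotion label, and the top-scoring features are retained. The bin count, data shapes, and function names here are illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    # Histogram-based estimate of I(X;Y) between a continuous feature x
    # and discrete class labels y (bin count is an illustrative choice).
    edges = np.histogram_bin_edges(x, bins=bins)[1:-1]  # interior edges
    x_binned = np.digitize(x, edges)                    # values in 0..bins-1
    classes = {c: i for i, c in enumerate(np.unique(y))}
    joint = np.zeros((bins, len(classes)))
    for xb, yv in zip(x_binned, y):
        joint[xb, classes[yv]] += 1
    joint /= joint.sum()                                # joint distribution p(x,y)
    px = joint.sum(axis=1, keepdims=True)               # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)               # marginal p(y)
    nz = joint > 0                                      # avoid log(0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

def mim_rank(X, y, k=3, bins=8):
    # MIM criterion: score every feature (column of X) separately by
    # I(feature; label) and return the indices of the k highest scorers.
    scores = np.array([mutual_information(X[:, j], y, bins)
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k], scores
```

Because MIM scores each feature in isolation, it is cheap but ignores redundancy between features; the derivative criteria the paper refers to typically add such inter-feature terms.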

    Issue

    40th International Conference on Telecommunications and Signal Processing (TSP), pp. 354-358, 2017, Spain, IEEE, DOI 10.1109/TSP.2017.8076004

    Copyright IEEE

    Citations:
    1. Hacine-Gharbi, A., & Ravier, P. (2019). On the optimal number estimation of selected features using joint histogram based mutual information for speech emotion recognition. Journal of King Saud University-Computer and Information Sciences. - 2019 - in publications indexed in Scopus or Web of Science
    2. Das, N., Chakraborty, S., Chaki, J., Padhy, N., & Dey, N. (2020). Fundamentals, present and future perspectives of speech enhancement. International Journal of Speech Technology, 1-19. - 2020 - in publications indexed in Scopus or Web of Science
    3. Jahangir, R., Teh, Y. W., Hanif, F., & Mujtaba, G. (2021). Deep learning approaches for speech emotion recognition: state of the art and research challenges. Multimedia Tools and Applications, 1-66. - 2021 - in publications indexed in Scopus or Web of Science
    4. Hacine-Gharbi, A., & Ravier, P. (2021). On the optimal number estimation of selected features using joint histogram based mutual information for speech emotion recognition. Journal of King Saud University-Computer and Information Sciences, 33(9), 1074-1083. - 2021 - in publications indexed in Scopus or Web of Science

    Type: plenary paper at an international forum, publication in a refereed edition, indexed in Scopus and Web of Science