Dissertation/Thesis Abstract

Speech Emotion Recognition With Gender Recognition Subsystem
by Modi, Deep B., M.S., California State University, Long Beach, 2017, 43; 10263113
Abstract (Summary)

The Emotion Recognition System identifies the emotional state of a speaker by analyzing the emotional patterns in his or her speech. The system determines the speaker's gender before classifying the emotion, which improves the accuracy of the designed system.

This technology is a prominent example of a human-computer interaction system. Implemented in MATLAB, the designed system recognizes a number of emotions, including anger, disgust, boredom, happiness, sadness, and fear. The Emotion Recognition system consists of two subsystems: the Gender Recognition (GR) subsystem and the Emotion Recognition (ER) subsystem. The results show that knowledge of the speaker's gender helps in designing a more accurate Emotion Recognition System.

The feature selection process is designed to reduce the dimensionality of the speech features, retaining only the most informative ones so that the system is less complex. The designed ER system is flexible enough to be deployed in future smartphone technology.
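The gender-first architecture described above can be viewed as a simple dispatch: the GR subsystem's output selects which gender-specific ER model receives the feature vector. The fragment below is a minimal illustrative sketch only, not the thesis implementation (which was written in MATLAB); the threshold, function names, and placeholder classifiers are all hypothetical stand-ins.

```python
# Hypothetical sketch of a two-stage (gender-first) emotion recognizer.
# Real systems would use trained models over MFCC features; the rules
# below are placeholders chosen only to keep the sketch runnable.

EMOTIONS = ["anger", "disgust", "boredom", "happiness", "sadness", "fear"]

def classify_gender(features):
    # Placeholder rule: pretend features[0] is mean pitch in Hz and
    # threshold it (a real GR subsystem would use a trained classifier).
    return "female" if features[0] > 165.0 else "male"

def classify_emotion(features, gender):
    # Placeholder: a real ER subsystem would apply a model trained
    # separately for each gender; here we just index the emotion list.
    offset = 1 if gender == "female" else 0
    return EMOTIONS[(int(features[1]) + offset) % len(EMOTIONS)]

def recognize(features):
    gender = classify_gender(features)            # GR subsystem
    emotion = classify_emotion(features, gender)  # gender-specific ER model
    return gender, emotion

print(recognize([210.0, 2]))  # -> ('female', 'happiness')
```

The design point the sketch captures is that the emotion classifier is conditioned on the GR output, which is the mechanism the abstract credits for the improved accuracy.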

Indexing (document details)
Advisor: Yeh, Hen-Geul
Committee: Ahmed, Aftab; Hamano, Fumio
School: California State University, Long Beach
Department: Electrical Engineering
School Location: United States -- California
Source: MAI 56/04M(E), Masters Abstracts International
Subjects: Electrical engineering
Keywords: BES, Emotion recognition, Gender recognition, Mel-Frequency Cepstrum Coefficients, MFCC, Speech recognition
Publication Number: 10263113
ISBN: 978-1-369-70698-7
Copyright © 2020 ProQuest LLC. All rights reserved.