The recognition of emotions such as anger, anxiety, and joy from tonal variations in human speech is an important task for research and applications in human-computer interaction. The objective of this research is to design, implement, and test a Speech Emotion Classification (SEC) engine that can extract useful features and accurately classify emotions in human speech in the presence of speaker-dependent variations and noise. Current approaches extract several standard global values from the temporal sequence of power spectra, such as pitch, formants, and energy, as well as values from the time signal, such as attack and decay rates. In this work, the frequency dimension of the spectrogram is quantized to simulate the Bark scale of the human auditory system, the time dimension of the spectrogram is quantized in units starting from 50 ms, and the linear regression coefficients of the surface of each spectrogram segment are combined into a feature vector. In this way, complete local features are extracted to establish a larger sample. The accumulated feature vectors for each emotion category provide a robust training basis for a state-of-the-art classifier such as an SVM. To further improve the performance of the SEC engine and to demonstrate the flexibility and benefit of local features, a backward-context scheme is introduced. A series of experiments was designed and conducted using the EMO-DB and LDC-DB speech emotion databases to measure the performance of the SEC engine. First, accuracy and precision are measured over seven to fifteen emotion categories when the engine is trained on randomly sampled speech utterances. Next, generalization performance is measured through a speaker cross-validation scheme.
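The local-feature scheme described above could be sketched roughly as follows. This is a hypothetical reconstruction: the FFT size, hop length, number of Bark bands, and the Traunmüller approximation of the Bark scale are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def bark(f):
    # Traunmüller approximation of the Bark scale (an assumption; the
    # dissertation abstract does not say which Bark formula is used)
    return 26.81 * f / (1960.0 + f) - 0.53

def local_spectrogram_features(signal, sr=16000, n_fft=512, hop=160,
                               n_bands=16, seg_ms=50):
    """Sketch of the local-feature idea: Bark-quantized frequency axis,
    50 ms time segments, per-segment plane-regression coefficients."""
    # short-time log-power spectrogram
    n_frames = 1 + (len(signal) - n_fft) // hop
    win = np.hanning(n_fft)
    spec = np.empty((n_fft // 2 + 1, n_frames))
    for t in range(n_frames):
        frame = signal[t * hop:t * hop + n_fft] * win
        spec[:, t] = np.abs(np.fft.rfft(frame)) ** 2
    log_spec = np.log(spec + 1e-10)

    # quantize the frequency axis into equal-width Bark bands
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    edges = np.linspace(bark(freqs[0]), bark(freqs[-1]), n_bands + 1)
    band_idx = np.clip(np.digitize(bark(freqs), edges) - 1, 0, n_bands - 1)
    banded = np.stack([log_spec[band_idx == b].mean(axis=0)
                       for b in range(n_bands)])

    # 50 ms time segments; fit a plane z = a*band + b*time + c to each
    # segment's log-power surface and keep the regression coefficients
    frames_per_seg = max(1, int(seg_ms / 1000 * sr / hop))
    feats = []
    for s in range(0, banded.shape[1] - frames_per_seg + 1, frames_per_seg):
        seg = banded[:, s:s + frames_per_seg]
        bb, tt = np.meshgrid(np.arange(n_bands), np.arange(frames_per_seg),
                             indexing="ij")
        A = np.column_stack([bb.ravel(), tt.ravel(), np.ones(seg.size)])
        coef, *_ = np.linalg.lstsq(A, seg.ravel(), rcond=None)
        feats.append(coef)
    return np.concatenate(feats)
```

Each segment contributes three coefficients (frequency slope, time slope, offset), so one utterance yields many local feature vectors rather than a single global one, which is the "larger sample" effect the abstract refers to.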
Third, the generalization and robustness of the SEC engine are measured by performing gender, language, and speaker classification with the engine, thereby measuring its power to discriminate speaker-dependent characteristics. Finally, the noise robustness of the SEC engine is measured as the SNR is varied between 10 and 50 dB.
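The noise-robustness experiments over the 10-50 dB range could be set up by mixing noise into clean utterances at a controlled SNR. The sketch below assumes additive white Gaussian noise; the abstract does not state which noise type or mixing procedure was used.

```python
import numpy as np

def add_noise_at_snr(clean, snr_db, seed=None):
    """Mix white Gaussian noise into a clean signal at a target SNR (dB).
    Hypothetical setup code for the 10-50 dB robustness experiments."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(clean.shape)
    # scale the noise so that 10*log10(P_signal / P_noise) equals snr_db
    p_sig = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise
```

Sweeping `snr_db` over, say, `[10, 20, 30, 40, 50]` and re-running the classifier on the noisy copies gives an accuracy-vs-SNR curve of the kind such an evaluation would report.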
|Committee:||Choi, Hyeong-Ah, Martin, Dianne, Monteleoni, Claire, Oertel, Carsten K., Youssef, Abdou|
|School:||The George Washington University|
|School Location:||United States -- District of Columbia|
|Source:||DAI-B 73/07(E), Dissertation Abstracts International|
|Keywords:||Feature extraction, Machine learning, Speech recognition, Statistical learning, Support vector machines|