—In recent years, work requiring human-machine interaction, such as speech recognition and emotion recognition from speech, has been increasing. Beyond recognizing the words themselves, features of the conversation such as melody, emotion, pitch, and emphasis are also studied. Research has shown that meaningful results can be achieved using the prosodic features of speech. In this paper we perform the pre-processing necessary for emotion recognition from speech data and extract features from the speech signal. To recognize emotion, Mel Frequency Cepstral Coefficients (MFCC) are extracted from the signals and classified with the k-NN algorithm.
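The pipeline the abstract describes (frame the signal, compute MFCCs, classify with k-NN) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame size, hop length, filterbank size, and the synthetic sine-wave "utterances" used in the demo are all assumptions for the sake of a self-contained example.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Compute MFCCs: frame -> window -> power spectrum -> mel filterbank -> log -> DCT."""
    # Frame the signal with a Hamming window.
    frames = np.array([signal[s:s + n_fft] * np.hamming(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel-spaced filterbank.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        if c > l:
            fbank[i - 1, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fbank[i - 1, c:r] = (r - np.arange(c, r)) / (r - c)
    log_energies = np.log(power @ fbank.T + 1e-10)
    # DCT decorrelates the filterbank energies; keep the first n_ceps coefficients.
    return dct(log_energies, type=2, axis=1, norm='ortho')[:, :n_ceps]

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training vectors."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Demo on synthetic tones standing in for two classes of utterance.
sr = 16000
t = np.arange(sr) / sr
feats_low = mfcc(np.sin(2 * np.pi * 220 * t), sr)
feats_high = mfcc(np.sin(2 * np.pi * 880 * t), sr)
# One feature vector per utterance: the mean MFCC over all frames.
X = np.vstack([feats_low.mean(axis=0), feats_high.mean(axis=0)])
y = np.array(['low', 'high'])
```

In practice the training set would hold one averaged (or otherwise pooled) MFCC vector per labelled emotional utterance, and `knn_predict` would vote over the k closest of those.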
—Speech processing, speech recognition,
emotion recognition, MFCC.
S. Demircan is with the Department of Computer Engineering, Selcuk University, Konya, Türkiye (e-mail: firstname.lastname@example.org).
Cite: S. Demircan and H. Kahramanlı, "Feature Extraction from Speech Data for Emotion Recognition," Journal of Advances in Computer Networks, vol. 2, no. 1, pp. 28-30, 2014.