Cepstral and Long-Term Features for Emotion Recognition

From LRDE


Abstract

In this paper, we describe systems that were developed for the Open Performance Sub-Challenge of the INTERSPEECH 2009 Emotion Challenge. We participate in both two-class and five-class emotion detection. For the two-class problem, the best performance is obtained by logistic regression fusion of three systems. These systems use short- and long-term speech features. This fusion achieved an absolute improvement of 2.6% in the unweighted recall value compared with [6]. For the five-class problem, we submitted two individual systems: a cepstral GMM vs. a long-term GMM-UBM. The best result comes from the cepstral GMM and produced an absolute improvement of 3.5% compared to [6].
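The score-level fusion mentioned above can be illustrated with a minimal sketch: a logistic regression trained on the per-utterance scores produced by the individual subsystems. This is a hypothetical example (the subsystem scores and class labels below are placeholders), not the authors' implementation.

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  # Placeholder scores for N development utterances from three subsystems
  # (e.g. a cepstral system, a long-term feature system, and a third system).
  rng = np.random.default_rng(0)
  N = 200
  dev_scores = rng.normal(size=(N, 3))      # one column of scores per subsystem
  dev_labels = rng.integers(0, 2, size=N)   # two-class task labels (placeholder)

  # Learn the fusion weights on held-out development data.
  fusion = LogisticRegression()
  fusion.fit(dev_scores, dev_labels)

  # At test time, the three subsystem scores are fused into a single posterior.
  test_scores = rng.normal(size=(1, 3))
  posterior = fusion.predict_proba(test_scores)[0, 1]
  print(f"fused P(class 1 | scores) = {posterior:.3f}")

The fused posterior is then thresholded (or the argmax taken) to obtain the final class decision.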


Bibtex (lrde.bib)

@InProceedings{	  dehak.09.interspeechb,
  author	= {Pierre Dumouchel and Najim Dehak and Yazid Attabi and
		  R\'eda Dehak and Narj\`es Boufaden},
  title		= {Cepstral and Long-Term Features for Emotion Recognition},
  booktitle	= {Interspeech},
  year		= 2009,
  month		= sep,
  note		= {Open Performance Sub-Challenge Prize},
  abstract	= {In this paper, we describe systems that were developed for
		  the Open Performance Sub-Challenge of the INTERSPEECH 2009
		  Emotion Challenge. We participate in both two-class and
		  five-class emotion detection. For the two-class problem,
		  the best performance is obtained by logistic regression
		  fusion of three systems. These systems use short- and
		  long-term speech features. This fusion achieved an absolute
		  improvement of 2.6\% on the unweighted recall value
		  compared with [6]. For the five-class problem, we submitted
		  two individual systems: cepstral GMM vs. long-term GMM-UBM.
		  The best result comes from a cepstral GMM and produced an
		  absolute improvement of 3.5\% compared to [6].}
}