Publications/dehak.14.odyssey

From LRDE


Latest revision as of 17:00, 27 May 2021

Abstract

In this paper, we explored the use of Gaussian Mixture Model (GMM) weights adaptation for speaker verification. We compared two different subspace weight adaptation approaches: Subspace Multinomial Model (SMM) and Non-Negative factor Analysis (NFA). Both techniques achieved similar results and seemed to outperform the retraining maximum likelihood (ML) weight adaptation. However, the training process for the NFA approach is substantially faster than the SMM technique. The i-vector fusion between each weight adaptation approach and the classical i-vector yielded slight improvements on the telephone part of the NIST 2010 Speaker Recognition Evaluation dataset.
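The ML weight adaptation mentioned above as the baseline can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes a diagonal-covariance UBM and re-estimates only the mixture weights from the per-frame posteriors (zeroth-order statistics), keeping means and covariances fixed at their UBM values. All names (`ml_weight_adaptation`, argument shapes) are hypothetical.

```python
import numpy as np

def ml_weight_adaptation(frames, means, covs, ubm_weights):
    """Maximum-likelihood re-estimation of GMM weights.

    frames:       (T, D) feature vectors of one utterance
    means, covs:  (C, D) UBM component means and diagonal covariances
    ubm_weights:  (C,)   UBM mixture weights
    Returns the adapted weights (C,), which sum to 1.
    """
    T, _ = frames.shape
    C = ubm_weights.shape[0]
    # Log-density of each frame under each diagonal Gaussian component.
    log_gauss = np.empty((T, C))
    for c in range(C):
        diff = frames - means[c]
        log_gauss[:, c] = -0.5 * np.sum(
            diff**2 / covs[c] + np.log(2.0 * np.pi * covs[c]), axis=1
        )
    # Posterior responsibilities gamma_t(c), computed in the log domain
    # and shifted by the row maximum for numerical stability.
    log_post = np.log(ubm_weights) + log_gauss
    log_post -= np.max(log_post, axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)
    # Zeroth-order statistics; normalising by T gives the ML weights.
    counts = post.sum(axis=0)
    return counts / T
```

The subspace approaches compared in the paper (SMM and NFA) go further: instead of re-estimating the weights freely as above, they constrain the (log-)weight vector to lie in a low-dimensional subspace, which is what makes the resulting coordinates usable like i-vectors.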


Bibtex (lrde.bib)

@InProceedings{	  dehak.14.odyssey,
  author	= {Najim Dehak and O. Plchot and M.H. Bahari and L. Burget
		  and H. Van hamme and R\'eda Dehak},
  title		= {{GMM} Weights Adaptation Based on Subspace Approaches for
		  Speaker Verification},
  booktitle	= {Odyssey 2014, The Speaker and Language Recognition
		  Workshop},
  year		= 2014,
  address	= {Joensuu, Finland},
  month		= jun,
  abstract	= {In this paper, we explored the use of Gaussian Mixture
		  Model (GMM) weights adaptation for speaker verification.
		  We compared two different subspace weight adaptation
		  approaches: Subspace Multinomial Model (SMM) and
		  Non-Negative factor Analysis (NFA). Both techniques
		  achieved similar results and seemed to outperform the
		  retraining maximum likelihood (ML) weight adaptation.
		  However, the training process for the NFA approach is
		  substantially faster than the SMM technique. The i-vector
		  fusion between each weight adaptation approach and the
		  classical i-vector yielded slight improvements on the
		  telephone part of the NIST 2010 Speaker Recognition
		  Evaluation dataset.},
  pages		= {48--53}
}