CSI Seminar 2016-01-20
10h30 Speaker Diarization based on the Mel Frequency Cepstral Coefficients – Fanny Riols
Speaker diarization has emerged as an increasingly important and dedicated domain of speech research. It relates to the problem of determining "who spoke when?", that is, of finding the intervals during which each speaker is active. By computing Mel Frequency Cepstral Coefficient (MFCC) features from a given speech signal and applying Independent Component Analysis (ICA) to these features, we are able to segment the speech with the help of a Hidden Markov Model (HMM). We will use this algorithm for speaker diarization in a verification system with multi-speaker audio data, such as the interview and microphone segments of the NIST Speaker Recognition Evaluation.
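The MFCC front end mentioned above can be sketched in plain NumPy. The frame length, hop size, and filter counts below are common defaults chosen for illustration, not values taken from the talk:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sample_rate):
    # Triangular filters evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fbank[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[i - 1, k] = (r - k) / max(r - c, 1)
    return fbank

def mfcc(signal, sample_rate=16000, frame_len=400, hop=160,
         n_fft=512, n_filters=26, n_ceps=13):
    # Slice the signal into overlapping frames and apply a Hamming window.
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    frames = np.stack([signal[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Log mel-filterbank energies.
    energies = np.log(power @ mel_filterbank(n_filters, n_fft, sample_rate).T
                      + 1e-10)
    # DCT-II decorrelates the energies; keep the first n_ceps coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1)
                                  / (2 * n_filters)))
    return energies @ dct.T
```

One MFCC vector per frame then serves as the observation sequence for the HMM segmentation step.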
11h00 Improving the determinization of Büchi automata – Alexandre Lewkowicz
Safra's algorithm is a well-known construction method which produces a deterministic Rabin automaton from a non-deterministic Büchi automaton. A variant of this method creates deterministic automata with a parity acceptance condition. However, these methods produce automata with 2^O(n log n) states. There already exist improvements that help reduce the number of states in many cases. In this paper we present two new strategies to help construct smaller deterministic automata. The first strategy uses the strongly connected components and tracks a different Safra run for each of these components separately. The second strategy uses the information that bisimulation gives us to help remove redundant states. This enables us to avoid searching multiple paths which are equivalent and hence reduces the final number of states. We show how these two strategies help reduce the resulting automaton and prove their correctness. We also provide some benchmarks to show that the resulting automata are almost always smaller. Finally, we compare our results to a tool called ltl2dstar which converts LTL formulas to deterministic Rabin automata.
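The bisimulation reduction of the second strategy boils down to partition refinement: states start partitioned by acceptance and blocks are split until no two states in a block disagree on where their transitions lead. The following is a minimal illustration for deterministic, complete automata, independent of the paper's actual data structures:

```python
def bisimulation_classes(states, alphabet, delta, accepting):
    # Start from the accepting / non-accepting split.
    partition = {s: int(s in accepting) for s in states}
    while True:
        # A state's signature: its own block plus the block reached
        # on each letter of the alphabet.
        signature = {s: (partition[s],
                         tuple(partition[delta[s][a]] for a in alphabet))
                     for s in states}
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new = {s: ids[signature[s]] for s in states}
        if new == partition:       # fixpoint: no block was split
            return partition
        partition = new
```

States sharing a class are bisimilar and can be merged without changing the recognized language.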
11h30 Random automata and path generation in Vcsn – Antoine Pietri
This report presents the implementation of an efficient and generic way to generate random weighted automata. To do so, we use previously established relations between some known sets and the set of accessible DFAs with n states. By extending these relations to the weighted case, we generalize the presented algorithm and show an implementation in the Vcsn platform.
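For contrast, the naive approach, drawing a uniform transition table and keeping the part reachable from the initial state, is easy to write but does not produce a uniform distribution over accessible DFAs, which is what motivates the combinatorial relations used in the report. A hypothetical sketch of the naive approach:

```python
import random

def random_accessible_dfa(n, alphabet, seed=None):
    # Draw a uniform random transition table over n states...
    rng = random.Random(seed)
    delta = {(q, a): rng.randrange(n) for q in range(n) for a in alphabet}
    # ...then keep only the part reachable from state 0.
    reachable, todo = {0}, [0]
    while todo:
        q = todo.pop()
        for a in alphabet:
            r = delta[(q, a)]
            if r not in reachable:
                reachable.add(r)
                todo.append(r)
    return reachable, {k: v for k, v in delta.items() if k[0] in reachable}
```

The reachable part may have fewer than n states, and accessible DFAs are not all equally likely under this scheme, hence the need for the dedicated algorithm.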
12h00 Vcsn Meets Linguistics – Sébastien Piat
Linguistics is one of the fields of application of automata theory, the latter being used to represent and manipulate languages. Vcsn had yet to be used for such applications. The recent implementation of an efficient composition made it possible to use the library to create a translator based on transducers. This work will present the different steps of the implementation of a translator from texting language ("bjr") to French ("bonjour") using Vcsn, and the pipeline of the translation process using automata. We will go through the difficulties encountered during this implementation, from the absence of some algorithms to the poor performance of others.
12h30 Efficient Transducer Composition in Vcsn – Valentin Tolmer
Transducers are used in many contexts, such as speech recognition or measuring the similarity between proteins. One of the core algorithms for manipulating them is composition. This work presents the basic composition algorithm, then its extension to transducers with spontaneous transitions. A lazy adaptation of the algorithm is then proposed for both the composition and the necessary pre-processing (insplitting). Naïve variadic composition is shown to be useless in reducing the amount of computation. Finally, some benchmarks show how the implementation of composition in Vcsn compares with OpenFST.
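The basic composition algorithm is a product construction: a pair of states advances when the output label of the first transducer matches the input label of the second. A toy sketch for epsilon-free, letter-to-letter transducers, using a plain-tuple representation assumed here for illustration (not Vcsn's internals):

```python
def compose(t1, t2):
    # Each transducer is (initial, finals, transitions), transitions
    # being tuples (src, in_label, out_label, dst).
    i1, f1, d1 = t1
    i2, f2, d2 = t2
    init = (i1, i2)
    trans, seen, todo = set(), {init}, [init]
    while todo:
        p1, p2 = todo.pop()
        for (s1, a, b, q1) in d1:
            if s1 != p1:
                continue
            for (s2, b2, c, q2) in d2:
                # Match T1's output label with T2's input label.
                if s2 == p2 and b2 == b:
                    dst = (q1, q2)
                    trans.add(((p1, p2), a, c, dst))
                    if dst not in seen:
                        seen.add(dst)
                        todo.append(dst)
    finals = {s for s in seen if s[0] in f1 and s[1] in f2}
    return init, finals, trans
```

Handling spontaneous transitions requires the epsilon filter and the insplitting pre-processing discussed in the talk; the lazy variant only expands a pair state when it is actually queried.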