The use of temporal speech and lip information for multi-modal speaker identification via multi-stream HMMs

Wark, T., Sridharan, S., & Chandran, V. (2000) The use of temporal speech and lip information for multi-modal speaker identification via multi-stream HMMs. In 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing: Image and Multidimensional Signal Processing / Multimedia Signal Processing, Istanbul, pp. 2389-2392.

Abstract

This paper investigates the use of temporal lip information, in conjunction with speech information, for robust, text-dependent speaker identification. We propose that significant speaker-dependent information can be obtained from moving lips, enabling speaker recognition systems to remain highly robust in the presence of noise. The fusion structure for the audio and visual information is based on multi-stream hidden Markov models (MSHMMs), with audio and visual features forming two independent data streams. Multi-modal MSHMMs have recently been applied successfully to the task of speech recognition. The use of temporal lip information for speaker identification has been explored previously (T.J. Wark et al., 1998); however, that work was restricted to output fusion via single-stream HMMs. We present an extension to this previous work and show that the MSHMM is a valid structure for multi-modal speaker identification.
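
As background on the fusion structure described in the abstract: a multi-stream HMM typically combines the per-stream state emission likelihoods through a weighted product (log-linear) rule, and identification then selects the speaker model with the highest fused score. The Python sketch below illustrates this standard combination; the stream weights, score values, and function names are illustrative assumptions, not details taken from the paper.

def combined_log_likelihood(logL_audio, logL_video, w_audio=0.7, w_video=0.3):
    """Weighted log-linear fusion of audio and visual stream log-likelihoods.

    In the probability domain this is b(o) = b_A(o_A)**w_A * b_V(o_V)**w_V,
    the standard multi-stream HMM combination rule; the weights here are
    assumed values for illustration only.
    """
    return w_audio * logL_audio + w_video * logL_video

def identify_speaker(scores):
    """Return the speaker whose models yield the highest fused score.

    `scores` maps speaker id -> (audio log-likelihood, visual log-likelihood),
    e.g. as produced by decoding a test utterance against each speaker's HMMs.
    """
    return max(scores, key=lambda spk: combined_log_likelihood(*scores[spk]))

# Toy example with made-up scores for three enrolled speakers:
scores = {"spk1": (-1200.0, -310.0),
          "spk2": (-1150.0, -340.0),
          "spk3": (-1300.0, -295.0)}
print(identify_speaker(scores))  # -> spk2 under the default weights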

Impact and interest:

Citation counts are sourced monthly from the Scopus and Web of Science® citation databases.

These databases index different subsets of the available publications and different time periods, so the citation count from each is usually different. Some works are not in either database, in which case no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.

Citation counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.

Full-text downloads:

74 since deposited on 16 Oct 2011
38 in the past twelve months

Full-text downloads displays the total number of times this work's files (e.g., a PDF) have been downloaded from QUT ePrints, as well as the number of downloads in the previous 365 days. If a work has more than one file, the count includes downloads of all files.

ID Code: 45592
Item Type: Conference Paper
Keywords: audio-visual systems, feature extraction, hidden Markov models, image recognition, speaker recognition, audio features, audio-visual information, fusion structure, independent data streams, moving lips, multi-modal speaker identification, multi-stream hidden Markov models, noise, output fusion, robust text-dependent speaker identification, speaker recognition systems, speaker-dependent information, temporal lip information, temporal speech information, visual features
DOI: 10.1109/ICASSP.2000.859322
ISBN: 0780362934
ISSN: 1520-6149
Subjects: Australian and New Zealand Standard Research Classification > INFORMATION AND COMPUTING SCIENCES (080000) > COMPUTER SOFTWARE (080300) > Computer System Security (080303)
Divisions: Past > QUT Faculties & Divisions > Faculty of Built Environment and Engineering
Past > Institutes > Information Security Institute
Past > Schools > School of Engineering Systems
Copyright Owner: Copyright 2000 IEEE
Copyright Statement: Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Deposited On: 17 Oct 2011 07:48
Last Modified: 17 Oct 2011 12:09
