QUT ePrints

The use of speech and lip modalities for robust speaker verification under adverse conditions

Wark, T. J., Sridharan, S., & Chandran, V. (1999). The use of speech and lip modalities for robust speaker verification under adverse conditions. In IEEE International Conference on Multimedia Computing and Systems, 1999, Florence, Italy, pp. 812-816.


Abstract

This paper investigates the use of lip information, in conjunction with speech information, for robust speaker verification in the presence of background noise. We have previously shown (Int. Conf. on Acoustics, Speech and Signal Proc., vol. 6, pp. 3693-3696, May 1998) that features extracted from a speaker's moving lips carry speaker dependencies that are complementary to speech features. We demonstrate that the fusion of lip and speech information yields a highly robust speaker verification system which outperforms either subsystem individually. We present a new technique for determining the weighting to be applied to each modality so as to optimize the performance of the fused system. Given a correct weighting, lip information is shown to be highly effective in reducing the false acceptance and false rejection error rates in the presence of background noise.
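The abstract describes weighted fusion of speech and lip modalities evaluated by false acceptance and false rejection rates. The sketch below is a minimal illustration of score-level fusion with a single weight alpha and synthetic scores; these are assumptions for illustration only, and the paper's own technique for determining the modality weighting is not detailed in this abstract.

    # Hedged sketch: weighted score-level fusion of two verification modalities.
    # alpha, the scoring model, and the synthetic data are illustrative assumptions,
    # not the method described in the paper.
    import numpy as np

    def fuse_scores(speech_scores, lip_scores, alpha):
        """Linearly combine per-trial scores: alpha=1.0 uses speech only, alpha=0.0 lips only."""
        return alpha * np.asarray(speech_scores) + (1.0 - alpha) * np.asarray(lip_scores)

    def far_frr(scores, labels, threshold):
        """False acceptance / false rejection rates at a decision threshold.
        labels: 1 for genuine (target) trials, 0 for impostor trials."""
        scores = np.asarray(scores)
        labels = np.asarray(labels)
        accept = scores >= threshold
        far = np.mean(accept[labels == 0])      # impostors wrongly accepted
        frr = np.mean(~accept[labels == 1])     # genuine speakers wrongly rejected
        return far, frr

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic scores: genuine trials score higher than impostor trials on average.
        # Speech scores are made noisier, standing in for degradation by background noise.
        speech = np.concatenate([rng.normal(1.0, 1.5, 500), rng.normal(0.0, 1.5, 500)])
        lip = np.concatenate([rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)])
        labels = np.concatenate([np.ones(500, dtype=int), np.zeros(500, dtype=int)])

        for alpha in (1.0, 0.0, 0.5):
            fused = fuse_scores(speech, lip, alpha)
            far, frr = far_frr(fused, labels, threshold=0.5)
            print(f"alpha={alpha:.1f}  FAR={far:.3f}  FRR={frr:.3f}")

In this toy setting, the balanced weighting tends to give lower error rates than either modality alone, which mirrors the qualitative claim in the abstract; the numbers themselves are not from the paper.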

Impact and interest:

1 citation in Web of Science®
Search Google Scholar™

Citation counts are sourced monthly from Scopus and Web of Science® citation databases.

These databases contain citations from different subsets of available publications and different time periods and thus the citation count from each is usually different. Some works are not in either database and no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.

Citation counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.

Full-text downloads:

57 since deposited on 16 Oct 2011
32 in the past twelve months

Full-text downloads displays the total number of times this work's files (e.g., a PDF) have been downloaded from QUT ePrints, as well as the number of downloads in the previous 365 days. The count includes downloads for all files if a work has more than one.

ID Code: 45593
Item Type: Conference Paper
Keywords: feature extraction, image recognition, multimedia computing, noise, speaker recognition, adverse conditions, background noise, error rates, false acceptance rate, false rejection rate, lip modalities, lip movement, lip reading, performance optimization, robust speaker verification, speaker dependencies, speech features, speech modalities, weighting
DOI: 10.1109/MMCS.1999.779305
Divisions: Past > QUT Faculties & Divisions > Faculty of Built Environment and Engineering
Past > Schools > School of Engineering Systems
Copyright Owner: Copyright 1999 IEEE
Copyright Statement: © 1999 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
Deposited On: 17 Oct 2011 08:44
Last Modified: 17 Oct 2011 08:44

