QUT ePrints

Can audio-visual speech recognition outperform acoustically enhanced speech recognition in automotive environment?

Navarathna, Rajitha, Kleinschmidt, Tristan, Dean, David B., Sridharan, Sridha, & Lucey, Patrick J. (2011) Can audio-visual speech recognition outperform acoustically enhanced speech recognition in automotive environment? In Interspeech 2011, 27-31 August 2011, Firenze Fiera, Florence.

Abstract

The use of visual features in the form of lip movements to improve the performance of acoustic speech recognition has been shown to work well, particularly in noisy acoustic conditions. However, it is not known whether this technique can outperform speech recognition incorporating well-known acoustic enhancement techniques, such as spectral subtraction or multi-channel beamforming. This is an important question, particularly in the automotive environment, where it bears on the design of an efficient human-vehicle computer interface. We perform a variety of speech recognition experiments on a challenging automotive speech dataset, and the results show that synchronous HMM-based audio-visual fusion can outperform traditional single-channel as well as multi-channel acoustic speech enhancement techniques. We also show that further improvement in recognition performance can be obtained by fusing speech-enhanced audio with the visual modality, demonstrating the complementary nature of the two robust speech recognition approaches.
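As context for the spectral-subtraction baseline named in the abstract, the sketch below shows a minimal magnitude spectral subtraction enhancer. It is not the authors' configuration: the frame size, spectral floor, and noise-estimation strategy (an average over a noise-only segment) are illustrative assumptions only.

```python
import numpy as np

def spectral_subtraction(noisy, noise_only, n_fft=512, floor=0.02):
    """Illustrative magnitude spectral subtraction.

    noisy      : 1-D array of noisy speech samples
    noise_only : 1-D array of noise-only samples (e.g. a silent lead-in)
    floor      : spectral floor, limits musical-noise artifacts
    """
    hop = n_fft // 2
    window = np.hanning(n_fft)

    # Estimate the average noise magnitude spectrum from the noise-only segment.
    noise_mags = [np.abs(np.fft.rfft(noise_only[i:i + n_fft] * window))
                  for i in range(0, len(noise_only) - n_fft, hop)]
    noise_mag = np.mean(noise_mags, axis=0)

    out = np.zeros(len(noisy))
    wsum = np.zeros(len(noisy))  # window normalization for overlap-add
    for i in range(0, len(noisy) - n_fft, hop):
        spec = np.fft.rfft(noisy[i:i + n_fft] * window)
        mag = np.abs(spec)
        # Subtract the noise estimate; floor to avoid negative magnitudes.
        clean_mag = np.maximum(mag - noise_mag, floor * mag)
        # Reuse the noisy phase, as is standard for spectral subtraction.
        frame = np.fft.irfft(clean_mag * np.exp(1j * np.angle(spec)), n_fft)
        out[i:i + n_fft] += frame * window
        wsum[i:i + n_fft] += window ** 2
    return out / np.maximum(wsum, 1e-8)
```

The enhanced waveform could then be fed to a standard acoustic front end (e.g. MFCC extraction) in place of the noisy signal, which is the role spectral subtraction plays as a baseline in the comparison above.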

Impact and interest:

1 citation in Scopus
Search Google Scholar™
0 citations in Web of Science®

Citation counts are sourced monthly from the Scopus and Web of Science® citation databases.

These databases contain citations from different subsets of available publications and different time periods and thus the citation count from each is usually different. Some works are not in either database and no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.

Citation counts from the Google Scholar™ indexing service can be viewed via the linked Google Scholar™ search.

Full-text downloads:

94 since deposited on 08 Sep 2011
18 in the past twelve months

Full-text downloads displays the total number of times this work's files (e.g., a PDF) have been downloaded from QUT ePrints, as well as the number of downloads in the previous 365 days. The count includes downloads for all files if a work has more than one.

ID Code: 45770
Item Type: Conference Paper
Keywords: Speech enhancement, robust speech recognition, audio-visual automatic speech recognition, synchronous HMM
Subjects: Australian and New Zealand Standard Research Classification > ENGINEERING (090000) > ELECTRICAL AND ELECTRONIC ENGINEERING (090600) > Signal Processing (090609)
Divisions: Past > QUT Faculties & Divisions > Faculty of Built Environment and Engineering
Past > Institutes > Information Security Institute
Past > Schools > School of Engineering Systems
Copyright Owner: Copyright 2011 [please consult the author]
Deposited On: 09 Sep 2011 08:29
Last Modified: 18 Oct 2011 21:05
