QUT ePrints

Recognising audio-visual speech in vehicles using the AVICAR database

Navarathna, Rajitha, Dean, David B., Lucey, Patrick J., Sridharan, Sridha, & Fookes, Clinton B. (2010) Recognising audio-visual speech in vehicles using the AVICAR database. In Tabain, Marija, Fletcher, Janet, Grayden, David, Hajek, John, & Butcher, Andy (Eds.) Proceedings of the 13th Australasian International Conference on Speech Science and Technology, The Australasian Speech Science & Technology Association, Melbourne, Vic, pp. 110-113.

Abstract

Interacting with technology within a vehicle environment using a voice interface can greatly reduce the effects of driver distraction. Most current approaches to this problem utilise only the audio signal, making them susceptible to acoustic noise. An obvious way to circumvent this is to use the visual modality in addition to the audio. However, capturing, storing and distributing audio-visual data in a vehicle environment is very costly and difficult. One dataset currently available for such research is the AVICAR [1] database. Unfortunately, this database has been largely unusable due to a timing mismatch between the two streams and, in addition, the lack of an available protocol. We have overcome these problems by re-synchronising the streams for the phone-number portion of the dataset and establishing a protocol for further research.

This paper presents the first audio-visual results on this dataset for speaker-independent speech recognition. We hope this will serve as a catalyst for future research in this area.
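The keyword list mentions multi-stream HMMs, the usual way of fusing audio and visual evidence in audio-visual speech recognition: per-stream emission log-likelihoods are combined with exponential stream weights that sum to one. The sketch below illustrates only that generic fusion rule, not the paper's system; the feature dimensionalities, stream weight and Gaussian parameters are illustrative assumptions, not values taken from the paper.

import numpy as np

def gaussian_log_likelihood(obs, mean, var):
    # Log-likelihood of an observation under a diagonal-covariance Gaussian.
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (obs - mean) ** 2 / var)

def multistream_log_likelihood(audio_obs, video_obs,
                               audio_mean, audio_var,
                               video_mean, video_var,
                               audio_weight=0.7):
    # Weighted combination of the audio and visual emission log-likelihoods,
    # with stream weights lambda_a + lambda_v = 1 (the standard multi-stream
    # HMM fusion rule). The weight value here is an assumption.
    video_weight = 1.0 - audio_weight
    ll_audio = gaussian_log_likelihood(audio_obs, audio_mean, audio_var)
    ll_video = gaussian_log_likelihood(video_obs, video_mean, video_var)
    return audio_weight * ll_audio + video_weight * ll_video

# Illustrative call with random vectors standing in for real features.
rng = np.random.default_rng(0)
audio_obs = rng.normal(size=39)   # e.g. 39-dim MFCC-style audio features (assumed)
video_obs = rng.normal(size=30)   # e.g. 30-dim visual lip features (assumed)
score = multistream_log_likelihood(
    audio_obs, video_obs,
    audio_mean=np.zeros(39), audio_var=np.ones(39),
    video_mean=np.zeros(30), video_var=np.ones(30),
    audio_weight=0.7,
)
print(score)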

Impact and interest:

Citation counts are sourced monthly from the Scopus and Web of Science® citation databases.

These databases contain citations from different subsets of available publications and different time periods, so the citation count from each is usually different. Some works are not in either database, in which case no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.

Citation counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.

Full-text downloads:

122 since deposited on 07 Feb 2011
46 in the past twelve months

Full-text downloads displays the total number of times this work’s files (e.g., a PDF) have been downloaded from QUT ePrints, as well as the number of downloads in the previous 365 days. The count includes downloads for all files if a work has more than one.

ID Code: 39933
Item Type: Conference Paper
Additional URLs:
Keywords: AVICAR Database, Audio-visual Automatic Speech Recognition, Multi-stream HMM, Feature Extraction
ISBN: 9780958194631
Subjects: Australian and New Zealand Standard Research Classification > ENGINEERING (090000) > ELECTRICAL AND ELECTRONIC ENGINEERING (090600) > Signal Processing (090609)
Divisions: Past > QUT Faculties & Divisions > Faculty of Built Environment and Engineering
Past > Institutes > Information Security Institute
Past > Schools > School of Engineering Systems
Copyright Owner: Copyright 2010 The Australasian Speech Science & Technology Association
Deposited On: 08 Feb 2011 08:12
Last Modified: 01 Mar 2012 00:31
