Recognising audio-visual speech in vehicles using the AVICAR database
Navarathna, Rajitha, Dean, David B., Lucey, Patrick J., Sridharan, Sridha, & Fookes, Clinton B. (2010) Recognising audio-visual speech in vehicles using the AVICAR database. In Tabain, Marija, Fletcher, Janet, Grayden, David, Hajek, John, & Butcher, Andy (Eds.) Proceedings of the 13th Australasian International Conference on Speech Science and Technology, The Australasian Speech Science & Technology Association, Melbourne, Vic, pp. 110-113.
Interacting with technology within a vehicle environment using a voice interface can greatly reduce the effects of driver distraction. Most current approaches to this problem utilise only the audio signal, making them susceptible to acoustic noise. An obvious way to circumvent this is to use the visual modality as well. However, capturing, storing and distributing audio-visual data in a vehicle environment is very costly and difficult. One dataset currently available for such research is the AVICAR database. Unfortunately, this database is largely unusable due to a timing mismatch between the two streams; in addition, no evaluation protocol is available. We have overcome this problem by re-synchronising the streams for the phone-number portion of the dataset and establishing a protocol for further research.
This paper presents the first audio-visual results on this dataset for speaker-independent speech recognition. We hope this will serve as a catalyst for future research in this area.
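The re-synchronisation step described in the abstract amounts to estimating the temporal offset between the audio and visual streams. A common way to do this (a minimal sketch only; the function name, the use of per-frame energy envelopes, and the cross-correlation search are illustrative assumptions, not the authors' actual method) is to slide one stream's per-frame envelope against the other's and keep the lag with the highest correlation:

```python
import numpy as np

def estimate_av_offset(audio_env, visual_env, max_lag):
    """Illustrative sketch: estimate the frame lag between an audio
    energy envelope and a visual (e.g. mouth-motion) envelope by
    maximising their cross-correlation over lags in [-max_lag, max_lag].
    Returns the lag such that a[t + lag] best matches v[t]."""
    # Normalise both envelopes to zero mean, unit variance so scores
    # at different lags are comparable.
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-9)
    v = (visual_env - visual_env.mean()) / (visual_env.std() + 1e-9)
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            score = float(np.dot(a[lag:], v[:len(v) - lag]))
        else:
            score = float(np.dot(a[:lag], v[-lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

With the offset known, one stream can simply be shifted (and trimmed) before the two feature streams are fed to a multi-stream HMM recogniser.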
Impact and interest:
Citation counts are sourced monthly from the Scopus and Web of Science® citation databases.
These databases contain citations from different subsets of available publications and different time periods, so the citation count from each is usually different. Some works are not in either database, in which case no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.
Citation counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.
Full-text downloads shows the total number of times this work’s files (e.g., a PDF) have been downloaded from QUT ePrints, as well as the number of downloads in the previous 365 days. The count includes downloads for all files if a work has more than one.
|Item Type:||Conference Paper|
|Keywords:||AVICAR Database, Audio-visual Automatic Speech Recognition, Multi-stream HMM, Feature Extraction|
|Subjects:||Australian and New Zealand Standard Research Classification > ENGINEERING (090000) > ELECTRICAL AND ELECTRONIC ENGINEERING (090600) > Signal Processing (090609)|
|Divisions:||Past > QUT Faculties & Divisions > Faculty of Built Environment and Engineering
Past > Institutes > Information Security Institute
Past > Schools > School of Engineering Systems|
|Copyright Owner:||Copyright 2010 The Australasian Speech Science & Technology Association|
|Deposited On:||07 Feb 2011 22:12|
|Last Modified:||29 Feb 2012 14:31|