Audio-Visual ASR from Multiple Views inside Smart Rooms

Potamianos, Gerasimos & Lucey, Patrick J. (2006) Audio-Visual ASR from Multiple Views inside Smart Rooms. In Henderson, T.C. & Hanebeck, U. (Eds.) 2006 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, 3-6 September 2006, Heidelberg, Germany.

Visual information from a speaker's mouth region is known to improve automatic speech recognition robustness. However, the vast majority of audio-visual automatic speech recognition (AVASR) studies assume frontal images of the speaker's face, which is not always the case in realistic human-computer interaction (HCI) scenarios. One such case of interest is HCI inside smart rooms equipped with pan-tilt-zoom (PTZ) cameras that closely track the subject's head. However, since these cameras are fixed in space, they cannot always obtain frontal views of the speaker. Clearly, AVASR from non-frontal views is required, as well as fusion of multiple camera views where available. In this paper, we report our preliminary work on this subject. In particular, we concentrate on two topics: first, the design of an AVASR system that operates on profile face views and its comparison with a traditional frontal-view AVASR system; and second, the fusion of the two systems into a multi-view frontal/profile system. We describe our visual front-end approach for the profile-view system and report experiments on a multi-subject, small-vocabulary, bimodal, multi-sensory database that contains synchronously captured audio with frontal and profile face video, recorded inside the IBM smart room as part of the CHIL project. Our experiments demonstrate that AVASR is possible from profile views, although the visual-modality benefit is reduced compared to frontal video data.
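Multi-view fusion of this kind is commonly realized as multi-stream decision fusion, in which the per-stream classifier log-likelihoods for the audio, frontal-video, and profile-video streams are combined with reliability weights. The sketch below illustrates that general idea only; the stream names, weights, and function are illustrative assumptions, not the specific fusion scheme reported in the paper.

```python
import numpy as np

def fuse_stream_loglikes(loglikes, weights):
    """Combine per-stream log-likelihoods with exponent (reliability) weights.

    loglikes: dict mapping stream name -> array of per-class log-likelihoods
    weights:  dict mapping stream name -> non-negative weight (ideally summing to 1)
    Returns the weighted sum of log-likelihoods per class.
    """
    fused = np.zeros_like(next(iter(loglikes.values())))
    for name, ll in loglikes.items():
        fused += weights[name] * ll
    return fused

# Hypothetical usage with three streams over three candidate classes.
# The weights are illustrative; in practice they would be tuned on held-out
# data, with a less informative view (e.g. profile) typically weighted lower.
streams = {
    "audio":   np.log(np.array([0.6, 0.3, 0.1])),
    "frontal": np.log(np.array([0.5, 0.4, 0.1])),
    "profile": np.log(np.array([0.4, 0.4, 0.2])),
}
weights = {"audio": 0.6, "frontal": 0.25, "profile": 0.15}
print(fuse_stream_loglikes(streams, weights))
```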

Impact and interest:

5 citations in Scopus

Full-text downloads:

188 since deposited on 04 Sep 2007
66 in the past twelve months

ID Code: 9329
Item Type: Conference Paper
Refereed: Yes
Keywords: audio-visual systems, home automation, human-computer interaction, speech recognition, video cameras
DOI: 10.1109/MFI.2006.265643
ISBN: 1424405661
Divisions: Past > QUT Faculties & Divisions > Faculty of Built Environment and Engineering
Copyright Owner: Copyright 2006 IEEE
Copyright Statement: Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Deposited On: 04 Sep 2007 00:00
Last Modified: 29 Feb 2012 13:26
