QUT ePrints

Patch-Based Representation of Visual Speech

Lucey, Patrick J. & Sridharan, Sridha (2006) Patch-Based Representation of Visual Speech. In Goecke, R., Robles-Kelly, A., & Caelli, T. (Eds.) HCSNet Workshop on the Use of Vision in Human-Computer Interaction (VisHCI 2006), November 1-3, Canberra, Australia.

Abstract

Visual information from a speaker's mouth region is known to improve automatic speech recognition robustness, especially in the presence of acoustic noise. To date, the vast majority of work in this field has viewed these visual features in a holistic manner, which may not take into account the various changes that occur during articulation (the process of changing the shape of the vocal tract using the articulators, i.e. the lips and jaw). Motivated by work being conducted in the fields of audio-visual automatic speech recognition (AVASR) and face recognition using articulatory features (AFs) and patches respectively, we present a proof-of-concept paper which represents the mouth region as an ensemble of image patches. Our experiments show that by dealing with the mouth region in this manner, we are able to extract more speech information from the visual domain. For the task of visual-only speaker-independent isolated digit recognition, we improved the relative word error rate by more than 23% on the CUAVE audio-visual corpus.
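The core idea in the abstract, treating the mouth region as an ensemble of image patches rather than a single holistic feature, can be illustrated with a minimal sketch. Note this is only an assumption-laden illustration: the paper's actual patch geometry, any overlap between patches, and the per-patch features fed to the recogniser are defined in the paper itself, and the `grid` layout below is hypothetical.

```python
import numpy as np

def extract_patches(mouth_roi, grid=(2, 3)):
    """Split a mouth-region image into a grid of non-overlapping patches.

    Illustrative sketch only: the real patch layout used in the paper
    may differ (size, overlap, number of patches).
    """
    rows, cols = grid
    h, w = mouth_roi.shape[:2]
    ph, pw = h // rows, w // cols  # patch height and width
    patches = []
    for r in range(rows):
        for c in range(cols):
            patches.append(mouth_roi[r * ph:(r + 1) * ph,
                                     c * pw:(c + 1) * pw])
    return patches

# Example: a 32x48 grey-level mouth region split into a 2x3 grid
roi = np.zeros((32, 48))
patches = extract_patches(roi)
print(len(patches), patches[0].shape)  # 6 patches, each 16x16 pixels
```

Each patch could then be described independently (e.g. by its own appearance features), letting the recogniser weight regions such as the lip corners or jaw separately instead of pooling the whole mouth into one feature vector.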


Full-text downloads:

114 since deposited on 05 Mar 2008
22 in the past twelve months


ID Code: 12844
Item Type: Conference Paper
Additional URLs:
ISBN: 192068235X
ISSN: 1445-1336
Subjects: Australian and New Zealand Standard Research Classification > INFORMATION AND COMPUTING SCIENCES (080000) > ARTIFICIAL INTELLIGENCE AND IMAGE PROCESSING (080100) > Image Processing (080106)
Australian and New Zealand Standard Research Classification > INFORMATION AND COMPUTING SCIENCES (080000) > ARTIFICIAL INTELLIGENCE AND IMAGE PROCESSING (080100) > Natural Language Processing (080107)
Divisions: Past > QUT Faculties & Divisions > Faculty of Built Environment and Engineering
Past > Institutes > Information Security Institute
Copyright Owner: Copyright 2006 Australian Computer Society
Copyright Statement: Copyright © 2006, Australian Computer Society, Inc. This paper appeared at the HCSNet Workshop on the Use of Vision in HCI (VisHCI 2006), Canberra, Australia. Conferences in Research and Practice in Information Technology (CRPIT), Vol. 56. R. Goecke, A. Robles-Kelly & T. Caelli, Eds. Reproduction for academic, not-for-profit purposes permitted provided this text is included.
Deposited On: 05 Mar 2008
Last Modified: 29 Feb 2012 23:31
