Continuous Pose-Invariant Lipreading
In audio-visual automatic speech recognition (AVASR), no research to date has addressed the problem of recognising visual speech whilst the speaker is moving their head. In this paper, we extend our current system to deal with this task, which we term continuous pose-invariant lipreading. By developing an AVASR system that can handle such a scenario, we believe we are making the system effectively "real-world": it requires little cooperation from the user and can therefore be used in a host of realistic applications (e.g. mobile phones, in-vehicle systems). In this proof-of-concept paper, our experiments on the CUAVE database show that recognising visual speech whilst a speaker is moving their head during the utterance is feasible.