vSpeak: Edge detection based feature extraction for sign to text conversion
Afzal, A.H., Tariq, A., & Nasir, C.S. (2009) vSpeak: Edge detection based feature extraction for sign to text conversion. In 2009 International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV 2009), 13-16 July 2009, Las Vegas, NV.
This paper presents 'vSpeak', the first initiative taken in Pakistan toward ICT-enabled conversion of dynamic Sign Urdu gestures into natural-language sentences. To realize this, vSpeak adopts a novel feature-extraction approach based on edge detection and image compression, whose output feeds an artificial neural network that recognizes the gesture. The technique also handles blurred images. Training and testing are currently being performed on a dataset of 200 patterns of 20 Sign Urdu words, with a target accuracy of 90% or above.
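The pipeline the abstract describes (edge detection, image compression, then an artificial neural network) might be sketched as below. This is a minimal illustration, not the authors' implementation: the Sobel kernels, the 8x8 block-averaged feature grid, and the network dimensions are all assumptions chosen for the sketch.

```python
import numpy as np

def sobel_edges(img):
    """Edge-detection step: gradient magnitude via 3x3 Sobel kernels
    (valid convolution, so the output shrinks by 2 in each dimension).
    The kernel choice is an assumption; the paper does not specify one."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

def compress(edges, size=8):
    """Compression step: block-average the edge map down to a size x size
    grid and flatten it into a feature vector (illustrative stand-in for
    the paper's unspecified image-compression scheme)."""
    h, w = edges.shape
    bh, bw = h // size, w // size
    out = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            out[i, j] = edges[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
    return out.ravel()

def ann_forward(x, W1, b1, W2, b2):
    """Recognition step: forward pass of a small two-layer network that
    maps the feature vector to a probability over gesture classes
    (e.g. 20 outputs for the 20-word Sign Urdu vocabulary)."""
    hidden = np.tanh(x @ W1 + b1)
    logits = hidden @ W2 + b2
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()
```

A gesture frame would flow through as `ann_forward(compress(sobel_edges(img)), ...)`; in the paper the network weights would come from training on the 200-pattern dataset, whereas here they are left to the caller.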
Impact and interest:
Citation counts are sourced monthly from the Scopus and Web of Science® citation databases.
These databases contain citations from different subsets of available publications and different time periods and thus the citation count from each is usually different. Some works are not in either database and no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.
Citation counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.
Item Type:     Conference Paper
Keywords:      Feature extraction, Gesture recognition, Image compression
Divisions:     Current > QUT Faculties and Divisions > Faculty of Health
               Current > Institutes > Institute of Health and Biomedical Innovation
               Current > Schools > School of Public Health & Social Work
Deposited On:  19 Apr 2016 22:48
Last Modified: 22 Apr 2016 01:57