An intuitive motion-based input model for mobile devices

Richards, Mark Andrew (2006) An intuitive motion-based input model for mobile devices. Masters by Research thesis, Queensland University of Technology.


Traditional methods of input on mobile devices are cumbersome and difficult to use. Devices have become smaller while their operating systems have become more complex, to the extent that they are approaching the level of functionality found in desktop operating systems. The buttons and toggle-sticks currently employed by mobile devices are a relatively poor replacement for the keyboard-and-mouse interfaces of their desktop counterparts. Motion-based input offers a more natural alternative: for example, when viewing an image on a device, we should be able to move the device to the left to indicate that we wish the image to be panned in the same direction.

This research investigates a new input model based on the natural hand motions and reactions of users. The model developed by this work uses the generic embedded video cameras available on almost all current-generation mobile devices to determine how the device is being moved and maps this movement to an appropriate action.

Surveys using mobile devices were undertaken to determine the appropriateness and efficacy of such a model, and to collect the foundational data with which to build it. Direct mappings between motions and inputs were achieved by analysing users' motions and reactions in response to different tasks.

Once the framework was complete, a proof of concept was implemented on the Windows Mobile platform. This proof of concept leverages DirectShow and Direct3D to track objects in the video stream, map those objects onto a three-dimensional plane, and determine device movements from this data.
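The core idea of such a pipeline can be illustrated in a minimal sketch: estimate the average displacement of tracked points between two consecutive video frames, then map that displacement to a user-interface action such as panning. The function names, the dead-zone threshold, and the pan-command strings below are illustrative assumptions, not taken from the thesis itself.

```python
# Hypothetical sketch: infer device motion from the displacement of
# tracked feature points between two frames, then map it to a pan action.
# All names and thresholds are illustrative, not from the thesis.

def estimate_shift(prev_pts, curr_pts):
    """Average (dx, dy) displacement of tracked points between frames."""
    n = len(prev_pts)
    dx = sum(c[0] - p[0] for p, c in zip(prev_pts, curr_pts)) / n
    dy = sum(c[1] - p[1] for p, c in zip(prev_pts, curr_pts)) / n
    return dx, dy

def shift_to_pan(dx, dy, dead_zone=2.0):
    """Map an image-plane shift to a pan command.

    When the device moves left, the scene appears to shift right in the
    camera image, so the pan direction mirrors the observed shift. A
    dead zone suppresses jitter from small hand tremors.
    """
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "none"
    if abs(dx) >= abs(dy):
        return "pan_left" if dx > 0 else "pan_right"
    return "pan_up" if dy > 0 else "pan_down"

# Tracked points drift 8 px to the right between frames: the device
# has moved left, so the view pans left.
prev_pts = [(10, 10), (50, 40), (90, 20)]
curr_pts = [(18, 10), (58, 41), (98, 19)]
dx, dy = estimate_shift(prev_pts, curr_pts)
print(shift_to_pan(dx, dy))  # pan_left
```

In the actual proof of concept this role is played by DirectShow (frame capture and object tracking) and Direct3D (mapping tracked objects to a three-dimensional plane); the sketch above only shows the final motion-to-action mapping step.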

This input model holds the promise of being a simpler and more intuitive method for users to interact with their mobile devices, and has the added advantage that no hardware additions or modifications to existing mobile devices are required.


Full-text downloads:

450 since deposited on 03 Dec 2008
11 in the past twelve months


ID Code: 16556
Item Type: QUT Thesis (Masters by Research)
Supervisor: Dunn, Timothy & Pham, Binh
Keywords: Input Model, Human–Computer Interface, Mobile Device, User Interface, Input Devices, Interaction Styles, Automated Survey, DirectX Mobile, DirectShow, Windows Mobile, Computer Vision, Image Processing, Edge Detection, Object Detection, Motion Tracking, Scene Analysis, Human Movement, ARToolkit, Augmented Reality
Department: Faculty of Information Technology
Institution: Queensland University of Technology
Copyright Owner: Copyright Mark Andrew Richards
Deposited On: 03 Dec 2008 04:05
Last Modified: 28 Oct 2011 19:49
