High-fidelity simulation for evaluating robotic vision performance

Skinner, John, Garg, Sourav, Suenderhauf, Niko, Corke, Peter, & Milford, Michael (2016) High-fidelity simulation for evaluating robotic vision performance. In Kwon, D S (Ed.) Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016). Institute of Electrical and Electronics Engineers Inc., United States of America, pp. 2737-2744.


Description

Robotic vision, unlike computer vision, typically involves processing a stream of images from a camera with time-varying pose, operating in an environment with time-varying lighting conditions and moving objects. Repeating robotic vision experiments under identical conditions is often impossible, making it difficult to compare different algorithms. For machine learning applications, a critical bottleneck is the limited amount of real-world image data that can be captured and labelled for both training and testing purposes. In this paper we investigate the use of a photo-realistic simulation tool to address these challenges in three specific domains: robust place recognition, visual SLAM and object recognition. For the first two problems we generate images from a complex 3D environment with systematically varying camera paths, camera viewpoints and lighting conditions. For the first time we are able to systematically characterise the performance of these algorithms as paths and lighting conditions change. In particular, we are able to systematically generate varying camera viewpoint datasets that would be difficult or impossible to generate in the real world. We also compare algorithm results for a camera in a real environment and a simulated camera in a simulation model of that real environment. Finally, for the object recognition domain, we generate labelled image data and characterise the viewpoint dependency of a current convolutional neural network in performing object recognition. Together these results provide a multi-domain demonstration of the benefits of using simulation to characterise and analyse a wide range of robotic vision algorithms.
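The following Python sketch illustrates the kind of systematic camera-viewpoint variation described above: camera poses are sampled on a viewing sphere around a target object, the sort of controlled sweep that is easy in simulation but hard to reproduce in the real world. It is not code from the paper; the look-at convention, sweep parameters, and the omitted rendering/classification steps are assumptions for illustration only.

# Sketch (not from the paper): systematically sample camera viewpoints on a
# viewing sphere around a target, as one might drive a photo-realistic
# simulator to study viewpoint dependency.
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Return a 4x4 camera-to-world pose whose -z axis points at `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0] = right
    pose[:3, 1] = true_up
    pose[:3, 2] = -forward      # camera looks along its -z axis
    pose[:3, 3] = eye
    return pose

def viewpoint_sweep(radius=2.0, azimuths=36, elevations=(10, 30, 60)):
    """Yield camera poses every 360/azimuths degrees in azimuth,
    at a few fixed elevation angles (degrees), at a fixed radius."""
    for elev in np.radians(elevations):
        for az in np.linspace(0, 2 * np.pi, azimuths, endpoint=False):
            eye = radius * np.array([np.cos(elev) * np.cos(az),
                                     np.cos(elev) * np.sin(az),
                                     np.sin(elev)])
            yield look_at(eye)

if __name__ == "__main__":
    poses = list(viewpoint_sweep())
    print(f"Generated {len(poses)} systematically varied camera poses")
    # In a full pipeline these poses would drive a photo-realistic renderer,
    # and the rendered, automatically labelled images would be fed to the
    # vision algorithm under test (e.g. a CNN object recogniser). Those steps
    # are simulator-specific and are omitted here.

Because the poses are generated programmatically, the same sweep can be repeated under different lighting conditions or along different camera paths, which is the repeatability the paper exploits for benchmarking.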

Impact and interest:

19 citations in Scopus
Search Google Scholar™

Citation counts are sourced monthly from Scopus and Web of Science® citation databases.

These databases contain citations from different subsets of available publications and different time periods and thus the citation count from each is usually different. Some works are not in either database and no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.

Citation counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.

Full-text downloads:

107 since deposited on 10 Aug 2017
39 in the past twelve months

Full-text downloads displays the total number of times this work’s files (e.g., a PDF) have been downloaded from QUT ePrints as well as the number of downloads in the previous 365 days. The count includes downloads for all files if a work has more than one.

ID Code: 109473
Item Type: Chapter in Book, Report or Conference volume (Conference contribution)
ORCID iD:
Skinner, John: orcid.org/0000-0001-8330-5835
Garg, Sourav: orcid.org/0000-0001-6068-3307
Suenderhauf, Niko: orcid.org/0000-0001-5286-3789
Corke, Peter: orcid.org/0000-0001-6650-367X
Milford, Michael: orcid.org/0000-0002-5162-1793
Measurements or Duration: 8 pages
Keywords: Cameras, Complex networks, Computer hardware description languages, Computer vision, Convolution neural networks, Face recognition, High-fidelity simulations, Identical conditions, Intelligent robots, Learning systems, Lighting, Machine learning applications, Object recognition, Real-world image data, Robotic vision algorithms, Robotics, Training and testing, Viewpoint dependency
DOI: 10.1109/IROS.2016.7759425
ISBN: 978-1-5090-3762-9
Pure ID: 32996146
Divisions: Past > Institutes > Institute for Future Environments
Past > QUT Faculties & Divisions > Science & Engineering Faculty
Current > Research Centres > ARC Centre of Excellence for Robotic Vision
Funding:
Copyright Owner: Consult author(s) regarding copyright matters
Copyright Statement: This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons License (or other specified license) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to qut.copyright@qut.edu.au
Deposited On: 10 Aug 2017 02:38
Last Modified: 03 May 2024 13:39