I2-S2: Intra-image-SeqSLAM for more accurate vision-based localisation in underground mines

Zeng, Fan, Jacobson, Adam, Smith, David, Boswell, Nigel, Peynot, Thierry, & Milford, Michael (2018) I2-S2: Intra-image-SeqSLAM for more accurate vision-based localisation in underground mines. In Woodhead, I (Ed.) Proceedings of the Australasian Conference on Robotics and Automation (ACRA) 2018. Australian Robotics and Automation Association (ARAA), Australia, pp. 1-10.

Accepted Version (PDF, 1MB): pap110s1-file1.pdf

Description

Many real-world robotic and autonomous vehicle applications, such as autonomous mining vehicles, require robust localisation under challenging environmental conditions. Laser range sensors have traditionally been used, but often get lost in the long tunnels that make up the major components of underground mines. Recent research and applied systems have increasingly used cameras, bringing new challenges with regard to robustness against appearance and viewpoint changes. In this paper we develop a novel visual place recognition algorithm for autonomous underground mining vehicles that provides sufficiently accurate (sub-metre) metric pose estimation while retaining the appearance-invariant and computationally lightweight characteristics of topological appearance-based methods. The challenge of the large viewing-angle variations typical of confined tunnels is addressed by incorporating multiple reference image candidates. The framework is evaluated on real-world multi-traverse datasets featuring different environments, including underground mining tunnels and office buildings. The reprojection error of image registration is ~50% lower than that of a state-of-the-art deep-learning-based method (MR-FLOW), using manually labelled ground truth on a set of images representing typical scenarios during the underground mining process.
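For readers unfamiliar with the SeqSLAM family of methods that this work extends, the core idea (scoring sequences of frames through a pairwise image-difference matrix rather than matching single frames) can be sketched in a few lines of Python/NumPy. This is a minimal, hypothetical illustration: the function names and parameters (seq_len, the velocity range) are assumptions, and the paper's intra-image extension and multiple-reference-candidate handling are not reproduced here.

    import numpy as np

    def patch_normalise(img, patch=8):
        """Local contrast normalisation over non-overlapping patches,
        a common SeqSLAM preprocessing step for illumination invariance."""
        h, w = img.shape
        out = np.zeros((h, w), dtype=np.float64)
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                p = img[y:y + patch, x:x + patch].astype(np.float64)
                std = p.std()
                out[y:y + patch, x:x + patch] = (p - p.mean()) / std if std > 0 else 0.0
        return out

    def difference_matrix(query_imgs, ref_imgs):
        """Mean absolute difference between every query/reference pair."""
        D = np.zeros((len(ref_imgs), len(query_imgs)))
        for i, r in enumerate(ref_imgs):
            for j, q in enumerate(query_imgs):
                D[i, j] = np.abs(r - q).mean()
        return D

    def best_match(D, seq_len=10, v_range=(0.8, 1.2), v_steps=5):
        """Score straight-line trajectories of length seq_len through D at
        several assumed velocities; return the reference index ending the
        lowest-cost trajectory for the most recent query frames."""
        n_ref, n_query = D.shape
        j0 = n_query - seq_len  # first column of the sequence window
        best_score, best_ref = np.inf, -1
        for i_end in range(n_ref):
            for v in np.linspace(v_range[0], v_range[1], v_steps):
                rows = i_end - v * np.arange(seq_len - 1, -1, -1)
                rows = np.clip(np.round(rows).astype(int), 0, n_ref - 1)
                score = D[rows, np.arange(j0, n_query)].mean()
                if score < best_score:
                    best_score, best_ref = score, i_end
        return best_ref, best_score

In this sketch, frames are patch-normalised, a difference matrix is built between the query and reference traverses, and the localisation decision is the reference frame ending the cheapest constant-velocity path through that matrix; this is the standard SeqSLAM recipe on which the intra-image variant described above builds.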

Impact and interest:

4 citations in Scopus

Full-text downloads:

167 since deposited on 06 Feb 2019
20 in the past twelve months

ID Code: 125531
Item Type: Chapter in Book, Report or Conference volume (Conference contribution)
ORCID iD:
Zeng, Fan: orcid.org/0000-0002-2564-7260
Jacobson, Adam: orcid.org/0000-0002-8452-261X
Peynot, Thierry: orcid.org/0000-0001-8275-6538
Milford, Michael: orcid.org/0000-0002-5162-1793
Measurements or Duration: 10 pages
Pure ID: 33311437
Divisions: Past > Institutes > Institute for Future Environments
Past > QUT Faculties & Divisions > Science & Engineering Faculty
Current > Research Centres > ARC Centre of Excellence for Robotic Vision
Funding:
Copyright Owner: Consult author(s) regarding copyright matters
Copyright Statement: This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons License (or other specified license) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to qut.copyright@qut.edu.au
Deposited On: 06 Feb 2019 05:41
Last Modified: 09 Mar 2024 02:16