LoST? Appearance-invariant place recognition for opposite viewpoints using visual semantics

Garg, S., Suenderhauf, N., & Milford, M. (2018) LoST? Appearance-invariant place recognition for opposite viewpoints using visual semantics. In Howard, T., Atanasov, N., Srinivasa, S., & Kress-Gazit, H. (Eds.) Robotics: Science and Systems XIV. Robotics Science and Systems Foundation, http://www.roboticsproceedings.org/index.html, pp. 1-10.


Description

Human visual scene understanding is so remarkable that we are able to recognize a revisited place when entering it from the opposite direction to that in which it was first visited, even in the presence of extreme variations in appearance. This capability is especially apparent during driving: a human driver can recognize where they are when travelling in the reverse direction along a route for the first time, without having to turn back and look. The difficulty of this problem exceeds any addressed in past appearance- and viewpoint-invariant visual place recognition (VPR) research, in part because large parts of the scene are not commonly observable from opposite directions. Consequently, as shown in this paper, the precision-recall performance of current state-of-the-art viewpoint- and appearance-invariant VPR techniques is orders of magnitude below what would be usable in a closed-loop system. Current engineered solutions predominantly rely on panoramic camera or LIDAR sensing setups: eminently suitable engineering solutions, but clearly very different from how humans navigate, which also has implications for how naturally humans could interact and communicate with the navigation system. In this paper we develop a suite of novel semantic- and appearance-based techniques to enable, for the first time, high-performance place recognition in this challenging scenario. We first propose a novel Local Semantic Tensor (LoST) descriptor of images using the convolutional feature maps from a state-of-the-art dense semantic segmentation network. Then, to verify the spatial semantic arrangement of the top matching candidates, we develop a novel approach for mining semantically salient keypoint correspondences.
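The descriptor idea in the abstract can be sketched roughly as follows. This is a hypothetical simplification, not the authors' implementation: given a dense convolutional feature map from a segmentation network and its per-pixel class labels, aggregate the features belonging to each semantic class and concatenate the L2-normalized per-class vectors into one image descriptor. The function name `lost_style_descriptor` and the mean-pooling choice are assumptions made for illustration.

```python
import numpy as np

def lost_style_descriptor(feat_map, labels, num_classes):
    """Hypothetical sketch of a semantic-tensor-style descriptor.

    feat_map: (H, W, C) dense convolutional features from a semantic
              segmentation network.
    labels:   (H, W) integer per-pixel class labels from the same network.
    Returns the concatenation of L2-normalized per-class feature vectors,
    shape (num_classes * C,).
    """
    H, W, C = feat_map.shape
    flat_feats = feat_map.reshape(-1, C)
    flat_labels = labels.reshape(-1)
    parts = []
    for c in range(num_classes):
        mask = flat_labels == c
        if mask.any():
            v = flat_feats[mask].mean(axis=0)  # pool features of class c
        else:
            v = np.zeros(C)                    # class absent in this image
        n = np.linalg.norm(v)
        parts.append(v / n if n > 0 else v)    # L2-normalize each class block
    return np.concatenate(parts)

def cosine_similarity(d1, d2):
    """Score a query descriptor against a reference descriptor."""
    return float(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2)))
```

In use, a query image's descriptor would be compared by cosine similarity against all reference descriptors to retrieve top matching candidates, which the paper then verifies via keypoint correspondences.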

Impact and interest:

66 citations in Scopus
Search Google Scholar™

Citation counts are sourced monthly from Scopus and Web of Science® citation databases.

These databases contain citations from different subsets of available publications and different time periods and thus the citation count from each is usually different. Some works are not in either database and no count is displayed. Scopus includes citations from articles published in 1996 onwards, and Web of Science® generally from 1980 onwards.

Citation counts from the Google Scholar™ indexing service can be viewed at the linked Google Scholar™ search.

Full-text downloads:

147 since deposited on 15 Jan 2019
26 in the past twelve months

Full-text downloads displays the total number of times this work’s files (e.g., a PDF) have been downloaded from QUT ePrints as well as the number of downloads in the previous 365 days. The count includes downloads for all files if a work has more than one.

ID Code: 124630
Item Type: Chapter in Book, Report or Conference volume (Conference contribution)
ORCID iD:
Garg, Sourav orcid.org/0000-0001-6068-3307
Suenderhauf, Niko orcid.org/0000-0001-5286-3789
Milford, Michael orcid.org/0000-0002-5162-1793
Measurements or Duration: 10 pages
DOI: 10.15607/RSS.2018.XIV.022
ISBN: 978-0-9923747-4-7
Pure ID: 33310350
Divisions: Past > Institutes > Institute for Future Environments
Past > QUT Faculties & Divisions > Science & Engineering Faculty
Funding:
Copyright Owner: Consult author(s) regarding copyright matters
Copyright Statement: This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons License (or other specified license) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to qut.copyright@qut.edu.au
Deposited On: 15 Jan 2019 04:10
Last Modified: 28 Jul 2024 22:50