Visual place recognition is an essential capability for robot systems: by recognizing previously visited places, a robot can effectively reduce accumulated position error. A major challenge in this field, however, is remaining robust to the viewpoint and illumination changes caused by environmental variation. To address this problem, we propose a novel coarse-to-fine visual place recognition method that incorporates both high-level semantic features and bag context information. First, the semantic features of an image are extracted with a deep learning network, which converts the raw image data into a simpler instance-level representation. In the coarse stage, we apply a Bag-of-Words (BoW) model to quantify the semantic features by computing their frequency distribution. In the fine stage, we exploit the spatial correlations among semantic objects to further distinguish similar scenes. Together, these two stages yield a robust visual place recognition pipeline that handles viewpoint and condition variations. Experimental results on several datasets, covering both indoor and outdoor scenes, show that our method achieves good performance, especially under extreme conditions.
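The two-stage matching described above can be illustrated with a minimal sketch. This is not the paper's implementation: the semantic vocabulary, the use of cosine similarity for the coarse BoW comparison, and the pairwise-centroid-distance scoring for the fine spatial stage are all illustrative assumptions; in the actual method, labels would come from the deep network mentioned in the abstract.

```python
from collections import Counter
import math

# Hypothetical semantic vocabulary; in the paper, instance labels would be
# produced by a deep segmentation/detection network.
VOCAB = ["car", "tree", "building", "sign", "person"]

def bow_vector(labels):
    """Coarse stage: L2-normalized frequency histogram of semantic labels."""
    counts = Counter(labels)
    vec = [float(counts.get(w, 0)) for w in VOCAB]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def coarse_score(labels_a, labels_b):
    """Cosine similarity between BoW vectors (1.0 = identical distributions)."""
    return sum(x * y for x, y in zip(bow_vector(labels_a), bow_vector(labels_b)))

def fine_score(objects_a, objects_b):
    """Fine stage (sketch): compare the spatial layout of semantic objects.

    objects_* are lists of (label, (x, y)) centroids. For every pair of
    labels shared by both images, we compare the closest pairwise centroid
    distances; 1.0 means identical layout, 0.0 means no shared label pairs.
    """
    def pair_dists(objs):
        d = {}
        for i, (la, pa) in enumerate(objs):
            for lb, pb in objs[i + 1:]:
                key = tuple(sorted((la, lb)))
                d.setdefault(key, []).append(math.dist(pa, pb))
        return d

    da, db = pair_dists(objects_a), pair_dists(objects_b)
    shared = set(da) & set(db)
    if not shared:
        return 0.0
    diffs = []
    for k in shared:
        xa, xb = min(da[k]), min(db[k])
        diffs.append(abs(xa - xb) / max(xa, xb, 1e-9))
    return 1.0 - sum(diffs) / len(diffs)
```

In a retrieval loop, the cheap `coarse_score` would shortlist candidate places and `fine_score` would then disambiguate visually similar scenes by their object layout, matching the coarse-to-fine structure the abstract describes.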