Development of a Highly Precise Place Recognition Module for Effective Human-robot Interactions in Changing Lighting and Viewpoint Conditions

Date
2020-01-07
Authors
Baumgartl, Hermann
Buettner, Ricardo
Abstract
We present a highly precise and robust module for indoor place recognition, extending the work by Lemaignan et al. and Robert Jr. by giving the robot the ability to recognize its environmental context. We developed a full end-to-end convolutional neural network architecture, building on a pre-trained deep convolutional neural network and the explicit inductive bias transfer learning strategy. Experimental results on the York University and Rzeszów University datasets show excellent performance (over 94.75 and 97.95 percent accuracy, respectively) and a high level of robustness to changes in camera viewpoint and lighting conditions, outperforming current benchmarks. Furthermore, our architecture is 82.46 percent smaller than the current benchmark, making the module suitable for embedding in mobile robots and easily adaptable to other datasets without the need for heavy adjustments.
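As a rough illustration of the pre-trained-backbone transfer learning approach described in the abstract, the sketch below freezes an ImageNet-trained MobileNetV2 feature extractor and retrains only a new classification head for a handful of hypothetical indoor place classes. The backbone choice, class count, and training step are assumptions for illustration, not the authors' exact architecture or datasets.

```python
# Minimal transfer-learning sketch for indoor place recognition.
# Assumptions: MobileNetV2 backbone, 5 hypothetical place classes,
# dummy input batch. Not the paper's exact architecture.
import torch
import torch.nn as nn
from torchvision import models

NUM_PLACES = 5  # hypothetical number of indoor place classes

# Load an ImageNet pre-trained backbone (the transferred inductive bias).
backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

# Freeze the convolutional feature extractor; only the new head is trained.
for param in backbone.features.parameters():
    param.requires_grad = False

# Replace the classifier head with one sized for the place classes.
backbone.classifier[1] = nn.Linear(backbone.last_channel, NUM_PLACES)

optimizer = torch.optim.Adam(backbone.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch (replace with real place images).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_PLACES, (8,))
optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```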
Keywords
convolutional neural networks, human-robot interaction, inductive bias transfer learning, machine learning, place recognition module
Extent
10 pages
Related To
Proceedings of the 53rd Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International