ShelfMCL - Semantic particle filter localization with low-cost sensors

Team members:
Shivendra Agrawal (Lead)
Ashutosh Naik (Graduate Researcher)
Dusty Woods (Lab Manager)
Jake Brawer (Postdoc)
Bradley Hayes (Advisor)

Overview

  • Problem: Estimating where an autonomous system is in a dynamic, cluttered environment such as a grocery store remains a challenging, unsolved problem. Reliable localization is a prerequisite for providing navigation guidance.
  • Solution: We present a novel Semantic Monte Carlo Localization (MCL) algorithm that requires only an RGB-D camera and a visual odometry camera. The system can be (1) mounted on a cart or (2) worn on the body. The core filter loop is sketched below.
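
To make the pipeline concrete, here is a minimal sketch of one Semantic MCL iteration, assuming 2D particle poses (x, y, theta), a body-frame motion delta from the visual-odometry camera, and a measurement model passed in as semantic_likelihood. All names, noise scales, and data layouts here are illustrative, not our actual implementation.

    import numpy as np

    def mcl_step(particles, vo_delta, observation, semantic_map, semantic_likelihood):
        """One Semantic MCL iteration. particles: float array of shape (N, 3) holding
        (x, y, theta) pose hypotheses; vo_delta: (dx, dy, dtheta) in the body frame."""
        n = len(particles)

        # Predict: rotate the body-frame VO delta into each particle's frame, add noise.
        dx, dy, dth = vo_delta
        c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
        particles[:, 0] += c * dx - s * dy + np.random.normal(0, 0.02, n)
        particles[:, 1] += s * dx + c * dy + np.random.normal(0, 0.02, n)
        particles[:, 2] += dth + np.random.normal(0, 0.01, n)

        # Weight: score each hypothesis against the fused depth + semantic observation.
        w = np.array([semantic_likelihood(p, observation, semantic_map) for p in particles])
        w /= w.sum()

        # Resample proportionally to weight so particles concentrate on consistent poses.
        return particles[np.random.choice(n, size=n, p=w)]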

Challenges

Environmental Complexity:

  • The uniform, symmetric geometry of aisles leaves depth and LiDAR observations essentially featureless, making them unreliable for localization.
  • Constantly changing product stock and product poses further exacerbate the localization challenge.


Our Solution: A Novel Semantic Particle Filter

Minimal Sensor Requirements

Our system relies only on low-cost sensors: an RGB-D camera and a visual odometry camera. It requires neither LiDAR nor wheel odometry; the sketch below shows how the visual-odometry stream can stand in for wheel odometry.
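
As an illustration (not our exact implementation), the relative motion between two consecutive poses reported by the visual-odometry camera can serve directly as the filter's control input, assuming 2D poses (x, y, theta):

    import numpy as np

    def vo_delta(prev_pose, curr_pose):
        """Relative motion between consecutive visual-odometry poses, expressed in the
        previous body frame, for use as the motion input of the particle filter."""
        dx = curr_pose[0] - prev_pose[0]
        dy = curr_pose[1] - prev_pose[1]
        # Rotate the world-frame displacement into the previous body frame.
        c, s = np.cos(-prev_pose[2]), np.sin(-prev_pose[2])
        dth = curr_pose[2] - prev_pose[2]
        dth = np.arctan2(np.sin(dth), np.cos(dth))  # wrap the heading change to [-pi, pi]
        return (c * dx - s * dy, s * dx + c * dy, dth)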

Modularity

  • Mountable on Carts/Strollers: Can add autonomous capabilities to existing equipment.
  • Wearable: Can support assistive technology for navigation.


Algorithm

  • Semantic Mapping

    We trained a custom classifier that assigns shelf products to a fixed set of semantic classes.

  • Pose Correction

    Real-world pose estimates obtained through inverse camera projection are refined using ray casting on the semantic map (see the ray-casting sketch after this list).


  • Semantic Localization

    Semantic information is fused with the depth observation in a Monte Carlo Localization framework; the measurement-model sketch after this list illustrates one way such a fusion can be scored.

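To make the ray-casting step concrete, here is a minimal sketch against an assumed map representation: a 2D grid whose cells hold a product-class ID, with 0 marking free space and the map origin at (0, 0). Our actual map format, resolution, and interfaces may differ.

    import numpy as np

    def ray_cast(semantic_grid, pose, bearing, max_range=8.0, resolution=0.05):
        """March a ray from pose = (x, y, theta) along a body-frame bearing (radians)
        through a grid of class IDs; return (range, class_id) of the first labeled hit."""
        x, y, th = pose
        dx, dy = np.cos(th + bearing), np.sin(th + bearing)
        for r in np.arange(0.0, max_range, resolution / 2.0):
            i, j = int((y + r * dy) / resolution), int((x + r * dx) / resolution)
            if not (0 <= i < semantic_grid.shape[0] and 0 <= j < semantic_grid.shape[1]):
                break  # the ray left the mapped area
            if semantic_grid[i, j] != 0:  # hit a shelf cell labeled with a product class
                return r, int(semantic_grid[i, j])
        return max_range, 0  # no labeled cell within range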

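Continuing the sketch above (and reusing its numpy import and ray_cast function), a correspondingly simple per-particle measurement model fuses depth agreement and class agreement along a set of classified rays. The observation format, mixture weight, and noise scale below are placeholders rather than tuned values.

    def semantic_likelihood(pose, observation, semantic_grid, sigma_d=0.3, w_sem=0.5):
        """Score a pose hypothesis. observation: iterable of (bearing, depth, class_id)
        tuples derived from the RGB-D frame after product classification."""
        score = 1.0
        for bearing, depth, cls in observation:
            exp_range, exp_cls = ray_cast(semantic_grid, pose, bearing)
            # Depth agreement: Gaussian penalty on the range error.
            p_depth = np.exp(-0.5 * ((depth - exp_range) / sigma_d) ** 2)
            # Semantic agreement: reward rays whose predicted class matches the detection.
            p_sem = 1.0 if cls == exp_cls else 0.1
            score *= (1.0 - w_sem) * p_depth + w_sem * p_sem
        return score
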
Demo


(Left) Ground-truth pose for the run above; (Right) pose estimates from our system.

Preliminary Results

(Detailed results are coming soon)

  • Without semantic information, the particle filter converges to an incorrect pose.
  • With semantic information, our system achieves robust global localization and maintains it.