Appearance-based landmark selection for efficient long-term visual localization

Published on Aug 8, 2016

In this video we present an online landmark selection method for efficient and accurate visual localization under changing appearance conditions. The wide range of conditions encountered during long-term visual localization, e.g., by fleets of autonomous vehicles, offers the potential to exploit redundancy and reduce data usage by selecting only those visual cues that are relevant at the given time. To this end, co-observability statistics guide landmark ranking and selection, significantly reducing the amount of information used for localization while maintaining or even improving accuracy.
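The ranking idea can be sketched in a few lines: landmarks that were frequently observed together with the landmarks already matched in the current session are likely to be useful now. The following is a minimal illustration of that scoring scheme; the function and variable names are hypothetical and do not come from the paper's implementation.

```python
from collections import Counter

def rank_landmarks(sessions, current_obs, k):
    """Rank map landmarks by how often they were co-observed with the
    landmarks matched in the current session, then keep the top k.

    `sessions` is a list of past observation sets (landmark ids per
    session); `current_obs` is the set of landmark ids matched so far.
    Illustrative sketch only, not the paper's actual pipeline.
    """
    scores = Counter()
    for obs in sessions:
        # Sessions that share a landmark with the current observations
        # cast co-observability "votes" for all their landmarks.
        if obs & current_obs:
            for lm in obs:
                scores[lm] += 1
    # Select only the k most relevant landmarks for localization.
    return [lm for lm, _ in scores.most_common(k)]
```

Selecting a small `k` is what reduces the data used per localization query while keeping the cues that matter under the current appearance conditions.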

The approach is demonstrated in two different environments: one with day-to-night appearance changes, and one with seasonal and weather changes spanning one year. Data is recorded with a car equipped with four cameras facing forwards, backwards, left, and right (representative images shown in the top-left during the video).

For more information, please refer to the following publication:
M. Bürki, I. Gilitschenski, E. Stumm, R. Siegwart, and J. Nieto, "Appearance-based landmark selection for efficient long-term visual localization," (to appear) in IROS 2016.

Note: The vehicle poses are estimated by solving a vision-only nonlinear least-squares optimization problem; there is no temporal filtering or fusion with other sensor data (wheel odometry, IMU, etc.). This accounts for the "shaking" of the vehicle in the visualization.
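As a toy illustration of why per-frame estimation jitters, the sketch below solves each pose independently by nonlinear least squares on bearing observations of known 2D landmarks, with no motion model linking frames. It is a simplified stand-in (2D pose, bearing-only residuals, SciPy's generic solver), not the system used in the video.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_pose(landmarks, bearings_obs, x0):
    """Estimate one 2D pose (x, y, theta) from bearing observations of
    known landmarks.  Each frame is solved independently, so per-frame
    measurement noise appears directly as pose jitter ("shaking").

    landmarks    : (N, 2) array of known landmark positions
    bearings_obs : (N,) observed bearings in the robot frame [rad]
    x0           : initial guess for (x, y, theta)
    """
    def residuals(pose):
        x, y, th = pose
        d = landmarks - np.array([x, y])          # vectors to landmarks
        pred = np.arctan2(d[:, 1], d[:, 0]) - th  # predicted bearings
        r = bearings_obs - pred
        # Wrap angular residuals to (-pi, pi] to avoid 2*pi jumps.
        return np.arctan2(np.sin(r), np.cos(r))

    return least_squares(residuals, x0).x
```

Fusing such per-frame estimates with odometry or an IMU (which the video deliberately omits) would smooth the trajectory at the cost of obscuring the raw localization quality.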

