Published on May 5, 2015
In this video, we demonstrate a multi-modal people tracking system for mobile robots in crowded environments that was developed during the EU FP7 project SPENCER. The video depicts a real use-case, where a robot equipped with forward and backward RGB-D and 2D laser range sensors is driving through a highly dynamic airport environment.
We use the existing RGB-D upper-body and groundHOG detectors described in [1] and a re-implementation of the 2D laser segment detector from [2], combined with a very efficient NN-IMM tracking system (nearest-neighbor data association with an Interacting Multiple Model filter) that includes track initiation logic. In contrast to more sophisticated data association schemes such as MHT, our tracker runs very fast even in complex scenarios, with low CPU usage (less than 10% of a single Core i7 core), leaving enough computational resources to run higher-level perception components onboard the robot.
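To illustrate the idea behind the tracker's data association step, here is a minimal sketch of greedy nearest-neighbor association with distance gating. It is a simplified illustration, not the actual SPENCER implementation: tracks and detections are plain 2D points, and the gate value and the consecutive-detection initiation rule mentioned in the comment are assumptions.

```python
import math

def nearest_neighbor_associate(tracks, detections, gate=1.0):
    """Greedy nearest-neighbor association: repeatedly pair the closest
    remaining (track, detection) whose distance lies within the gate.
    Returns matched pairs plus the leftover track/detection indices."""
    pairs = []
    free_tracks = set(range(len(tracks)))
    free_dets = set(range(len(detections)))
    # Enumerate all candidate pairings that pass the gating test.
    candidates = [
        (math.dist(tracks[t], detections[d]), t, d)
        for t in free_tracks for d in free_dets
        if math.dist(tracks[t], detections[d]) <= gate
    ]
    # Commit pairings in order of increasing distance (greedy NN).
    for _, t, d in sorted(candidates):
        if t in free_tracks and d in free_dets:
            pairs.append((t, d))
            free_tracks.remove(t)
            free_dets.remove(d)
    return pairs, free_tracks, free_dets
```

Unmatched detections would then feed the track initiation logic (e.g. a candidate becomes a confirmed track once re-detected over several consecutive frames), while tracks left unmatched for too long are terminated.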
The video shows work in progress. We currently use a flexible detection-to-detection fusion pipeline configured via XML, which we may later replace with detection-to-track fusion. We also plan to handle lengthy occlusions more robustly, increase the recall of our laser detectors, and integrate far-range detections from stereo vision as well as online-learned appearance models.
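As a rough illustration of what an XML-configured detection-to-detection fusion pipeline could look like, here is a hypothetical config sketch; the element names, attributes, and topic names are invented for illustration and do not reflect the actual SPENCER schema.

```xml
<!-- Hypothetical fusion config: combine detections from several sensors
     into one fused detection stream. All names are illustrative. -->
<fusion_pipeline>
  <source topic="/detections/rgbd_upper_body"/>
  <source topic="/detections/ground_hog"/>
  <source topic="/detections/laser_front"/>
  <fuse method="nearest_neighbor" gating_distance="0.5"/>
  <output topic="/detections/fused"/>
</fusion_pipeline>
```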
The visualizations were generated using a series of custom RViz plugins that are part of our modular, ROS-based people tracking framework. A preview of the framework can be found online at: https://github.com/spencer-project/sp...
[1] O. Hosseini Jafari, D. Mitzel, B. Leibe. Real-Time RGB-D Based People Detection and Tracking for Mobile Robots and Head-Worn Cameras. ICRA 2014.
[2] K. O. Arras, O. Martinez Mozos, W. Burgard. Using Boosted Features for the Detection of People in 2D Range Data. ICRA 2007.