Published on Sep 12, 2016
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and latency on the order of microseconds. However, because the output is a stream of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so a paradigm shift is needed. We introduce the problem of Event-based Multi-View Stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which estimate dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with a known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (i) its ability to respond to scene edges, which naturally provide semi-dense geometric information without any preprocessing, and (ii) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm produces accurate, semi-dense depth maps. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real time on a laptop CPU and even on a smartphone processor.
Paper: H. Rebecq, G. Gallego, D. Scaramuzza, "EMVS: Event-based Multi-View Stereo," British Machine Vision Conference (BMVC), York, 2016. Best BMVC'16 Industry Paper Award! PDF: http://rpg.ifi.uzh.ch/docs/BMVC16_Reb...