I am happy to announce our recent paper on event camera tracking, which was just published in the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI). This time we specifically target VR/AR applications. The idea of the paper is to first build a dense photometric 3D map of the scene (for example, with a standard camera or an RGB-D camera) and then perform 6-DOF camera tracking from events only, in an event-by-event fashion (so with virtually microsecond latency). The motivation is to overcome the known limitations of inside-out tracking with standard cameras: high latency, motion blur, and low dynamic range. We show that with the event camera we can tackle all of these problems at once. We also show in the video that we can track the camera during very aggressive motions, which naturally cause standard camera tracking to fail.

Paper:
Gallego, Lund, Mueggler, Rebecq, Delbruck, Scaramuzza
Event-based, 6-DOF Camera Tracking from Photometric Depth Maps
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
http://rpg.ifi.uzh.ch/docs/PAMI17_Gallego.pdf
https://www.youtube.com/watch?v=iZZ77F-hwzs

Abstract: Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. These features, along with a very low power consumption, make event cameras an ideal complement to standard cameras for VR/AR and video game applications. With these applications in mind, this paper tackles the problem of accurate, low-latency tracking of an event camera from an existing photometric depth map (i.e., intensity plus depth information) built via classic dense reconstruction pipelines. Our approach tracks the 6-DOF pose of the event camera upon the arrival of each event, thus virtually eliminating latency. We successfully evaluate the method in both indoor and outdoor scenes and show that—because of the technological advantages of the event camera—our pipeline works in scenes characterized by high-speed motion, which are still inaccessible to standard cameras.
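
For readers who want a feel for what "one pose update per event" can look like, here is a deliberately simplified toy sketch (a 2-D translation over a synthetic textured plane, a finite-difference gradient, and a hand-tuned gain; all function names and parameters are mine). It only illustrates the idea of correcting the pose from each event's predicted vs. measured brightness change; the paper's actual method estimates the full 6-DOF pose with a principled probabilistic filter on a real photometric depth map:

import numpy as np

# Toy event-by-event pose update (NOT the paper's filter): the "map" is a
# synthetic textured plane and the "pose" is a 2-D translation only.
def log_intensity(u, v, pose):
    # Log-intensity the map predicts at pixel (u, v) for a given camera pose.
    tx, ty = pose
    return np.sin(0.1 * (u + tx)) + np.cos(0.1 * (v + ty))

def predicted_contrast(u, v, pose_now, pose_prev):
    # Brightness change the map predicts at (u, v) between two poses.
    return log_intensity(u, v, pose_now) - log_intensity(u, v, pose_prev)

def update_pose(pose, pose_prev, event, C=0.2, gain=0.5, eps=1e-4):
    # One small correction per event: compare the measured contrast (+/- C,
    # from the event polarity) with the contrast predicted by the map, then
    # nudge the pose along a finite-difference gradient of the prediction.
    u, v, polarity = event
    z = C if polarity > 0 else -C
    r = z - predicted_contrast(u, v, pose, pose_prev)   # innovation
    grad = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2)
        dp[i] = eps
        grad[i] = (predicted_contrast(u, v, pose + dp, pose_prev)
                   - predicted_contrast(u, v, pose, pose_prev)) / eps
    return pose + gain * r * grad / (grad @ grad + 1e-9)

pose_prev = np.array([0.0, 0.0])
pose = np.array([0.5, -0.3])                           # initial guess, slightly off
events = [(10, 20, +1), (11, 20, -1), (12, 21, +1)]    # toy (u, v, polarity) events
for e in events:
    pose_prev, pose = pose, update_pose(pose, pose_prev, e)
print("pose estimate after 3 events:", pose)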

I am excited to announce the first closed-loop autonomous flight with an event camera! We demonstrate that we can even fly in low-light environments where standard cameras fail due to motion blur! arXiv paper: https://arxiv.org/pdf/1709.06310
Abstract
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. These cameras do not suffer from motion blur and have a very high dynamic range, which enables them to provide reliable visual information during high-speed motions or in scenes characterized by high dynamic range. However, event cameras output little information when the amount of motion is limited, such as when the camera is almost still. Conversely, standard cameras provide instant and rich information about the environment most of the time (in low-speed and good-lighting scenarios), but they fail severely in the case of fast motions or difficult lighting, such as high-dynamic-range or low-light scenes. In this paper, we present the first state estimation pipeline that leverages the complementary advantages of these two sensors by fusing events, standard frames, and inertial measurements in a tightly-coupled manner. We show on the publicly available Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement of 130% over event-only pipelines and 85% over standard-frame-only visual-inertial systems, while still being computationally tractable. Furthermore, we use our pipeline to demonstrate—to the best of our knowledge—the first autonomous quadrotor flight using an event camera for state estimation, unlocking flight scenarios that were not reachable with traditional visual-inertial odometry, such as low-light environments and high-dynamic-range scenes.
Video: https://youtu.be/jIvJuWdmemE
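
For readers curious how such a tightly-coupled objective can be structured, here is a deliberately simplified, illustrative sketch (the function and variable names are mine, and the residuals are scalar toys; the paper combines full reprojection errors from event-frame and standard-frame feature tracks with IMU preintegration terms inside a keyframe-based nonlinear optimization):

import numpy as np

def total_cost(states, event_tracks, frame_tracks, imu_terms,
               w_evt=1.0, w_frm=1.0, w_imu=1.0):
    # states: one (toy, 1-D) state per keyframe; the three sums below mirror
    # the three residual types that a tightly-coupled fusion stacks into a
    # single objective.
    cost = 0.0
    for k, measured in event_tracks:      # features tracked in event frames
        cost += w_evt * (states[k] - measured) ** 2
    for k, measured in frame_tracks:      # features tracked in standard frames
        cost += w_frm * (states[k] - measured) ** 2
    for k, k1, delta in imu_terms:        # relative-motion terms from the IMU
        cost += w_imu * ((states[k1] - states[k]) - delta) ** 2
    return cost

# Toy usage: two keyframe "states" constrained by all three modalities; a real
# system would minimize this cost over poses, velocities, and IMU biases.
states = np.array([0.0, 1.0])
print(total_cost(states,
                 event_tracks=[(0, 0.1), (1, 1.05)],
                 frame_tracks=[(1, 0.95)],
                 imu_terms=[(0, 1, 1.0)]))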

We are excited to announce the new release of the S-PTAM system, which corresponds to our recent paper "S-PTAM: Stereo Parallel Tracking and Mapping."

The code is available at
https://github.com/lrse/sptam

S-PTAM in action:
https://www.youtube.com/watch?v=ojBB07JvDrY

You can find the paper at http://www.sciencedirect.com/science/article/pii/S0921889015302955

I am happy to release the paper and video demonstration of our latest work on visual-inertial odometry with an event camera. We show that we can accurately track motions both in high-speed scenarios (like spinning the camera attached to a leash) and in high-dynamic-range scenes, where VIO systems based on conventional cameras currently fail. The algorithm runs in real time even on a smartphone processor. The method works by tracking features extracted from the events. To do so, we consider events in overlapping spatio-temporal windows and align them using the current camera motion and scene structure, yielding motion-compensated event frames. We then combine these feature tracks in a keyframe-based visual-inertial odometry algorithm based on nonlinear optimization to estimate the camera's 6-DOF pose, velocity, and IMU biases.
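
As a rough illustration of the motion-compensation step, here is a small toy sketch (the names and the single-global-flow simplification are mine): each event is warped along an estimated image-plane velocity to a common reference time and accumulated, so that events generated by the same edge pile up sharply instead of smearing. The paper warps events using the current camera motion and scene structure rather than one global flow vector:

import numpy as np

def motion_compensated_frame(events, t_ref, flow, shape=(180, 240)):
    # events: iterable of (x, y, t, polarity); flow: (vx, vy) in pixels/second.
    # Each event is moved along the estimated motion to where it would have
    # been observed at time t_ref, then accumulated into an image.
    frame = np.zeros(shape)
    vx, vy = flow
    for x, y, t, pol in events:
        xw = int(round(x + vx * (t_ref - t)))
        yw = int(round(y + vy * (t_ref - t)))
        if 0 <= yw < shape[0] and 0 <= xw < shape[1]:
            frame[yw, xw] += pol
    return frame

# Toy usage: events from an edge moving right at 100 px/s collapse onto a
# single pixel once motion-compensated (a sharp "event frame").
events = [(50 + int(100 * t), 60, t, +1) for t in np.linspace(0.0, 0.02, 5)]
frame = motion_compensated_frame(events, t_ref=0.0, flow=(100.0, 0.0))
print("non-zero pixels:", np.count_nonzero(frame))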

Article: http://rpg.ifi.uzh.ch/docs/BMVC17_Rebecq.pdf
H. Rebecq, T. Horstschaefer, D. Scaramuzza, "Real-time Visual-Inertial Odometry for Event Cameras using Keyframe-based Nonlinear Optimization,"
British Machine Vision Conference (BMVC), London, 2017.
https://youtu.be/F3OFzsaPtvI

Dear colleagues, the video recordings and all the slides of the presentations of the first ICRA'17 Workshop on Event-based Vision (which took place in Singapore on June 2nd) are now available on the workshop webpage: http://rpg.ifi.uzh.ch/ICRA17_event_vision_workshop.html

I am excited to share our recent paper, "Active Exposure Control for Robust Visual Odometry in High Dynamic Range (HDR) Environments" (http://rpg.ifi.uzh.ch/docs/ICRA17_Zhang.pdf), by my student +Zichao Zhang, +Christian Forster, and myself, which will be presented at ICRA'17 next week in Singapore. A camera's built-in auto-exposure control is typically optimized for photography rather than for VO, which makes it difficult for VO and V-SLAM pipelines to cope with high-dynamic-range (HDR) scenes. In our paper, we therefore propose to actively control the exposure time of the camera to improve the robustness of VO and V-SLAM in HDR environments. Our active exposure control method selects the proper exposure time by maximizing a robust, gradient-based image quality metric, and the optimization exploits the photometric response function of the camera. We evaluate the method in different real-world environments and show that it outperforms both the camera's built-in auto-exposure and a fixed exposure time. Finally, to validate the benefit of our approach, we tested it with different state-of-the-art visual odometry pipelines (ORB-SLAM2, DSO, and SVO 2.0) and demonstrate significantly improved performance in very challenging HDR environments! Datasets and code will be released soon! Enjoy the video and the read! Video: https://youtu.be/TKJ8vknIXbM
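
To give a flavor of the exposure-selection idea, here is a simplified sketch (all functions below are my own toy stand-ins): it predicts how the image would look at a few candidate exposure times using an assumed gamma-type response curve and picks the exposure that maximizes a gradient-magnitude metric. The paper instead optimizes the exposure time directly using the camera's calibrated photometric response function and a robust metric:

import numpy as np

def simulate_image(irradiance, exposure, gamma=2.2):
    # Predicted 8-bit image for a candidate exposure, with saturation and an
    # assumed gamma-type response (stand-in for a calibrated response function).
    x = np.clip(irradiance * exposure, 0.0, 1.0)
    return (x ** (1.0 / gamma)) * 255.0

def gradient_metric(img):
    # Sum of image-gradient magnitudes; saturated or underexposed regions
    # contribute little, so the metric rewards well-exposed texture.
    gy, gx = np.gradient(img.astype(np.float64))
    return np.sum(np.sqrt(gx ** 2 + gy ** 2))

def select_exposure(irradiance, candidates):
    # Brute-force over a few candidate exposures (the paper optimizes the
    # exposure time directly rather than enumerating candidates).
    scores = [gradient_metric(simulate_image(irradiance, e)) for e in candidates]
    return candidates[int(np.argmax(scores))]

# Toy HDR-like scene: a bright smooth region next to a dark textured region.
rng = np.random.default_rng(0)
irradiance = np.hstack([np.tile(np.linspace(5.0, 10.0, 50), (50, 1)),
                        0.02 + 0.01 * rng.random((50, 50))])
print("chosen exposure:", select_exposure(irradiance, [0.01, 0.05, 0.2, 1.0]))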

A gentle reminder to those of you who will be at ICRA next week in Singapore: come and join our First International Workshop on Event-based Vision! We will also have 12 live demos! The final schedule of the workshop can be found here: http://rpg.ifi.uzh.ch/ICRA17_event_vision_workshop.html The workshop is partially sponsored by nuTonomy!
----------------------------------
From Academia:
----------------------------------
- Andrew Davison, Imperial College London
- Kostas Daniilidis, University of Pennsylvania
- Tobi Delbruck, ETH Zurich / University of Zurich, lead inventor of the DVS/DAVIS sensors
- Jörg Conradt, Technical University of Munich
- Chiara Bartolozzi, Istituto Italiano di Tecnologia
- Garrick Orchard, National University of Singapore
- Davide Scaramuzza, University of Zurich
----------------------------------
From Industry:
----------------------------------
- Yoel Yaffe, SAMSUNG Electronics
- Xavier Lagorce from CHRONOCAM, also inventor of the ATIS sensor
- Brian Taba, IBM Research
- Christian Brandli, CEO and founder of Insightness
- Hanme Kim, co-founder of Slamcore
- Sven-Erik Jacobsen, founder of INIVATION

Are there any SLAM-related workshops or symposia planned for 2017...?

I wish you all a happy new year by announcing our latest work. We present EVO, a geometric approach to event-based 6-DOF parallel tracking and mapping in real time, which has recently been accepted at IEEE RA-L. EVO leverages the outstanding properties of event cameras to track fast camera motions while recovering a semi-dense 3D map of the environment. The implementation runs in real time on a standard CPU and outputs up to several hundred pose estimates per second. Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging high-dynamic-range conditions with strong illumination changes (see especially minute 0:57 of the video, when we point the camera towards the sun, and minute 1:43, when we switch the lights off and on!). To achieve this, we combine a novel event-based tracking approach based on image-to-model alignment with our recent event-based multi-view stereo algorithm (EMVS, BMVC'16), running in parallel; a structural sketch of this organization follows below. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, although our algorithm does not require such intensity information. We believe that this work makes significant progress in SLAM by unlocking the potential of event cameras, allowing us to tackle challenging scenarios that are currently inaccessible to standard cameras.
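
As a purely structural illustration of the "parallel tracking and mapping" organization, here is a toy sketch with placeholder computations (the alignment and mapping bodies are stand-ins I made up, not EVO's image-to-model alignment or EMVS): a tracking loop estimates the pose from incoming event batches and pushes keyframes to a mapping loop that extends the shared map, with the two running concurrently:

import queue
import threading

# Shared state between the two loops: a toy map size and a toy pose estimate.
shared = {"map_points": 0, "pose": 0.0}
keyframes = queue.Queue()
lock = threading.Lock()

def tracking(event_batches):
    for batch in event_batches:
        with lock:
            # Placeholder for aligning the event batch to the current map
            # (image-to-model alignment in EVO); here we just bump a toy pose.
            shared["pose"] += 0.1 * len(batch)
            pose_now = shared["pose"]
        if len(batch) > 3:               # toy keyframe-selection rule
            keyframes.put((pose_now, batch))
    keyframes.put(None)                  # tell the mapping loop to stop

def mapping():
    while True:
        kf = keyframes.get()
        if kf is None:
            break
        with lock:
            # Placeholder for extending the semi-dense map from the keyframe
            # (event-based multi-view stereo in EVO).
            shared["map_points"] += len(kf[1])

batches = [[1] * n for n in (2, 5, 3, 6)]          # toy "event batches"
t_track = threading.Thread(target=tracking, args=(batches,))
t_map = threading.Thread(target=mapping)
t_track.start(); t_map.start(); t_track.join(); t_map.join()
print("pose:", round(shared["pose"], 2), "map points:", shared["map_points"])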

Reference: Henri Rebecq, Timo Horstschaefer, Guillermo Gallego, Davide Scaramuzza, "EVO: A Geometric Approach to Event-based 6-DOF Parallel Tracking and Mapping in Real-time," IEEE Robotics and Automation Letters (RA-L), 2016.
http://rpg.ifi.uzh.ch/docs/RAL16_EVO.pdf

Our research page on event-based vision:
http://rpg.ifi.uzh.ch/research_dvs.html

Robotics and Perception Group, University of Zurich, 2016
http://rpg.ifi.uzh.ch/

https://youtu.be/bYqD2qZJlxE

If you are interested in the optimization / backend aspects of SLAM, you may like our recent WAFR paper:
http://www.wafr.org/papers/WAFR_2016_paper_138.pdf
(this just won the best paper award - a great collaboration with +David Rosen, +John Leonard, and +Afonso Bandeira)
An extended version of the paper (49 pages, including proofs, more results, and other cool stuff) is now available on arXiv: https://arxiv.org/pdf/1612.07386v1.pdf

In a nutshell, we demonstrate that a particular convex relaxation computes exact solutions for SLAM whenever the measurement noise is reasonable (which in practice covers essentially all instances found in robotics and computer vision applications). Moreover, we provide a numerical solver that solves the convex relaxation with optimality guarantees while being faster than standard iterative techniques (e.g., Gauss-Newton). Hope you guys like it - Happy Holidays!
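
For readers who want a feel for the kind of objective being relaxed, here is a schematic LaTeX sketch (assuming the amsmath package) of a standard pose-graph maximum-likelihood estimator, with the relaxation step summarized in the comments; the notation is simplified, so please consult the paper for the exact formulation, noise model, and guarantees:

% Schematic pose-graph maximum-likelihood estimation (simplified notation):
\begin{equation}
  \min_{\substack{R_i \in \mathrm{SO}(d) \\ t_i \in \mathbb{R}^d}}
  \sum_{(i,j) \in \mathcal{E}}
    \kappa_{ij} \,\lVert R_j - R_i \widetilde{R}_{ij} \rVert_F^2
  + \tau_{ij} \,\lVert t_j - t_i - R_i \widetilde{t}_{ij} \rVert_2^2
\end{equation}
% After eliminating the translations, the rotations can be stacked into a
% single matrix $R = [R_1 \;\cdots\; R_n]$ and the objective written as
% $\operatorname{tr}(\widetilde{Q}\, R^\top R)$; replacing $R^\top R$ by a
% positive-semidefinite matrix $Z$ with identity diagonal blocks (i.e.,
% dropping the rank and orthogonality structure) yields a convex semidefinite
% program, whose solution recovers the exact maximum-likelihood estimate when
% the measurement noise is moderate.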