I wish you all a happy new year by announcing our latest work. We present EVO: a geometric approach to Event-based 6-DOF Parallel Tracking and Mapping in Real-time, which has recently been accepted at IEEE RA-L. EVO leverages the outstanding properties of event cameras to track fast camera motions while recovering a semi-dense 3D map of the environment. The implementation runs in real-time on a standard CPU and outputs up to several hundred pose estimates per second.

Due to the nature of event cameras, our algorithm is unaffected by motion blur and operates very well in challenging, high-dynamic-range conditions with strong illumination changes (check especially minute 0:57 of the video, when we point the camera towards the sun, and minute 1:43, when we switch the lights off and on!). To achieve this, we combine a novel event-based tracking approach based on image-to-model alignment with our recent event-based multiview stereo algorithm (EMVS, BMVC'16) in a parallel fashion. Additionally, we show that the output of our pipeline can be used to reconstruct intensity images from the binary event stream, though our algorithm does not require such intensity information. We believe that this work makes significant progress in SLAM by unlocking the potential of event cameras, allowing us to tackle challenging scenarios that are currently inaccessible to standard cameras.
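For readers curious what "tracking on events" can look like in practice, here is a rough, hypothetical sketch (not the paper's actual implementation): the first step of an EVO-style tracker is typically to accumulate a short window of events into an edge-like "event image", which can then be aligned to a projection of the semi-dense map. The function name and the simplifications (ignoring timestamps and polarity) are ours, for illustration only:

```python
import numpy as np

# Hypothetical illustration, NOT the authors' code: accumulate a small
# window of events into a 2D "event image" whose edge-like structure
# could then be aligned to a projection of a semi-dense 3D map.

def accumulate_events(events, width, height):
    """Accumulate (x, y) event coordinates into a 2D histogram image.

    `events` is an (N, 2) integer array of pixel coordinates; timestamps
    and polarity are ignored in this simplified sketch.
    """
    img = np.zeros((height, width), dtype=np.float32)
    # np.add.at handles repeated indices correctly (unlike img[...] += 1).
    np.add.at(img, (events[:, 1], events[:, 0]), 1.0)
    return img

# Example: three events, two landing on the same pixel.
ev = np.array([[0, 0], [0, 0], [2, 1]])
img = accumulate_events(ev, width=4, height=3)
print(img[0, 0], img[1, 2])  # 2.0 1.0
```

Because events fire only at moving edges, such an image is naturally edge-dominated, which is what makes image-to-model alignment against a semi-dense (edge) map plausible.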

Reference: Henri Rebecq, Timo Horstschaefer, Guillermo Gallego, Davide Scaramuzza, "EVO: A Geometric Approach to Event-based 6-DOF Parallel Tracking and Mapping in Real-time," IEEE Robotics and Automation Letters (RA-L), 2016.
http://rpg.ifi.uzh.ch/docs/RAL16_EVO.pdf

Our research page on event based vision:
http://rpg.ifi.uzh.ch/research_dvs.html

Robotics and Perception Group, University of Zurich, 2016
http://rpg.ifi.uzh.ch/

https://youtu.be/bYqD2qZJlxE

If you are interested in the optimization / backend aspects of SLAM, you may like our recent WAFR paper:
http://www.wafr.org/papers/WAFR_2016_paper_138.pdf
(this just won the best paper award - a great collaboration with +David Rosen, +John Leonard, and +Afonso Bandeira)
An extended version of the paper (49 pages, including proofs, more results, and other cool stuff) is now available on arXiv: https://arxiv.org/pdf/1612.07386v1.pdf

In a nutshell, we demonstrate that a particular convex relaxation is able to compute exact solutions for SLAM when the measurement noise is reasonable (which practically covers all instances found in robotics / computer vision applications). Moreover, we provide a numerical solver that can solve the convex relaxation with optimality guarantees while being faster than standard iterative techniques (e.g., Gauss-Newton). Hope you guys like it - Happy Holidays!
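The paper's actual machinery (a semidefinite relaxation of pose-graph optimization with a specialized solver) is beyond a short snippet, but the underlying idea - relax a hard estimation problem so that, under reasonable noise, its global optimum can be read off a spectral decomposition - can be illustrated with a toy cousin: angular synchronization in 2D. Everything below (the setup, the function name) is our illustrative assumption, not the paper's algorithm:

```python
import numpy as np

# Toy illustration (assumed setup, NOT the paper's solver): 2D angular
# synchronization. Given relative angles delta_ij = theta_i - theta_j,
# recover the absolute angles (up to a global rotation) from the top
# eigenvector of a Hermitian "connection" matrix.

def angular_sync(n, measurements):
    """measurements: list of (i, j, delta) with delta = theta_i - theta_j."""
    H = np.zeros((n, n), dtype=complex)
    for i, j, delta in measurements:
        H[i, j] = np.exp(1j * delta)
        H[j, i] = np.exp(-1j * delta)
    # In the noise-free case H + I = u u^*, with u_k = e^{i theta_k},
    # so the top eigenvector of H is u up to a global phase.
    w, V = np.linalg.eigh(H)   # eigenvalues in ascending order
    v = V[:, -1]
    return np.angle(v / v[0])  # fix the gauge so that theta_0 = 0

true = np.array([0.0, 0.5, 1.2])
meas = [(0, 1, true[0] - true[1]),
        (1, 2, true[1] - true[2]),
        (0, 2, true[0] - true[2])]
est = angular_sync(3, meas)
print(np.allclose(est, true - true[0]))  # True (noise-free case)
```

The point of the toy is the same as in the paper: the combinatorially hard-looking problem becomes an eigenvector computation after relaxation, and under mild noise the relaxation loses nothing.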

Dear colleagues, I would be glad if you could share this job ad within your network. Several PhD student and Postdoc positions are now available on research projects sponsored by the DARPA FLA program, the SNSF-ERC Starting Grant, and NCCR Robotics. More info and applications here: http://rpg.ifi.uzh.ch/positions.html

As a PhD student or Postdoc in our lab, you will work in a team of researchers developing new algorithms to advance the state of the art in autonomous and agile navigation of vision-controlled quadrotors. Applicants with an interest in, and strong experience with, one of the following disciplines are encouraged to apply: Deep Learning, Control, Robot Vision. Women are strongly encouraged to apply.

Our vision is to one day make drones able to navigate as well as birds, or even better, using mainly onboard vision sensors. To achieve this goal, we investigate how machine learning, novel sensors (such as event cameras), and coupled perception and control can be used to advance the state of the art. We are looking for motivated researchers to help make this vision a reality!

The position is fully funded. PhD student and Postdoc positions in Switzerland are regular jobs with social benefits (i.e., a pension plan!). You will get a very competitive salary and access to excellent research facilities (motion capture, 3D printing, a large flying arena, electronic and machine workshops). Zurich is regularly ranked among the top cities in the world for quality of life. Additionally, we have a very enjoyable work atmosphere and organize many social events, such as ski trips, hikes, dinners, and lab retreats.

More info and applications here: http://rpg.ifi.uzh.ch/positions.html

Dear SLAM colleagues, I am very happy to announce the release of the first public collection of datasets recorded with an event camera (DAVIS) for pose estimation, visual odometry, and SLAM applications! The data include intensity images, inertial measurements, ground truth from a motion-capture system, and synthetic data, as well as an event camera simulator that allows you to create your own sequences! All the data are released both as standard text files and as binary files (rosbags).

Dataset: http://rpg.ifi.uzh.ch/davis_data.html
Paper: https://arxiv.org/pdf/1610.08336v1
Video of some of the sequences: https://youtu.be/bVVBTQ7l36I
More on our research on event-based vision: http://rpg.ifi.uzh.ch/research_dvs.html

We provide data:
* from a large variety of scenarios, ranging from indoor to outdoor scenes, including high dynamic range
* featuring a variety of motions, from slow to fast and from 1-DOF to 6-DOF
* with several sequences recorded using a motorized linear slider, leading to very smooth motions
* including synthetic data and an event camera simulator that allows you to create your own sequences
* including intensity images and inertial measurements at high frequencies
* with precise ground truth from a motion-capture system
* with accurate intrinsic and extrinsic calibration
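If you want to get started quickly, a minimal loader for the plain-text event files might look as follows. This sketch assumes the one-event-per-line format "timestamp x y polarity" (polarity 0 or 1); please double-check against the format description on the dataset page, since the exact layout is our assumption here:

```python
import numpy as np
from io import StringIO

# Minimal loader sketch for plain-text event files, assuming one event
# per line in the form "timestamp x y polarity". Verify this against the
# dataset page before relying on it.

def load_events(source):
    """Return timestamps (float seconds) and an integer (x, y, polarity) array."""
    data = np.loadtxt(source)        # shape (N, 4)
    t = data[:, 0]
    xyp = data[:, 1:].astype(int)
    return t, xyp

# Tiny inline example instead of a real file:
sample = StringIO("0.000100 33 39 1\n0.000150 120 74 0\n")
t, xyp = load_events(sample)
print(len(t), xyp[0].tolist())  # 2 [33, 39, 1]
```

For the large sequences, the rosbag files will be far more efficient than text parsing; the text format is mainly convenient for quick experiments.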

All of this would never have been possible without the great work of my collaborators Elias Mueggler, +Henri Rebecq, +Guillermo Gallego, and +Tobi Delbruck (the lead inventor of the DAVIS) !!!

This work was supported by the DARPA FLA Program, the Google Faculty Research Award, the Qualcomm Innovation Fellowship, NCCR Robotics, SNSF-ERC Starting Grant, the Swiss National Science Foundation, and the UZH Forschungskredit.

All feedback is very welcome!

----------------------------------------------------------
About event cameras and the DAVIS sensor
----------------------------------------------------------

Event cameras are revolutionary vision sensors that overcome the limitations of standard cameras in scenes characterized by high dynamic range and high-speed motion: https://youtu.be/iZZ77F-hwzs . However, as these cameras are still expensive and not yet widespread, we hope that this dataset will accelerate research on event-based algorithms!

Our dataset was recorded with a DAVIS240C sensor, which incorporates a conventional global-shutter camera and an event-based sensor in the same pixel array. This sensor has great potential for high-speed and high-dynamic-range robotics and computer vision applications because it combines the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution (~1 microsecond), and very high dynamic range (120 dB). However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called "events") and synchronous grayscale frames.
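To make the notion of an "event" concrete, here is a simplified, assumed model of event generation (a first-order idealization in the spirit of a simulator, not the actual DAVIS circuit): a pixel fires an event whenever its log-intensity changes by more than a contrast threshold. The function name and the two-frame comparison are ours, for illustration:

```python
import numpy as np

# Simplified event-generation model (an illustrative assumption, NOT the
# DAVIS hardware): a pixel fires when its log-intensity change exceeds a
# contrast threshold C; the sign of the change gives the polarity.

def events_between_frames(log_I0, log_I1, C=0.15):
    """Return (y, x, polarity) for pixels whose log-intensity change exceeds C.

    A real event camera emits asynchronous, timestamped events; comparing
    two frames, as here, is only a first-order model of what a simulator does.
    """
    diff = log_I1 - log_I0
    ys, xs = np.nonzero(np.abs(diff) >= C)
    pol = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return list(zip(ys.tolist(), xs.tolist(), pol.tolist()))

I0 = np.log(np.array([[100., 100.], [100., 100.]]))
I1 = np.log(np.array([[130., 100.], [ 80., 100.]]))
print(events_between_frames(I0, I1))  # [(0, 0, 1), (1, 0, -1)]
```

Working in log-intensity is what gives event cameras their very high dynamic range: the threshold is on relative, not absolute, brightness change.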


We collected great feedback from the community and have included/addressed all the comments in the new version of our SLAM survey paper. You can find the updated manuscript at http://arxiv.org/pdf/1606.05830.pdf, now including a discussion of the role of deep learning in SLAM, acknowledgements, and more! Thanks again to all the contributors!

Does the address given by John Leonard have a video/audio recording? (30 Years of SLAM workshop [slides], July 17, 2015)

Photos from the great workshop "Geometry and Beyond" that took place at the last RSS conference in Ann Arbor, MI, on July 15, 2016. Thank you to all attendees for making it a great event!


Is SLAM solved? Do robots really need SLAM? What are the open problems in SLAM? This is our answer:
http://arxiv.org/pdf/1606.05830.pdf

We'll be happy to collect any comments (from a missing comma or reference to a completely different view of the future) - comments can be posted in this community or at https://scirate.com/arxiv/1606.05830