Profile

Ran Sui
Attended University of Electronic Science and Technology of China

Stream

Ran Sui

Shared publicly
The source code has just been released for ORB-SLAM by Raul Mur-Artal, +J.M. Martínez Montiel and +Mingo Tardós from the University of Zaragoza. This is a real-time monocular SLAM system that works in many different scenarios (indoor, outdoor, small or large scale), and these are probably the most accurate large-scale single-camera (no IMU) SLAM results I've ever seen. It's based on ORB features and a scale-drift-aware bundle adjustment (BA) back-end. Great competition for LSD-SLAM (and it's very interesting to look at the pros and cons of these two approaches). Raul, congrats on getting the code out!
Code: https://github.com/raulmur/ORB_SLAM
Project Webpage: http://webdiis.unizar.es/~raulmur/orbslam/
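For flavour, the core idea behind ORB's matching stage --- binary descriptors compared by Hamming distance --- can be sketched in a few lines. This is a toy BRIEF-style illustration of my own, not ORB-SLAM's actual code; the patch, sampling pattern and all names here are made up:

```python
import numpy as np

# Toy sketch of the idea behind ORB/BRIEF descriptors: each feature is a
# bit string built from pairwise intensity comparisons around a keypoint,
# and features are matched by Hamming distance, which is why the front
# end is so fast.

def binary_descriptor(patch, pairs):
    """One bit per sampled pixel pair: is the first pixel brighter?"""
    return np.array([patch[p] > patch[q] for p, q in pairs], dtype=np.uint8)

def hamming(d1, d2):
    """Number of differing bits between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))

rng = np.random.default_rng(42)
patch = rng.integers(0, 256, (16, 16))
# A fixed random sampling pattern of 32 pixel pairs (as in BRIEF/ORB).
coords = [tuple(c) for c in rng.integers(0, 16, (64, 2))]
pairs = list(zip(coords[::2], coords[1::2]))

d_ref = binary_descriptor(patch, pairs)
d_shifted = binary_descriptor(patch + 10, pairs)  # global brightness offset

# A pure intensity offset leaves every comparison, hence every bit, unchanged.
print(hamming(d_ref, d_shifted))  # -> 0
```

The real descriptor adds an oriented sampling pattern for rotation invariance, and a FAST corner detector chooses where the patches come from.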

Ran Sui

Shared publicly
New work from +Niko Sünderhauf et al. on real-time semantic mapping from a mobile robot using ConvNets. We're going to see a lot more developments like this in robotics soon, now that the software is out there to make deep learning widely accessible and relatively fast. For more info see: https://wiki.qut.edu.au/display/cyphy/Vision-based+Semantic+Mapping
How about putting the paper on arXiv? We did that with our recent ICRA submission on SLAMBench and the ICRA people said it wasn't a problem for the review process (and it's what all the cool deep learning people do ;) )
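One simple way to picture the mapping side of such systems: per-frame ConvNet class probabilities for the same map region can be fused over time with a recursive Bayesian update. A minimal sketch, with invented class names and probabilities (my illustration, not the paper's method):

```python
import numpy as np

# Hypothetical semantic classes and per-frame ConvNet outputs; the fusion
# rule is a standard recursive Bayesian update per map element.
CLASSES = ["chair", "table", "door"]

def fuse(prior, likelihood):
    """Multiply the prior by the new frame's class likelihood, renormalise."""
    post = prior * likelihood
    return post / post.sum()

belief = np.full(len(CLASSES), 1.0 / len(CLASSES))  # uniform prior
# Three (made-up) ConvNet class-probability vectors for the same region:
for frame_probs in ([0.6, 0.3, 0.1], [0.7, 0.2, 0.1], [0.5, 0.4, 0.1]):
    belief = fuse(belief, np.array(frame_probs))

print(CLASSES[int(np.argmax(belief))])  # -> chair
```

Because evidence multiplies, a few consistent frames quickly overwhelm a single noisy detection, which is exactly why fusing over a trajectory beats per-frame labelling.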

Ran Sui

Shared publicly
Interesting work on object discovery within a dense SLAM framework from Lu Ma and +Gabe Sibley. Certainly more evidence that combining localisation, dense mapping, segmentation and recognition into a single real-time system makes each of those individual tasks much easier.
To appear at ECCV 2014; here is the paper: http://maluhomepage.weebly.com/uploads/8/4/1/7/8417990/unsupervised_dense_object_discovery_detection_tracking_and_reconstruction.pdf

Ran Sui

Shared publicly
We have just publicly released SLAMBench, a new open-source SLAM software framework with implementations of the KinectFusion algorithm in multiple languages (CUDA, OpenCL, OpenMP and C++) inside a standardised evaluation harness. SLAMBench supports research in hardware accelerators and software tools by comparing the performance, energy consumption and accuracy of 3D dense SLAM against known synthetic ground truth from the ICL-NUIM dataset.

SLAMBench is an output of the PAMELA project, a collaboration between Manchester, Edinburgh and Imperial College which brings together researchers in computer vision, software performance optimisation, compilers, runtime systems and computer architecture to work on how algorithms, programming tools and architecture must co-evolve as processors become increasingly multi-core and heterogeneous, particularly under the power constraints imposed by mobile applications.

Get the code and more information here:
http://apt.cs.manchester.ac.uk/projects/PAMELA/tools/SLAMBench/

A paper with full details on SLAMBench is available from arXiv:
http://arxiv.org/abs/1410.2167
Abstract (excerpt): "Computer vision algorithms for 3D scene understanding have enormous potential impacts for power constrained robotics application contexts. SLAMBench presents a foundation for quantitative, comparable and validatable experimental research to investigate trade-offs for performance, ..."
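On the accuracy axis, the kind of metric such a harness reports is absolute trajectory error (ATE) of the estimated camera path against ground truth. A minimal sketch of an ATE-style RMSE with invented trajectories (this is not SLAMBench's actual API, just the underlying arithmetic):

```python
import numpy as np

# ATE-style RMSE: root-mean-square Euclidean distance between matched
# estimated and ground-truth trajectory points (both N x 3 arrays).
def ate_rmse(gt, est):
    err = np.linalg.norm(gt - est, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

t = np.linspace(0.0, 1.0, 100)
gt = np.stack([t, np.sin(t), np.zeros_like(t)], axis=1)  # made-up ground truth
est = gt + 0.01                                          # estimate with a 1 cm bias

print(round(ate_rmse(gt, est), 4))  # -> 0.0173 (= 0.01 * sqrt(3))
```

A full harness would additionally report frames per second and energy per frame, so that algorithmic accuracy can be traded off against the cost of running on a given device.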

Ran Sui

Shared publicly
We have what we think are breakthrough results in reconstructing natural looking mosaics from a hand-held event camera (DVS) and no additional sensing.

An event camera is a silicon retina with no global shutter. Instead, each pixel is an independent, asynchronous brightness sensor which reports an event (or "spike") whenever it measures a threshold change in log intensity, along with the microsecond-precise timing of that change.

Our approach to recovering intensity mosaics from this output is essentially a SLAM one: interleaved probabilistic filters track the pure-rotation pose of the camera (incrementally, event by event) while reconstructing a gradient map of the scene --- since each event, given an estimate of camera position and velocity relative to the estimated mosaic, improves our estimate of the component of the mosaic's gradient in the instantaneous direction of motion. The gradient map can then be upgraded to an intensity map using Poisson reconstruction. We think that essentially the same method should extend in the near future to 3D depth-map estimation and visual odometry, or to other sorts of motion assumption, up to generic optical-flow estimation.

We can reconstruct mosaics with both higher resolution than the input event stream and huge dynamic range (the video shows log-intensity mosaics and gradient maps), and we should be able to track extremely rapid motion against these. We think this really proves that the DVS device is doing what it is supposed to do: capturing the really important information in a moving scene --- the changes --- at a hugely reduced data rate compared to standard video, but in such a way that the whole scene can be reconstructed if needed.

This work was led by +Hanme Kim with help from +Ankur Handa (now in Cambridge) and collaboration with Sio-Hoi Ieng and Ryad Benosman from the Institut de Vision in Paris, and will be published at BMVC 2014. Here is the paper: http://www.doc.ic.ac.uk/~ajd/Publications/kim_etal_bmvc2014.pdf
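The "gradient map upgraded to an intensity map" step can be pictured as solving a Poisson-style linear system: every measured gradient is a linear constraint between neighbouring pixels, and the image is recovered up to a global constant. A tiny dense least-squares sketch of that idea on a 6x6 grid (my own toy, not the paper's implementation):

```python
import numpy as np

# Hidden log-intensity "mosaic" and its measured forward-difference gradients.
rng = np.random.default_rng(0)
H, W = 6, 6
truth = rng.normal(size=(H, W))
gx = truth[:, 1:] - truth[:, :-1]   # horizontal gradient measurements
gy = truth[1:, :] - truth[:-1, :]   # vertical gradient measurements

rows, rhs = [], []
def add_eq(i, j, g):
    """Linear constraint I[j] - I[i] = g over the flattened image."""
    r = np.zeros(H * W); r[i] = -1.0; r[j] = 1.0
    rows.append(r); rhs.append(g)

for r in range(H):
    for c in range(W - 1):
        add_eq(r * W + c, r * W + c + 1, gx[r, c])
for r in range(H - 1):
    for c in range(W):
        add_eq(r * W + c, (r + 1) * W + c, gy[r, c])
# Pin the unknown global offset by anchoring one pixel.
anchor = np.zeros(H * W); anchor[0] = 1.0
rows.append(anchor); rhs.append(truth[0, 0])

sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.allclose(sol.reshape(H, W), truth))  # -> True
```

Real reconstructions use fast Poisson solvers rather than dense least squares, and the gradients come with uncertainty from the filters, but the linear-constraint structure is the same.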

Ran Sui

Shared publicly
Amazing!
If you're doing energy optimization in C++ these days, look no further: the openGM2 library is the way to go :)
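For flavour, the kind of problem openGM2 addresses --- minimising a discrete energy made of unary and pairwise terms over a graphical model --- can be solved exactly on a chain by dynamic programming. A minimal sketch with made-up costs (openGM2's own API is not shown here):

```python
import numpy as np

# Chain MRF with 3 nodes and 2 labels:
#   E(x) = sum_i unary[i, x_i] + sum_i pairwise[x_i, x_{i+1}]
unary = np.array([[0.0, 2.0],    # node 0 prefers label 0
                  [2.0, 0.0],    # node 1 prefers label 1
                  [0.1, 0.0]])   # node 2 slightly prefers label 1
pairwise = np.array([[0.0, 1.0],  # Potts-style smoothness cost
                     [1.0, 0.0]])

n, k = unary.shape
cost = unary[0].copy()
back = []
for i in range(1, n):
    # total[a, b]: best cost with node i-1 at label a and node i at label b.
    total = cost[:, None] + pairwise + unary[i][None, :]
    back.append(np.argmin(total, axis=0))
    cost = np.min(total, axis=0)

# Backtrack from the best final label to recover the argmin assignment.
labels = [int(np.argmin(cost))]
for bp in reversed(back):
    labels.append(int(bp[labels[-1]]))
labels.reverse()
print(labels, float(np.min(cost)))  # -> [0, 1, 1] 1.0
```

Here the minimiser pays one smoothness unit to let nodes 0 and 1 keep their preferred labels. Libraries like openGM2 generalise this to loopy graphs and higher-order factors, where exact DP no longer applies and approximate inference (belief propagation, graph cuts, etc.) takes over.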
Basic Information
Gender
Male
Education
  • University of Electronic Science and Technology of China
    Electronic Information Engineering, 2007 - 2011