Kosta Derpanis
248 followers

Post has shared content
If you are at CVPR next week, please join us on Monday for the Workshop on Deep Learning for Visual SLAM, organised by +Ronnie Clark, +Sudeep Pillai, +Alex Kendall, +Will Maddern, +Stefan Leutenegger and myself. We've got a great set of invited speakers as well as submitted talks and posters. The full schedule is available here:
http://visualslam.ai/schedule.html

and the full text for the submitted papers is already online here:
http://openaccess.thecvf.com/CVPR2018_workshops/CVPR2018_W9.py

Post has attachment
The #ECCV2018 Workshop on Geometry meets #DeepLearning call for papers is now available: https://drive.google.com/file/d/16MikZLusjiMGDc5YGYtTcOq1KUrEJgkL/view?usp=sharing #ComputerVision

Submission deadline: July 8, 2018

Please share widely
GMDL ECCV 2018 CFP.pdf

Post has shared content
I am excited that our TRO'17 paper "On-Manifold Preintegration for Real-Time Visual-Inertial Odometry" by +Christian Forster, +Luca Carlone, +Frank Dellaert and myself will receive the 2017 Transactions on Robotics (TRO) best paper award at the next ICRA'18 conference in Brisbane. On this occasion, IEEE made the article open access for the next ten years!
Paper: https://ieeexplore.ieee.org/document/7557075/
Video: https://youtu.be/CsJkci5lfco

Post has shared content
New work on neural network compression. Check out @ml_review’s Tweet: https://twitter.com/ml_review/status/983253943542263808?s=09

Post has shared content
Here's a brief overview of how computer vision and deep learning are being used both to generate DeepFakes and to fight against them. May 2018 edition.

Post has shared content
The paper submission deadline is July 8th, 2018
The 3rd edition of the Geometry Meets #DeepLearning Workshop will take place in #Munich #Germany on September 14, 2018 in conjunction with #ECCV2018. The official workshop website is now up: https://sites.google.com/site/deepgeometry2018

We have an exciting lineup of confirmed speakers (with more to be announced).

Paper submission deadline: July 8th 2018

Please share widely

#ComputerVision

Post has shared content
We have two open positions at the Dyson Robotics Lab at Imperial College London, led by myself and +Stefan Leutenegger and focusing on breakthrough research in 3D vision, learning-based semantic SLAM and vision-guided manipulation. We are an academic lab which works on long-term publishable research while also collaborating with Dyson on novel applications in home robotics.

The first is a Dyson Fellow position (current and recent holders of these positions include +Edward Johns, +Michael Blösch, +Ronnie Clark, +Ankur Handa, +Thomas Whelan and Akis Tsiotsios), where we are looking for a leading post-doctoral researcher in any of our areas of interest to take responsibility for part of our core research mission.

http://www.jobs.ac.uk/job/BJI330/dyson-research-fellow

The second is a Research Engineer position, where we need someone with exceptional engineering skills in applied robotics and computer vision who will play a key role in adding structure to our research efforts and in transferring developments to Dyson's in-house robotics team.

http://www.jobs.ac.uk/job/BJH985/research-engineer

Please feel free to get in touch with any informal questions, or follow the links to the adverts for official details on how to apply.

Post has shared content
We are excited to release our FAST-inspired corner detector for event cameras as open source! Our implementation can process a million events per second on a single core (less than a microsecond per event)!
The code is available on GitHub: https://github.com/uzh-rpg/rpg_corner_events

Reference paper:
E. Mueggler, C. Bartolozzi, D. Scaramuzza
Fast Event-based Corner Detection
British Machine Vision Conference (BMVC), London, 2017.
http://rpg.ifi.uzh.ch/docs/BMVC17_Mueggler.pdf
Video: https://youtu.be/tgvM4ELesgI

Abstract:
Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixel level brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, state-of-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based preprocessing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a microsecond per event) and reduces the event rate by a factor of 10 to 20.
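
For readers who want to experiment with the idea described in the abstract, here is a minimal Python sketch (the released implementation on GitHub is the reference; this is not it). The sensor resolution, the single radius-3 circle, and the arc-length thresholds below are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

# Minimal sketch of Surface-of-Active-Events (SAE) corner detection, assuming
# a 240x180 sensor and a single radius-3 circle; the published detector uses
# two circles and tuned arc lengths, so treat these values as placeholders.
H, W = 180, 240
sae = np.zeros((2, H, W))  # SAE: latest event timestamp per pixel, per polarity

# Pixel offsets of a 16-point Bresenham circle of radius 3, as in FAST.
CIRCLE3 = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
           (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def process_event(x, y, t, polarity, arc_min=3, arc_max=6):
    """Update the SAE with one event and return True if it is flagged as a corner."""
    s = sae[polarity]
    s[y, x] = t  # SAE update: only the most recent timestamp is kept per pixel

    if not (3 <= x < W - 3 and 3 <= y < H - 3):
        return False  # ignore events too close to the image border

    # Timestamps on the circle around the event (comparison operations only).
    ts = np.array([s[y + dy, x + dx] for dx, dy in CIRCLE3])
    newest_first = np.argsort(-ts)

    # Corner test: for some arc length k, the k newest pixels on the circle
    # form one contiguous arc, i.e. recent activity is spatially coherent,
    # as it is at a moving corner.
    n = len(CIRCLE3)
    for k in range(arc_min, arc_max + 1):
        newest = set(newest_first[:k])
        if any({(start + i) % n for i in range(k)} == newest for start in range(n)):
            return True
    return False

# Example usage: feed events as (x, y, timestamp, polarity) tuples.
# is_corner = process_event(120, 90, 0.0123, 1)
```

Note the design point from the abstract: the detector touches only a small neighbourhood of the SAE and uses comparisons rather than image convolutions, which is what makes per-event processing at microsecond latency feasible.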