Could I draw your attention to our new library of annotated synthetic indoor scenes, which we have been working on for a while? We think it could be very useful for anyone interested in scene understanding for robotics, since it lets you generate an unlimited amount of training data from arbitrary viewpoints. The library emerged as part of our approach to joint reconstruction and semantic segmentation with conv-nets, where we have been trying to segment functional categories of objects purely from geometric cues.
Our semantic segmentation module builds on work that +Vijay Badrinarayanan kindly involved me in last year, using his idea of saving pooling indices, inspired by Marc'Aurelio Ranzato's unsupervised learning method. When you do semantic segmentation, it is essential to get the object boundaries right, and saving the pooling indices in your conv-net whenever you use pooling helps quite a lot. You can find the relevant papers at the bottom of the webpage, along with a nice little demo +Alex Kendall created together with +Kesar Breen's Caffe implementation.
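To make the pooling-indices idea concrete, here is a minimal sketch in PyTorch (not the Caffe implementation mentioned above, and not the authors' actual architecture): the encoder records which position each max came from, and the decoder uses those indices to place values back at exactly those positions when upsampling, which is what helps preserve sharp boundaries.

```python
import torch
import torch.nn as nn

# Encoder-side pooling that remembers the argmax locations.
pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
# Decoder-side unpooling that reuses those locations for upsampling.
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 64, 32, 32)      # a feature map from some conv layer
pooled, indices = pool(x)           # indices record where each max came from
restored = unpool(pooled, indices)  # maxima go back to their exact positions

print(pooled.shape)    # torch.Size([1, 64, 16, 16])
print(restored.shape)  # torch.Size([1, 64, 32, 32])
```

The restored map is sparse (zeros everywhere except at the remembered locations), so in a full network the decoder follows the unpooling with convolutions to densify it.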
The library webpage also has, in the publications section, a small presentation I gave this year at a CVPR workshop organised by Ian Reid. This is joint work with +Vijay Badrinarayanan, +Viorica Patraucean, +Simon Stent and +Roberto Cipolla. We are happy to share all the models, and we hope you can use them and give us good feedback on expanding the library and the overall approach.