Vincent Vanhoucke
3,814 followers
Vincent's posts

All-star line-up at CoRL this November! Delighted to welcome our invited speakers: J. Andrew Bagnell, Rodney Brooks, Anca Dragan, Yann LeCun, and Stefanie Tellex.


Nice piece featuring my colleague Esteban Real's work on evolving convnets.

A new publication model for research
Isn't it strange how research still gets written up as 8-page, two-column papers in a serif font, as if anyone were actually going to read them in printed proceedings? Why is the interesting, rich data relegated to the 'supplementary material' section (when it even exists)? Why is it so hard to refer to online resources and code? Why do so many authors today feel the need to provide an accompanying website or blog post?
Let's make that website the centerpiece instead! Distill is a new way to publish research that provides a much richer authoring model and tools to help researchers communicate their work better. The journal is designed entirely to live on the web; it is peer reviewed and registered with the Library of Congress and CrossRef. The constraints: a high bar for content quality, clarity, and educational value. Go check it out!

A better (and very simple) way to inspect deep nets!

I often hear researchers complain about how Google tends to publish a lot about large-scale, comparatively dumb approaches to solving problems. Guilty as charged: think of ProdLM and 'stupid backoff', the 'billion neuron' cat paper, AlphaGo, or the more recent work on obscenely large mixture-of-experts models and large-scale learning-to-learn.
The charge levied against this line of work is that it uses large amounts of resources inefficiently, isn't 'clever', and, as a result, can't be reproduced by anyone else. But that's exactly the point!! The marginal benefit of our exploring computational regimes that every other academic lab can explore just as well is inherently limited. Better to explore the frontier that few others have the resources to reach: see what happens when we go all out, try the simple stuff first, and then, if it looks promising, work backwards and make it more efficient. ProdLM gave us the focus on data for machine translation that made production-grade neural MT possible. The 'cat paper' gave us DistBelief and eventually TensorFlow. That's not waste, that's progress.

Workshop on Deep Learning for Robotic Vision at CVPR. Deadline March 31st!

There goes the neighborhood... Welcome +Anthony Goldbloom and crew!

Some fun work on automatic model design through evolution: you can evolve (last year's) state-of-the-art CIFAR models from scratch using 100 exaFLOPs in under 300 hours, without baking any assumptions into the architecture search. Given that it took us years to get there through 'Graduate Student Descent', it's not a bad start!
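For the curious, here is a minimal sketch of the tournament-style evolutionary loop this line of work uses: keep a population of architectures, repeatedly pick two at random, kill the weaker one, and replace it with a mutated copy of the stronger one. The architecture encoding, mutation set, and fitness function below are placeholder assumptions for illustration only; the actual paper evolves full network graphs and uses trained CIFAR accuracy as fitness.

```python
import random

# Toy architecture encoding: a list of conv layer widths.
# (Placeholder; the real work evolves full DAG-structured networks.)
def random_arch():
    return [random.choice([16, 32, 64]) for _ in range(random.randint(2, 5))]

def mutate(arch):
    """Apply one random mutation: add, remove, or resize a layer."""
    arch = list(arch)
    op = random.choice(["add", "remove", "resize"])
    if op == "add":
        arch.insert(random.randrange(len(arch) + 1), random.choice([16, 32, 64]))
    elif op == "remove" and len(arch) > 1:
        arch.pop(random.randrange(len(arch)))
    else:
        arch[random.randrange(len(arch))] = random.choice([16, 32, 64])
    return arch

def fitness(arch):
    """Placeholder fitness: stands in for 'train on CIFAR, report accuracy'."""
    return sum(arch) / (10.0 * len(arch)) + random.gauss(0, 0.1)

# Initialize and score a small population.
population = [(a, fitness(a)) for a in (random_arch() for _ in range(20))]

for step in range(1000):
    # Tournament: sample two individuals, keep the fitter one as parent,
    # and replace the weaker one with a mutated copy of the parent.
    (a1, f1), (a2, f2) = random.sample(population, 2)
    parent, loser = ((a1, f1), (a2, f2)) if f1 >= f2 else ((a2, f2), (a1, f1))
    child = mutate(parent[0])
    population.remove(loser)
    population.append((child, fitness(child)))

best = max(population, key=lambda p: p[1])
print("best architecture:", best[0])
```

With a real train-and-evaluate step in place of the toy fitness, this loop is embarrassingly parallel, which is what makes it a natural fit for throwing large amounts of compute at the problem.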

Delighted to welcome Panasonic, Microsoft, Facebook, Osaro, Nuro.ai, and the Australian Centre for Robotic Vision as co-sponsors of the First Conference on Robot Learning!