Profile

Vincent Vanhoucke
Works at Google
Lives in San Francisco, CA
3,011 followers | 633,618 views

Stream

Vincent Vanhoucke

Shared publicly  - 
 
Some nice analysis of GoogLeNet deconvolutions.

Vincent Vanhoucke

Shared publicly  - 
 
 
My startup, SimplyCredit, is hiring full-stack or backend engineers! If you or anyone you know is interested in getting in on the ground floor of a venture-backed fintech company looking to change lending and credit for the better, let me know! We're located in San Francisco, near Caltrain/Muni/BART.

Vincent Vanhoucke

Shared publicly  - 
 
Adversarial training is one of my favorite topics in deep learning these days. It relates to the perennial problem in machine learning of 'knowing when you don't know'. In spite of all the Bayesians roaming ML conference halls, we are still doing a terrible job in general at addressing that problem. It's difficult: there are two main ways in which we 'don't know': 1) the model fails, or 2) the data is out of sample. The remedies to each tend to be very different, akin to 1) estimating a posterior vs. 2) estimating a prior, and they often hinge on estimating a complicated (and expensive) partition function well. More importantly, rare are the machine learning classification benchmarks that have a 'none of the above' category, which means there is comparatively little incentive to work on this problem.
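None of this is in the post itself, but a toy sketch of the crudest possible 'none of the above' option makes the problem concrete: reject an input whenever the model's top softmax probability is low (all function names here are hypothetical). Note that adversarial examples are exactly the case where this heuristic fails, since they are misclassified with high confidence.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_with_reject(logits, threshold=0.9):
    """Return the argmax class, or -1 ('none of the above') whenever
    the model's top softmax probability falls below the threshold."""
    probs = softmax(logits)
    top = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    return np.where(top >= threshold, labels, -1)

# A confident prediction passes; a flat, uncertain one is rejected.
confident = np.array([[10.0, 0.0, 0.0]])
uncertain = np.array([[0.1, 0.0, 0.2]])
print(predict_with_reject(confident))  # -> [0]
print(predict_with_reject(uncertain))  # -> [-1]
```

This only handles 'known unknowns' where the model happens to be uncertain; an out-of-sample or adversarial input can sail straight past the threshold.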
 
Blog post on KDnuggets clarifying some misconceptions about adversarial examples: http://www.kdnuggets.com/2015/07/deep-learning-adversarial-examples-misconceptions.html
Google scientist clarifies misconceptions and myths around Deep Learning adversarial examples, including that they do not occur in practice, that Deep Learning is more vulnerable to them, that they can be easily solved, and that human brains make similar mistakes. By Ian Goodfellow (Google).
 
+Emre Safak that's one way to deal with 'known unknowns', not necessarily 'unknown unknowns', let alone 'hostile unknowns'.

Vincent Vanhoucke

Shared publicly  - 
 
Female guest: 'I wrote the algorithm.'
Male radio host, sounding incredulous: 'You did?'
Hey +NPR, you're not helping.

Vincent Vanhoucke

Shared publicly  - 
 
This is the most fun we've had in the office in a while. We've even made some of those 'Inceptionistic' art pieces into giant posters. Beyond the eye candy, there is actually something deeply interesting in this line of work: neural networks have a bad reputation for being strange black boxes that are opaque to inspection. I have never understood those charges: any other model (GMM, SVM, Random Forests) of sufficient complexity for a real task is completely opaque for very fundamental reasons: its non-linear structure makes it hard to project the function it represents back into the input space and make sense of it. Not so with backprop, as this blog post shows eloquently: you can query the model and ask what it believes it is seeing, or 'wants' to see, simply by following gradients. This 'guided hallucination' technique is very powerful, and the gorgeous visualizations it generates are very evocative of what's really going on in the network.
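The 'following gradients' idea fits in a few lines. This is not the actual DeepDream code (which runs gradient ascent through a full convnet on images); it is just a toy sketch of the same trick on a random one-layer net, with every name my own invention:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # one toy layer: 8 units, 16-dim input

def activation_and_grad(x, unit):
    """One unit's softplus activation and its gradient w.r.t. the input.
    (Softplus rather than relu just so the toy gradient never dies.)
    d/dx softplus(w . x) = sigmoid(w . x) * w."""
    pre = W[unit] @ x
    act = np.logaddexp(0.0, pre)          # numerically stable softplus
    grad = (1.0 / (1.0 + np.exp(-pre))) * W[unit]
    return act, grad

def dream(x, unit, steps=100, lr=0.05):
    """Gradient ascent on the *input*: nudge x toward whatever most
    excites the chosen unit -- the 'guided hallucination' trick."""
    x = x.copy()
    for _ in range(steps):
        _, g = activation_and_grad(x, unit)
        x += lr * g
    return x

x0 = rng.standard_normal(16)
before, _ = activation_and_grad(x0, unit=3)
after, _ = activation_and_grad(dream(x0, unit=3), unit=3)
print(before < after)  # True: the unit fires more on the dreamed input
```

Swap the toy layer for a deep net and the input for an image, and the same loop produces the hallucinated visualizations.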
 
+marley nicolson Just follow the instructions in the ipython notebook, "ipython notebook dream.ipynb"

Vincent Vanhoucke

Shared publicly  - 
 
I'll be at ICML in a few weeks. Come and say hi if you're there!
 
Are you going there direct from CDG? Let's grab a drink if you come through Paris.

Vincent Vanhoucke

Shared publicly  - 
 
We're at the point where state-of-the-art machine learning can run in real-time on your phone. The possibilities are endless.
 
The power of Neural Networks, on your phone

Today we announced that the Google Translate app now does real-time visual translation of 20 more languages. So the next time you're translating a foreign menu or sign in Prague with the latest version of Google's Translate app, you're now using a deep neural net. Learn how it works, all on your phone and without an Internet connection, on the Google Research blog, linked below.
 
The learning algorithm isn't running on the phone. Instead, the network is trained offline and then (I'm guessing here) compressed before being downloaded onto the phone.
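If the 'compressed' guess is right, one common trick is 8-bit linear quantization of the weight matrices. A minimal sketch under my own assumptions, not necessarily what the Translate app actually does:

```python
import numpy as np

def quantize_int8(w):
    """Linear 8-bit quantization: store int8 codes plus one float scale.
    Roughly a 4x size reduction versus float32, at some precision cost."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
q, s = quantize_int8(w)
# Worst-case reconstruction error is about half a quantization step.
err = float(np.abs(dequantize(q, s) - w).max())
```

The phone then only needs fast int8 arithmetic (or a cheap dequantize step) at inference time, with no training machinery on the device.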

Vincent Vanhoucke

Shared publicly  - 
 
A big shout out to this year's ICML organizers:
- Opening remarks dispatched in under 20 min,
- Good coffee and snacks,
- Plenty of space to walk between posters and booths, all centrally located,
- Working A/V and WiFi,
- Name badges you can read from BOTH SIDES!
I'm always surprised how often conferences fail at providing the basics. Nice job!
 
I honestly found the poster space tight and subpar. For the rest, though, I totally agree!

Vincent Vanhoucke

Shared publicly  - 
 
I'm really enjoying all the creative #deepdream artwork that the interwebs have been generating.
Lots of people are asking what's up with all the eyes and #dogslug images: why does the system want to see dogs everywhere?
The answer is simple and very boring: it turns out that the ImageNet dataset most of these models are trained on consists of roughly one-third dog images, merely for historical reasons: it was derived from a previous academic dataset focused on comparing dog breeds.
When a third of everything you've ever seen in your lifetime looks like a dog, dogs are predominantly what you hallucinate in your dreams :)
 
Effectively this is your model's prior. That's not necessarily ideal given your use cases.

Vincent Vanhoucke

Shared publicly  - 
 
A geeky deck about street light engineering in San Francisco. Left turns onto one-way streets have always been a puzzling feature of US traffic laws to me. TIL they are actually pretty dangerous.
Every fellow San Franciscan will undoubtedly recognize the Junipero / Sloat / Portola intersection from Hell on slide #12.

Vincent Vanhoucke

Shared publicly  - 
 
This is a very neat UI.
 
If you like Wikipedia, try out a site I made for finding all the articles available about some topic.
What is this? This is an experimental interface for finding Wikipedia articles to read when you're doing in-depth research on a topic. Type a topic above to see links to Wikipedia articles that match — English only, for now. As you type, the green bar below the input box will fill with matching ...

Vincent Vanhoucke

Shared publicly  - 
 
 
Are you training recurrent neural networks to produce sequences of tokens (like machine translation or image captioning)?
If so, you should read our recent arxiv paper http://arxiv.org/abs/1506.03099 in which we propose a scheduled sampling approach to improve inference in multi-step prediction tasks by reducing the gap between how you train your model and how you use it for inference.
This is joint work with +Oriol Vinyals , +Navdeep Jaitly , and Noam Shazeer. 
Abstract: Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning. The current approach to training them consists in maximizing the likelihood of each token in the sequence given the ...
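In spirit, the schedule in the paper works like this, sketched here with hypothetical names (the real implementation flips the coin inside the training graph, per token):

```python
import random

def decoder_inputs(target, predict, eps, rng=None):
    """Decoder inputs for one training example under scheduled sampling.
    Step t normally sees the ground-truth previous token target[t-1]
    (teacher forcing); with probability 1 - eps it instead sees the
    model's own previous prediction, so training looks more and more
    like inference as eps decays. `predict(tok)` stands in for one
    decoder step and is purely illustrative."""
    rng = rng or random.Random(0)
    inputs = ["<bos>"]
    for t in range(1, len(target)):
        if rng.random() < eps:
            inputs.append(target[t - 1])        # feed the ground truth
        else:
            inputs.append(predict(inputs[-1]))  # feed the model's output
    return inputs

def eps_schedule(step, total_steps):
    """Linear decay from pure teacher forcing toward free running."""
    return max(0.0, 1.0 - step / total_steps)

toy_predict = lambda tok: "<pred>"
print(decoder_inputs(["a", "b", "c"], toy_predict, eps=1.0))
# -> ['<bos>', 'a', 'b']: eps = 1.0 is ordinary teacher forcing
print(decoder_inputs(["a", "b", "c"], toy_predict, eps=0.0))
# -> ['<bos>', '<pred>', '<pred>']: eps = 0.0 is free-running decoding
```

Decaying eps over training gradually exposes the model to its own mistakes, which is exactly the train/inference gap the paper targets.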
People
Have them in circles
3,011 people
Work
Employment
  • Google
    Tech Lead / Manager, present
Story
Introduction

Please refer to my website.

Places
Currently
San Francisco, CA
Previously
Reviews
Very good food. Bring a heavy jacket in the winter months; this place is freezing and the waiters all wear one.
Public - 7 months ago
An avalanche of failures in the peak Holiday season. OpenTable booking broken, you have to go on site. Slow service. Inexperienced, confused looking staff. Tepid potato leek soup, much too salty to eat. Overcooked pasta and fries drowning in salt as well. This might have been a great restaurant in the past, but it's seemingly poorly managed.
Public - 7 months ago
Fantastic food. Ever surprising menu. (edit: no longer the Incanto I remember)
Public - a year ago
Indifferent, borderline hostile service. Appetizing menu but poor execution.
Public - a year ago
23 reviews
A mean hot chocolate.
Public - 12 months ago
By far the best meal I've had in Trouville. Don't be in a rush: we were the first in the restaurant that day and were there for an hour and 45 minutes.
Public - a year ago
Very superficial assessment. Tech missed a blatantly ruptured pipe which was pouring air into our crawl space and causing our heating issues. Emails bounce back.
Public - 3 years ago