Vincent Vanhoucke
3,810 followers
Vincent's posts

Post has attachment
When are we going to have self-driving cars at last?
How about today. Does today work for you?

Post has attachment
Some very cool new work from Pierre, Corey, Jasmine and Sergey on self-supervised learning using multiple viewpoints of a scene to learn semantic representations. Paper here: https://arxiv.org/abs/1704.06888
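For anyone curious what the multiple viewpoints buy you, here is a minimal sketch of the general idea, assuming a triplet-style objective (the function name, margin, and toy embeddings are illustrative, not taken from the paper): embeddings of the same moment seen from two cameras are pulled together, while temporally distant frames from the same camera are pushed apart.

```python
import numpy as np

def multiview_triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: the same moment seen from another viewpoint (positive)
    should embed closer to the anchor than a temporally distant frame
    from the same viewpoint (negative)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

# Toy embeddings: 8 frames, 32-dimensional embedding each.
rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 32))                    # time t, camera 1
positive = anchor + 0.05 * rng.normal(size=(8, 32))  # time t, camera 2
negative = rng.normal(size=(8, 32))                  # distant time, camera 1

print(multiview_triplet_loss(anchor, positive, negative))
```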

Post has attachment
Heading to ICLR! I'll also be giving a talk at the École Polytechnique in Palaiseau on Friday.

Post has attachment
Someone beat me to V-GAN, I am so sad.

Post has shared content
You know those CSI episodes where they reconstruct the perpetrator's face from the 2-pixel-wide reflection on the victim's cornea via a surveillance cam? Yeah, those...
At some point soon, if not already, reasonable people (the judiciary, law enforcement) will quite understandably start believing that we actually have these kinds of superpowers, thanks to advances in conditional generative models.
At some point, actually ... right now, we as a community have to start explaining that:
1) No.
2) This generated information doesn't come from a magical source of truth; it's all priors. In other words, the generated data doesn't add information about the image it's conditioned on: all new information comes from our (possibly hella incomplete, hella biased) model of the world.
3) We definitely don't know, at a very fundamental level, how to attach calibrated probabilities to those generated images, hence they're essentially worthless as either positive or negative evidence.
When I talk to non-experts, I describe these generated images as hallucinations, because I think that evokes all the right messages: they come from our 'mental model', not from the sensory input; they're weakly connected to reality, and not to be trusted for anything serious.
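A toy numerical illustration of point 2 (entirely mine, not from the post): two incompatible scenes can produce exactly the same low-resolution observation, so whatever detailed reconstruction a conditional model emits is chosen by its prior, not by the evidence.

```python
import numpy as np

rng = np.random.default_rng(1)

def observe(img):
    """A brutal 'surveillance cam': average each half down to one pixel."""
    return np.array([img[:, :8].mean(), img[:, 8:].mean()])

# Two very different 16x16 scenes...
scene_a = rng.integers(0, 256, size=(16, 16)).astype(float)
scene_b = 255.0 - scene_a
# ...nudged so that they yield the identical 2-pixel observation.
scene_b -= (observe(scene_b) - observe(scene_a)).repeat(8)[None, :]

print(np.allclose(observe(scene_a), observe(scene_b)))  # True
print(np.abs(scene_a - scene_b).mean())                 # yet the scenes differ a lot
# Any detailed "reconstruction" consistent with those 2 pixels is picked
# by the model's prior, not by information in the observation.
```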
Rui Huang et al. (https://arxiv.org/pdf/1704.04086.pdf) can now generate frontal-view faces from side-view input. They call their net TP-GAN and include, e.g., a symmetry loss in addition to the adversarial loss in their training. Work on GANs is exploding these days, with beautiful results.
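As a rough sketch of what a symmetry term can look like (the actual TP-GAN objective combines several losses; this single numpy term is only illustrative): penalize the difference between a generated frontal face and its horizontal mirror.

```python
import numpy as np

def symmetry_loss(frontal):
    """L1 distance between a generated frontal face and its horizontal mirror.
    Frontal faces are roughly left-right symmetric, so this penalizes
    lopsided artifacts in the synthesized view."""
    return np.abs(frontal - frontal[:, ::-1]).mean()

fake_frontal = np.random.default_rng(2).random((64, 64))
print(symmetry_loss(fake_frontal))  # would be added to the adversarial term
```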

Post has attachment
Federated Learning is one of the things I'm really excited about these days. It has the potential to profoundly change how we do machine learning and how we approach privacy. It makes it possible to securely learn models, both shared across users and personalized for you, while the data remains on your phone, and with very strong mathematical guarantees that prevent model updates from communicating any private information. The mathematical and engineering sophistication required to make this happen in a practical manner should delight any ML and/or crypto enthusiast. If you're intrigued, read on:
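A minimal sketch of the underlying federated averaging idea, assuming plain linear models and ignoring the secure-aggregation and privacy machinery that makes the real system interesting (all names and numbers below are illustrative): each client trains locally on data that never leaves the device, and the server only ever sees and averages the resulting weights.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=10):
    """One client's training pass on data that never leaves the device
    (plain linear regression via gradient descent, for illustration)."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, client_data):
    """Server step: average the clients' updated weights, weighted by how
    many examples each client holds. Only weights travel; raw data stays
    local. (Real deployments add secure aggregation on top.)"""
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    updates = np.stack([local_update(global_w, X, y) for X, y in client_data])
    return (updates * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three toy clients with private data drawn from the same underlying model.
rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):
    w = federated_averaging(w, clients)
print(w)  # approaches [2, -1] without the server ever seeing the data
```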

Post has shared content
All you ever wanted to know about TPUs!
ISCA paper preprint about Google's Tensor Processing Unit

Paper: https://drive.google.com/file/d/0Bx4hafXDDq2EMzRNcy1vSUxtcEk/view
Blog post by +Norm Jouppi: https://cloudplatform.googleblog.com/2017/04/quantifying-the-performance-of-the-TPU-our-first-machine-learning-chip.html

Last June at Google I/O, +Sundar Pichai showed an example of a new type of custom ASIC that Google had developed to accelerate machine learning workloads, called a Tensor Processing Unit (TPU), but didn't give very many details. The TPU runs large neural networks very efficiently and with low latency across many Google products, including Search, Photos, and Translate; it also powered the AlphaGo system used during the match against Lee Sedol in Korea last March, and it delivers 92 trillion operations per second (TOPS) per chip with a modest power budget. I'm happy to announce that we now have a detailed paper, "In-Datacenter Performance Analysis of a Tensor Processing Unit", that will appear at this year's International Symposium on Computer Architecture (ISCA) conference in Toronto in June. Today we've published a pre-print of the paper and a companion blog post, and +David Patterson will be giving a talk about the TPU at the Computer History Museum in Mountain View this afternoon (https://sites.google.com/corp/view/naeregionalsymposium; sadly, no more space is available).
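A quick back-of-envelope check of that headline number, using the figures reported in the paper (a 256x256 array of 8-bit multiply-accumulate units clocked at 700 MHz, with each multiply-accumulate counted as two operations):

```python
macs = 256 * 256     # 65,536 8-bit MAC units in the systolic array
ops_per_mac = 2      # one multiply + one add
clock_hz = 700e6     # 700 MHz
print(macs * ops_per_mac * clock_hz / 1e12)  # ~91.8, i.e. the quoted 92 TOPS
```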

Various news articles:
https://www.nextplatform.com/2017/04/05/first-depth-look-googles-tpu-architecture/
https://www.wired.com/2017/04/building-ai-chip-saved-google-building-dozen-new-data-centers/
Hacker News discussion: https://news.ycombinator.com/item?id=14043059

Post has attachment
Just WOW!

Post has attachment
This is the first time I've come across a plausible, detailed, and well-substantiated explanation of how life might have started on Earth. A fascinating read, especially because the proposed mechanism implies that sustained life actually started multiple times, from initial conditions that are plausibly common in the universe. It also unfortunately suggests that evolution beyond the bacterial stage happened successfully exactly once, which is not great news for the ubiquity of intelligent life out there, given how much opportunity there should have been for that evolutionary step to take place.

Post has attachment
I'll be giving a keynote talk on 'Generative Adversarial Robotics' on May 1st at the Berkeley Symposium on Robot Learning.