Vincent's posts
Post has attachment
Public
This is a work of fiction. Any resemblance to actual persons, cats, AIs, or actual events is purely coincidental.
Post has attachment
Public
Huge congratulations to +Sergey Levine and +Oriol Vinyals for being named to MIT Technology Review's 35 Innovators Under 35!!
Post has shared content
Public
/popcorn
NVIDIA has called out Intel for exaggerating its chips' performance on neural-network training by using an outdated version of a benchmark.
Post has attachment
Public
When I joined Stanford's Compression and Classification Group in 1999, it quickly became evident to me that research in signal compression was at an impasse: it was clear at the time that any significant gains in bandwidth would require moving towards more semantic interpretations of images and videos, and in spite of standards already moving towards enabling these 'higher-level' coding methods, nobody really knew how to go about them.
Fast forward to today: I'm very excited to see deep nets make a significant dent in the problem, while enabling seamless, practical variable-rate coding and bit-by-bit progressive decoding, with huge gains over JPEG to boot.
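For intuition, here is a minimal sketch of why residual-style neural codes are naturally progressive and variable-rate: each step encodes what the decoder still gets wrong, so any prefix of the bit packets decodes to a (coarser) reconstruction. The encoder, decoder, and binarize functions are hypothetical stand-ins for illustration, not the released model's API.

import numpy as np

def binarize(codes):
    # Hard sign quantization to +/-1 bits; enough for this inference-time
    # sketch (training would use a straight-through or stochastic variant).
    return np.sign(codes)

def compress(image, encoder, decoder, num_steps):
    # Each step encodes the current residual into a small packet of bits,
    # so every extra packet refines the reconstruction: progressive
    # decoding comes for free.
    residual = image
    reconstruction = np.zeros_like(image)
    packets = []
    for _ in range(num_steps):
        bits = binarize(encoder(residual))
        packets.append(bits)
        reconstruction = reconstruction + decoder(bits)
        residual = image - reconstruction
    return packets  # truncate this list anywhere for a lower bitrate

def decompress(packets, decoder):
    # Summing the decoded packets reverses the residual encoding above;
    # passing only the first k packets yields the k-step reconstruction.
    return sum(decoder(bits) for bits in packets)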
Post has attachment
Public
Today, we're releasing two large datasets for robotics research:
Grasping: A collection of 650k grasp attempts, data used in: http://arxiv.org/abs/1603.02199
Push: A collection of 59k examples of pushing motions, data used in: http://arxiv.org/abs/1605.07157
Both datasets contain RGB-D views of the arm, gripper, and objects, along with actuation and position parameters. They were collected in a controlled environment using a wide variety of everyday objects, some of which were held out for evaluation. Enjoy!
Credits: +Sergey Levine, +Chelsea Finn and +Laura Downs.
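For anyone who wants to poke at the records programmatically, here is a hypothetical sketch that assumes the data ships as TFRecords of tf.train.Example protos. The feature names below (image/encoded, gripper/pose, etc.) are illustrative guesses, not the datasets' actual schema; check the dataset documentation for the real field names.

import tensorflow as tf

def parse_record(serialized_example):
    # NOTE: all feature names here are hypothetical, for illustration only.
    features = tf.parse_single_example(
        serialized_example,
        features={
            'image/encoded': tf.FixedLenFeature([], tf.string),   # RGB frame
            'depth/encoded': tf.FixedLenFeature([], tf.string),   # depth frame
            'gripper/pose': tf.FixedLenFeature([7], tf.float32),  # xyz + quaternion
            'grasp/success': tf.FixedLenFeature([], tf.int64),    # outcome label
        })
    image = tf.image.decode_png(features['image/encoded'], channels=3)
    depth = tf.image.decode_png(features['depth/encoded'], channels=1,
                                dtype=tf.uint16)
    return image, depth, features['gripper/pose'], features['grasp/success']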
Post has attachment
Public
Reminder: if you have burning questions like "How much brain power can you cram into a single desk?" or "What's up with the ponies?", head over to Reddit soon and AUA:
https://www.reddit.com/r/MachineLearning/comments/4w6tsv/ama_we_are_the_google_brain_team_wed_love_to/

