Jeff Dean
Google Senior Fellow
51,347 followers

Jeff's posts

In case you haven't seen it, the Distill Journal (http://distill.pub), created and edited by Google Brain team members +Christopher Olah and +Shan Carter, launched earlier this week. It's a totally new kind of journal and presentation style for machine learning research: online, interactive, and encouraging lucid, clear presentations of research that go beyond a Gutenberg-era presentation medium for ML topics. I'm really excited about this, and Chris and Shan have put a ton of work into it. The reactions so far have been quite amazing and positive:

"Yes. Yes. Yes. A million times yes. I can't count how many times I've invested meaningful time and effort to grok the key ideas and intuition of a new AI/DL/ML paper, only to feel that those ideas and intuitions could have been explained much better, less formally, with a couple of napkin diagrams.... I LOVE what Olah, Carter et al are trying to do here." (Hacker News)

"I really love this effort. Research papers are low bandwidth way to get information into our brains..." (Hacker News)

"finally, someone gets it!! we need to COMMUNICATE research CLEARLY" (Twitter)

"My gosh, interactive dataviz is now the core of an academic journal. Thank you @shancarter & @ch402 & @distillpub!" (Twitter)

"This new machine learning journal is seriously exciting; an emphasis on clear explanation & interactive illustration" (Twitter)

"'Research Debt' - I am curious where @distillpub will go but I really like this essay by @ch402 & @shancarter" (Werner Vogels, CTO of Amazon, Twitter)

Blog posts announcing Distill:

Google Research: https://research.googleblog.com/2017/03/distill-supporting-clarity-in-machine.html

OpenAI: https://blog.openai.com/distill/

Y Combinator: https://blog.ycombinator.com/distill-an-interactive-visual-journal-for-machine-learning-research/

DeepMind: https://deepmind.com/blog/distill-communicating-science-machine-learning/

Chris Olah's blog: http://colah.github.io/posts/2017-03-Distill/



I'm very excited about the work our group (g.co/brain) is doing in various areas of medical imaging. Today, we published a preprint of a paper titled "Detecting Cancer Metastases on Gigapixel Pathology Images" by Yun Liu, Krishna Gadepalli, Mohammad Norouzi, George E. Dahl, Timo Kohlberger, Aleksey Boyko, Subhashini Venugopalan, Aleksei Timofeev, Philip Q. Nelson, Greg Corrado, Jason D. Hipp, Lily Peng, and Martin C. Stumpe.

Some key statistics from the paper used to evaluate the effectiveness of this work:

Tumor localization score (FROC, "find all the tumors"):
0.89 (our model)
0.73 (human pathologists with infinite time)

Sensitivity at 8 false positives:
0.92 (our model)
0.73 (human pathologists with infinite time)
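To make the second metric concrete, here's a toy sketch of how "sensitivity at N false positives" can be computed. The function name and data layout are mine, not from the paper: detections are taken in decreasing score order, and we report the fraction of tumors found by the time the false-positive budget is exhausted.

```python
def sensitivity_at_fp(detections, n_tumors, max_fp):
    """Fraction of tumors found before exceeding max_fp false positives.

    detections: list of (score, tumor_id) pairs, where tumor_id is None
    for a false positive and an integer identifying the matched tumor
    otherwise. Detections are processed in decreasing score order.
    """
    found = set()
    false_positives = 0
    for score, tumor_id in sorted(detections, key=lambda d: -d[0]):
        if tumor_id is None:
            false_positives += 1
            if false_positives > max_fp:
                break  # budget spent; stop lowering the threshold
        else:
            found.add(tumor_id)
    return len(found) / n_tumors

# Toy example: 5 tumors total, allow 2 false positives.
dets = [(0.99, 0), (0.95, None), (0.90, 1), (0.80, None),
        (0.70, 2), (0.60, None), (0.50, 3)]
print(sensitivity_at_fp(dets, 5, 2))  # → 0.6
```

The real FROC score in the paper averages sensitivity over several false-positive rates per slide; this sketch shows a single operating point.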

(It's also worth pointing out that Yun Liu is a member of the Google Brain Residency program: g.co/brainresidency)

This work follows on our earlier work on detection of diabetic retinopathy in retinal images.

Pathology blog post: https://research.googleblog.com/2017/03/assisting-pathologists-in-detecting.html
Paper preprint:
https://arxiv.org/abs/1703.02442

Earlier blog post on diabetic retinopathy work: https://research.googleblog.com/2016/11/deep-learning-for-detection-of-diabetic.html

Edit: update paper preprint link to different URL (but same content)
Edit2: updated to use arxiv.org URL now that it's up on Arxiv.


Daniel Bogan runs a site called usesthis.com that does interviews with different people from all kinds of professions about their "work setup" (everything from computer scientists to chefs to beekeepers). He asked me if I'd participate, and here's the result:

https://usesthis.com/interviews/jeff.dean/

Browsing the other interviews is kind of fun, just to hear about what tools are useful in other professions:

https://usesthis.com/interviews/
https://usesthis.com/categories/

Thanks for running this site, Daniel!

Following up on my earlier post about ICLR papers, the ICLR best paper awards were just announced at http://www.iclr.cc/doku.php?id=iclr2017:schedule, and the Google Brain team is represented on two of the three best papers:

Understanding deep learning requires rethinking generalization, by Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals, http://openreview.net/forum?id=Sy8gdB9xx

and

Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, by Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar, http://openreview.net/forum?id=HkwoSDPgg

Congrats to these authors!

Our first TensorFlow Developers Summit is starting in about 15 minutes, with an exciting day-long agenda! We have about 400 developers and researchers attending live in Mountain View, and many more watching on the livestream around the world. Livestream link here, and the videos will be posted on YouTube after a delay of a couple of hours.

Nice to see the release of Cloud Spanner today! I helped design and implement Spanner, along with many other people, to provide a strongly consistent, geographically distributed database system that could be used in our products. Google's advertising system was one of Spanner's primary early customers. Now the same system can be used externally.

Engineers at Quizlet did a nice comparison of Cloud Spanner's performance and scaling characteristics against MySQL's:

https://quizlet.com/blog/quizlet-cloud-spanner

A detailed paper about Spanner appeared in OSDI 2012:

https://research.google.com/archive/spanner.html
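A key ingredient of Spanner's strong consistency, described in that paper, is TrueTime: an API that exposes bounded clock uncertainty, plus a "commit wait" rule built on it. Here's a toy simulation of that rule; the names and the EPSILON bound are mine for illustration (real TrueTime derives its uncertainty from GPS and atomic clocks), so treat this as a sketch of the idea, not Spanner's implementation.

```python
import time

EPSILON = 0.005  # assumed clock uncertainty bound, in seconds


def tt_now():
    """Toy TrueTime: the true time is guaranteed to lie in
    [earliest, latest]."""
    t = time.monotonic()
    return (t - EPSILON, t + EPSILON)


def commit_wait():
    """Pick a commit timestamp and wait until it is guaranteed to be
    in the past everywhere, per the Spanner paper's commit-wait rule:
    s = TT.now().latest, then block until TT.now().earliest > s."""
    s = tt_now()[1]
    while tt_now()[0] <= s:
        time.sleep(0.001)
    return s
```

The wait costs roughly twice the uncertainty bound per commit, which is why the paper puts so much effort into keeping that bound small.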

To all the people continuing to improve Spanner today, congrats on the launch! Great work!

I'm excited that the Google Brain team (g.co/brain) will have a decent presence at ICLR 2017 (http://www.iclr.cc), with 20 papers (including 4 papers chosen for oral presentation), plus an additional 4 papers in the workshop track. Of these, 9 of the papers have co-authors from our Brain Residency program (g.co/brainresidency), and another 8 have co-authors who were interns in our group. The Brain-affiliated papers are below:

- Understanding deep learning requires rethinking generalization, by Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals, http://openreview.net/forum?id=Sy8gdB9xx (Intern co-author), Oral
- Neural Architecture Search with Reinforcement Learning, by Barret Zoph and Quoc Le, http://openreview.net/forum?id=r1Ue8Hcxg (Brain Resident co-author), Oral
- Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data, by Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar, http://openreview.net/forum?id=HkwoSDPgg, Oral
- Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic, by Shixiang (Shane) Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine, http://openreview.net/forum?id=SJ3rcZcxl (Intern co-author), Oral
- Adversarial Machine Learning at Scale, by Alexey Kurakin, Ian J. Goodfellow, Samy Bengio, http://openreview.net/forum?id=BJm4T4Kgx
- Density estimation using Real NVP, by Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio, http://openreview.net/forum?id=HkpbnH9lx (Intern co-author)
- Learning to Remember Rare Events, by Lukasz Kaiser, Ofir Nachum, Aurko Roy, Samy Bengio, http://openreview.net/forum?id=S1yTEt9ex (Brain Resident co-author)
- Categorical Reparameterization with Gumbel-Softmax, by Eric Jang, Shixiang (Shane) Gu, Ben Poole, http://openreview.net/forum?id=rkE3y85ee (Intern co-author)
- HyperNetworks, by David Ha, Andrew Dai, Quoc V. Le, http://openreview.net/forum?id=rkpACe1lx (Brain Resident co-author)
- Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer, by Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean, http://openreview.net/forum?id=B1ckMDqlg (Brain Resident co-author)
- Learning a Natural Language Interface with Neural Programmer, by Arvind Neelakantan, Quoc V. Le, Martín Abadi, Andrew McCallum, Dario Amodei, http://openreview.net/forum?id=ry2YOrcge (Intern co-author)
- Deep Information Propagation, by Samuel Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein, http://openreview.net/forum?id=H1W1UN9gg (Brain Resident co-author)
- Decomposing Motion and Content for Natural Video Sequence Prediction, by Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee, http://openreview.net/forum?id=rkEFLFqee
- Capacity and Trainability in Recurrent Neural Networks, by Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo, http://openreview.net/pdf?id=BydARw9ex (Brain Resident co-author)
- Unrolled Generative Adversarial Networks, by Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein, http://104.155.136.4:3000/forum?id=BydrOIcle (Brain Resident co-author)
- A Learned Representation For Artistic Style, by Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur, http://openreview.net/forum?id=BJO-BuT1g (Intern co-author)
- Identity Matters in Deep Learning, by Moritz Hardt, Tengyu Ma, http://openreview.net/forum?id=ryxB0Rtxx
- Latent Sequence Decompositions, by William Chan, Yu Zhang, Quoc Le, Navdeep Jaitly, http://104.155.136.4:3000/forum?id=SyQq185lg (Intern co-author)
- Improving policy gradient by exploring under-appreciated rewards, by Ofir Nachum, Mohammad Norouzi, Dale Schuurmans, http://openreview.net/forum?id=ryT4pvqll (Brain Resident co-author)
- Adversarial Training Methods for Semi-Supervised Text Classification, by Takeru Miyato, Andrew M. Dai, Ian Goodfellow, https://openreview.net/forum?id=r1X3g2_xl
- Adversarial examples in the physical world, by Alexey Kurakin, Ian J. Goodfellow, Samy Bengio, http://openreview.net/forum?id=S1OufnIlx, Workshop
- Short and Deep: Sketching and Neural Networks, by Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar, http://openreview.net/forum?id=r1br_2Kge, Workshop
- Unsupervised Perceptual Rewards for Imitation Learning, by Pierre Sermanet, Kelvin Xu, Sergey Levine, http://openreview.net/pdf?id=Bkul3t9ee (Brain Resident co-author), Workshop
- Tuning Recurrent Neural Networks with Reinforcement Learning, by Natasha Jaques, Shixiang (Shane) Gu, Richard E. Turner, Douglas Eck, http://openreview.net/forum?id=BJ8fyHceg (Intern co-author), Workshop

You can find the full list of accepted papers at ICLR 2017 here:
https://openreview.net/group?id=ICLR.cc/2017/conference

Edit: Added intern identification to one of the papers


+Esteban Real, +Jon Shlens, Xin Pan, and +Vincent Vanhoucke in the Google Brain team and Stefano Mazzocchi in another team at Google Research just released a new public dataset called YouTube-BoundingBoxes, consisting of 5 million human annotated bounding boxes across 380,000 video segments.

Deep learning models that handle video, rather than just static images, are likely to be the next frontier for computer vision research, and this large dataset should be an important new tool for assessing the effectiveness of a wide variety of video models on localization, detection, and object tracking.

An associated arXiv paper describes the dataset and the methodology used to collect it in more detail: https://arxiv.org/pdf/1702.00824v1.pdf
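As a rough illustration of what an annotation record looks like, here's a small parser for one CSV row. The column order and the sample row are my assumptions based on the paper's description (coordinates normalized to [0, 1]; the video ID shown is made up), so check the released files for the authoritative format.

```python
from collections import namedtuple

# Assumed column layout of a YT-BB annotation row:
# video id, timestamp (ms), class id, class name, object id,
# presence flag, then normalized box coordinates.
Box = namedtuple(
    "Box",
    "youtube_id timestamp_ms class_id class_name object_id present "
    "xmin xmax ymin ymax",
)


def parse_row(line):
    """Parse one comma-separated annotation row into a Box."""
    f = line.strip().split(",")
    return Box(f[0], int(f[1]), int(f[2]), f[3], int(f[4]),
               f[5] == "present", *map(float, f[6:10]))


# Hypothetical example row (video ID invented for illustration).
row = "AB3c_xYz012,24000,5,dog,0,present,0.214,0.571,0.180,0.770"
box = parse_row(row)
```

With ~5 million such rows over 380,000 segments, even this simple per-row parsing makes it easy to stream annotations by video and timestamp.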

Nice work, everyone!

The Google Brain team — Looking Back on 2016

I wrote up a blog post about the work the Google Brain team has been doing over 2016. I'm really excited to work with such great colleagues! In writing this up, it struck me that nearly every sentence includes one or more links to further details on significant and impactful work.



In 2016, we welcomed into the Google Brain team our first batch of Google Brain Residents (g.co/brainresidency). We just published a blog post about what the 27 residents have been up to, roughly halfway through their one-year residency program. They're an impressive group, and the vitality and energy they've added to our group has been fantastic!

The deadline for applying for this year's program is coming up on January 13th, so if you're interested in coming to the Brain team for a year to do research and be mentored by our research scientists, I would encourage you to apply.

