Getting machine learning systems to really work requires excellent machine learning models and algorithms, but it also often takes a lot of good systems work around those core models and algorithms to make them useful as components in larger systems. We've been building up these pieces and open-sourcing them to make it easier for everyone to use machine learning in their products and applications.
A while ago, I posted (post: https://plus.google.com/+JeffDean/posts/6okdnD1MHmX) about the open-sourcing of TensorFlow Serving, a package developed at Google that complements the core TensorFlow system. TensorFlow Serving makes it easy to take models that have been trained with TensorFlow and move them into a system for serving inference requests on those models, and it simplifies messy issues like updating models in a live serving system as they are refreshed through continuous training.
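That "update models in a live serving system" problem can be seen in miniature with a toy sketch (the class and method names below are my own illustration, not TensorFlow Serving's actual API): inference requests keep flowing while a newly trained model version is swapped in atomically.

```python
import threading

class ModelServer:
    """Toy illustration of hot-swapping model versions under load:
    every request sees a complete (version, model) pair, and a trainer
    can install a new version at any time without pausing serving."""

    def __init__(self, model, version=1):
        self._lock = threading.Lock()
        self._model = model
        self._version = version

    def predict(self, x):
        # Snapshot the current version and model together, then run inference.
        with self._lock:
            model, version = self._model, self._version
        return version, model(x)

    def load_new_version(self, model, version):
        # Atomically replace the servable; in-flight requests finish on the old one.
        with self._lock:
            self._model, self._version = model, version
```

For example, a server created with a "v1" model keeps answering requests while `load_new_version` installs "v2"; subsequent calls to `predict` report the new version number.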
This blog post by the TensorFlow Serving team at Google shows how to use TensorFlow, TensorFlow Serving, and Kubernetes (another project open-sourced by Google) to deploy a pre-trained Inception-v3 image classification model, served from Kubernetes containers, taking advantage of Kubernetes' ability to dynamically scale the number of replicas up and down as load changes.
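The heart of that dynamic scaling behavior is a simple proportional rule: scale the replica count by the ratio of observed load to the per-replica target, clamped to configured bounds. Here is a minimal sketch of that rule (the function and its parameters are my own illustration, not the Kubernetes API):

```python
import math

def desired_replicas(current_replicas, current_load, target_load,
                     min_replicas=1, max_replicas=10):
    """Proportional autoscaling sketch: if each replica should handle
    target_load but is currently seeing current_load, grow or shrink the
    replica count by that ratio, clamped to [min_replicas, max_replicas]."""
    desired = math.ceil(current_replicas * current_load / target_load)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 3 replicas each seeing 90 requests/sec against a 50 requests/sec target scale up to 6 replicas; when load drops, the same rule scales them back down.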
Full tutorial link from the bottom of the blog post:
Class(ification) is starting!
This week, AlphaGo became the first computer program to beat a world champion in the game of Go, in its games against the great Lee Sedol, the best Go player over the last decade and possibly the best player ever.
I was lucky enough to be in Seoul, Korea this week, as a guest of the DeepMind team to watch the first couple of matches (https://deepmind.com/alpha-go.html). (See attached pictures).
The AlphaGo work started a couple of years ago as a modest collaboration between Ilya Sutskever (then on the Brain team), his Brain team intern at the time, Chris Maddison, and Aja Huang and David Silver on the DeepMind team (http://arxiv.org/abs/1412.6564), on building a neural net to do move evaluation for Go. The DeepMind team then really pushed this work forward over the last couple of years, adding the major improvements: reinforcement learning through millions of games of self-play, and an optimized Monte-Carlo Tree Search that significantly strengthens the system's ability to understand the long-term implications of move sequences.
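The way the policy network and the tree search reinforce each other can be seen in the search's move-selection rule: each simulation picks the move with the best balance of observed value (Q) and an exploration bonus weighted by the network's prior (P). Here is a simplified sketch of that selection rule with made-up numbers (the real system, of course, gets Q and P from its value and policy networks over full board positions):

```python
import math

def select_move(stats, c_puct=1.0):
    """Pick the move maximizing Q + U, where the exploration bonus U favors
    moves the policy network likes (high prior P) that have few visits so far.
    `stats` maps move -> (Q, P, N): mean value, network prior, visit count."""
    total_visits = sum(n for _, _, n in stats.values())

    def score(move):
        q, p, n = stats[move]
        u = c_puct * p * math.sqrt(total_visits) / (1 + n)
        return q + u

    return max(stats, key=score)
```

With hypothetical stats, a move with a slightly lower observed value but a strong prior and few visits gets explored first; as its visit count grows, the bonus fades and the search relies on the measured value instead.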
Many of the pieces of the AlphaGo system were trained with DistBelief (our first generation training system for neural nets) and TensorFlow (tensorflow.org). Large-scale parallel neural net training for the win!
The amount of media coverage at the AlphaGo match was crazy.
I also had a great time visiting the Google Seoul office, and giving a talk at Campus Seoul, a startup incubator space that Google operates in Seoul.
In 2002, we needed a new car, so we bought "Googley", my trusty Volvo. He has served us well over the years, faithfully ferrying me to and from work for many years, and helping both my kids learn to drive. Today, we sent him along to find a new home, as a donation to KQED, our local public radio station. Farewell, Googley. Thanks for your companionship.
I signed it, and added the following comments:
I am a computer scientist. I believe strongly in the ability of computing to change the world, and also that every person, regardless of their school, socioeconomic background, or other factors, should have the opportunity to be exposed to computer science and computational thinking. Our field needs a diversity of opinions and backgrounds, and our world will be a better place when more people understand the power and capabilities of computing.
Exposing all students to computer science at an early enough age will go a long way to ensuring that the field of computer science reflects the diversity of the world's people.
This week, the Brain team, describing joint work with many others, posted about our release of a full training pipeline for training an Inception-style image classification model on the ImageNet dataset, and for fine-tuning such a model on your own example images and classes. The post describes that system and also shows you how to train your own classifier.
We had previously released a pre-trained Inception model late last year, but a bit more work was needed to get some of the operators we used for image manipulation in the training pipeline into a state that could be open-sourced. Now that we've released this, it should be pretty straightforward for people to train their own image classifiers. Thanks to everyone for their patience as we worked to get this out the door (we had a lot of requests for this functionality once we'd released the pre-trained model).
Several people in our group have been collaborating with people in our Google [X] Robotics group on how to use machine learning to build new robotics capabilities. This blog post (and accompanying Arxiv research paper at http://arxiv.org/abs/1603.02199) describes the first research from this collaboration: a system that uses a parallel set of robotic arms to gather data and to autonomously learn hand-eye coordination to grasp a variety of objects. Every day, the robots gather new data about what sorts of grasping positions work well for different kinds of objects, and every night we use this data to retrain the model, resulting in improved grasping capability. Over time, the robots' grasping gets better and better.
(Watch the videos in the blog post: they're pretty fun).
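The collect-by-day / retrain-by-night loop can be sketched in miniature (everything here is a made-up toy: a one-dimensional "grasp offset" stands in for a full grasping policy, and a simple success model stands in for the physical arms):

```python
import random

random.seed(0)
IDEAL_OFFSET = 0.7  # hypothetical best grasp position, unknown to the robot

def attempt_grasp(offset):
    """Toy physics: a grasp succeeds more often the closer the chosen
    offset is to the ideal one."""
    return random.random() < max(0.0, 1.0 - abs(offset - IDEAL_OFFSET))

def run_day(policy, exploration=0.3, attempts=200):
    """Daytime: the arms grasp with exploration noise, logging every outcome."""
    log = []
    for _ in range(attempts):
        offset = policy + random.uniform(-exploration, exploration)
        log.append((offset, attempt_grasp(offset)))
    return log

def retrain(log, fallback):
    """Nighttime: refit the policy on the day's data (here, just the mean
    offset among successful grasps)."""
    successes = [off for off, ok in log if ok]
    return sum(successes) / len(successes) if successes else fallback

policy, rates = 0.0, []   # start far from the ideal offset
for day in range(25):
    log = run_day(policy)
    rates.append(sum(ok for _, ok in log) / len(log))
    policy = retrain(log, policy)
```

Even with this crude "retraining" rule, the nightly updates steadily pull the policy toward the ideal offset, and the daily success rate climbs accordingly, mirroring (very loosely) the improvement curve described in the post.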
- Google Senior Fellow, present
Prior to joining Google, I was at DEC/Compaq's Western Research Laboratory, where I worked on profiling tools, microprocessor architecture, and information retrieval. Prior to graduate school, I worked at the World Health Organization's Global Programme on AIDS, developing software for statistical modeling and forecasting of the HIV/AIDS pandemic.
I earned a B.S. in computer science and economics (summa cum laude) from the University of Minnesota and received a Ph.D. and an M.S. in computer science from the University of Washington. I was elected to the National Academy of Engineering in 2009, which recognized my work on "the science and engineering of large-scale distributed computer systems."
- University of Washington, Computer Science
- University of Minnesota, Computer Science and Economics
Improving Photo Search: A Step Across the Semantic Gap