Grasping: A collection of 650k grasp attempts, data used in: http://arxiv.org/abs/1603.02199
Push: A collection of 59k examples of pushing motions, data used in: http://arxiv.org/abs/1605.07157
Both datasets contain RGB-D views of the arm, gripper and objects, along with actuation and position parameters. They were collected in a controlled environment using a wide collection of everyday objects, some of which were held out for evaluation. Enjoy!
We are excited to announce that the Google Brain Residency Program application will re-open this coming Thursday, September 1st! Our first year's program launched in October last year and we've received an overwhelmingly positive response. We welcomed our first cohort of 27 Google Brain Residents in June 2016, and we're excited about the impact they're already making with the research they are conducting!
In conjunction with the application re-opening, I would like to invite you to join me at a YouTube Live event where I will be discussing the Brain Residency Program as well as presenting an overview of some of the research work being done in the Google Brain team (http://g.co/brain).
To attend this event, simply visit goo.gl/KvDkS7 and tune in tomorrow (Thursday, September 1st) at 3pm PDT. The event will last about an hour and will be streamed live, so you can not only watch but also post questions in real time via chat. We will have moderators online to help answer questions as they roll in during the event.
To learn more about the program, check out http://g.co/brainresidency. Applications for next year's program will officially open on September 1st, 2016 (tomorrow).
If you have any questions, please direct them to email@example.com
I sincerely hope to see you all there!
Fast forward to today: I'm very excited to see deep nets make a significant dent in the problem, while enabling seamless, practical variable-rate coding and bit-by-bit progressive decoding, with huge gains over JPEG to boot.
(The thread already has 279 comments and is the most up-voted post of all time on /r/MachineLearning, and we haven't even started answering questions. We'll have to see how many we can get through on Thursday!)
I'm also excited to see this topic being addressed openly, in a collaboration across many different institutions.
Actual paper: https://arxiv.org/abs/1606.06565
Google Research blog post: https://research.googleblog.com/2016/06/bringing-precision-to-ai-safety.html
OpenAI blog post:
You can learn more about the Cloud Vision API, which is now in GA ("General Availability") at https://cloud.google.com/vision/
I'm very excited that we can finally discuss this in public. Today at Google I/O, we revealed the TPU (Tensor Processing Unit), a custom ASIC that Google has designed and built specifically for machine learning applications. We've had TPUs deployed in Google datacenters for more than a year, and they are an order of magnitude faster and more power efficient per operation than other computational solutions for the kinds of models we are deploying to improve our products. This computational speed allows us to use larger, more powerful machine-learned models, expressed and seamlessly deployed into our products using TensorFlow (tensorflow.org), and to deliver the excellent results from those models in less time.
TPUs are used on every Google Search to power RankBrain (https://en.wikipedia.org/wiki/RankBrain), they were a key secret ingredient in the recent AlphaGo match against Lee Sedol, they are used for speech and image recognition, and they are powering a growing list of other smart products and features.
The team that developed this ASIC did a fabulous job, and it's great to see it discussed in public!
Link to the part of the keynote where Sundar discusses TPUs:
Prior to joining Google, I was at DEC/Compaq's Western Research Laboratory, where I worked on profiling tools, microprocessor architecture, and information retrieval. Prior to graduate school, I worked at the World Health Organization's Global Programme on AIDS, developing software for statistical modeling and forecasting of the HIV/AIDS pandemic.
I earned a B.S. in computer science and economics (summa cum laude) from the University of Minnesota and received an M.S. and a Ph.D. in computer science from the University of Washington. I was elected to the National Academy of Engineering in 2009, which recognized my work on "the science and engineering of large-scale distributed computer systems."
- University of Washington, Computer Science
- University of Minnesota, Computer Science and Economics
- Google Senior Fellow, present
Improving Photo Search: A Step Across the Semantic Gap
Posted by Chuck Rosenberg, Image Search Team. Last month at Google I/O, we showed a major upgrade to the photos experience.
The Tree of Life: YHGTBFKM: Ecological Society of America letter regardi...
The Tree of Life: blog of Jonathan A. Eisen, evolutionary biologist, microbiologist and genomics researcher, and Open Access and Open Science advocate.