Research at Google's posts
The Google Brain Team (g.co/brain) is open-sourcing #TensorFlow model code for summarization research. Learn more on the Google Research blog, linked below.
Check out a Q&A where Google Anti-abuse Research Lead +Elie Bursztein discusses some of the research that goes into making Gmail safe.
Got machine learning questions for the Google Brain team (g.co/brain)? Join our first #RedditAMA on r/MachineLearning tomorrow, August 11, at 10am PT.
Attending the 2016 Meeting of the Association for Computational Linguistics in Berlin? As a leader in Natural Language Processing (NLP) and a Platinum Sponsor of the conference, Google will be on hand at #acl2016 to showcase research interests that include syntax, semantics, discourse, conversation, multilingual modeling, sentiment analysis, question answering, summarization, and, more generally, building better learners using labeled and unlabeled data, state-of-the-art modeling, and learning from indirect supervision. Check out our research being presented, below.
Interested in genomics research? Sign up for a free workshop at the +Institute for Systems Biology in Seattle to learn how to leverage the resources of the ISB-CGC and +Google Cloud Platform to analyze publicly available TCGA data.
Earlier this year, we shared some research on Deep Learning for Robots, focused on hand-eye coordination and grasping (goo.gl/vXFQJc). Yesterday, we released the grasping and push datasets, which contain RGB-D views of the arm, gripper, and objects, along with actuation and position parameters. Learn more at the site below.
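As a minimal sketch of consuming a dataset like this, assuming it ships as TFRecord files of tf.train.Example protos: the file name and feature keys below ("image/encoded", "gripper/pose") are hypothetical placeholders, not the dataset's documented schema.

import tensorflow as tf

def parse_record(serialized):
    # Hypothetical feature spec: one encoded RGB frame plus a pose vector.
    features = {
        "image/encoded": tf.io.FixedLenFeature([], tf.string),
        "gripper/pose": tf.io.FixedLenFeature([7], tf.float32),
    }
    parsed = tf.io.parse_single_example(serialized, features)
    image = tf.io.decode_jpeg(parsed["image/encoded"])
    return image, parsed["gripper/pose"]

# Hypothetical file path; substitute a real shard from the release.
dataset = tf.data.TFRecordDataset("grasping.tfrecord")
for image, pose in dataset.map(parse_record).take(1):
    print(image.shape, pose.numpy())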
Reminder: we’ll be starting the #RedditAMA with the Google Brain team (g.co/brain) today at 10am PT in r/MachineLearning.
When we released Parsey McParseface last May as part of SyntaxNet, we were already planning to expand to more languages, and it soon became clear that this was both urgent and important, because researchers were having trouble creating top-notch SyntaxNet models for other languages.
Just in time for #ACL2016, we are pleased to announce that Parsey McParseface now has 40 cousins! Parsey’s Cousins is a collection of pretrained syntactic models for 40 languages, capable of analyzing the native language of more than half of the world’s population, often with unprecedented accuracy. To better address the linguistic phenomena occurring in these languages, we have endowed SyntaxNet with new abilities for Text Segmentation and Morphological Analysis.
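Dependency parsers in this family emit their analyses in the tab-separated CoNLL column format, one token per line with a blank line between sentences. As a minimal sketch of consuming such output, here is a reader for the standard 10-column layout; the sample line is illustrative, not real model output.

# Reads CoNLL-style dependency output: 10 tab-separated fields per token,
# blank lines between sentences. The sample sentence is illustrative only.
CONLL_FIELDS = ["id", "form", "lemma", "cpos", "pos",
                "feats", "head", "deprel", "deps", "misc"]

def read_conll(lines):
    """Yield one sentence at a time as a list of token dicts."""
    sentence = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:  # a blank line terminates the current sentence
            if sentence:
                yield sentence
                sentence = []
            continue
        sentence.append(dict(zip(CONLL_FIELDS, line.split("\t"))))
    if sentence:
        yield sentence

sample = "1\tParsey\t_\tNOUN\tNN\t_\t2\tnsubj\t_\t_\n\n"
for sent in read_conll(sample.splitlines(keepends=True)):
    for tok in sent:
        print(tok["form"], "->", tok["head"], tok["deprel"])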
Maggie Johnson, Director of Education and University Relations at Google, discusses the benefits of computational thinking and how teachers can introduce it into school curricula.
Check out PRUDAQ, an open-source BeagleBone cape built around the Analog Devices AD9201 ADC. It samples two inputs simultaneously at up to 20 megasamples per second per channel, making it useful for scientific applications where a built-in ADC isn't quite up to the task. Learn more on the Google Research blog, linked below.
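As a rough back-of-the-envelope check on what that sampling rate implies for data handling, assuming each 10-bit AD9201 sample is padded to a 16-bit word (the arithmetic here is our own, not a figure from the post):

# Back-of-the-envelope data-rate estimate for a dual-channel ADC running
# flat out. Assumes a 10-bit sample padded to a 16-bit word; these figures
# are our own arithmetic, not from the PRUDAQ documentation.
CHANNELS = 2
SAMPLES_PER_SEC = 20_000_000   # 20 MS/s per channel
BYTES_PER_SAMPLE = 2           # 10-bit sample stored in 16 bits

total_samples = CHANNELS * SAMPLES_PER_SEC      # 40 MS/s aggregate
total_bytes = total_samples * BYTES_PER_SAMPLE  # 80 MB/s to move
print(f"{total_samples / 1e6:.0f} MS/s, {total_bytes / 1e6:.0f} MB/s")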