The Great AI Awakening: How Google used artificial intelligence to transform Translate, and how machine learning is poised to reinvent computing

What follows here is the story of how a team of Google researchers and engineers - at first one or two, then three or four, and finally more than a hundred - made considerable progress in that direction [towards a generally intelligent all-encompassing personal digital assistant]. It's an uncommon story in many ways, not least of all because it defies many of the Silicon Valley stereotypes we've grown accustomed to. It does not feature people who think that everything will be unrecognizably different tomorrow or the next day because of some restless tinkerer in his garage. It is neither a story about people who think technology will solve all our problems nor one about people who think technology is ineluctably bound to create apocalyptic new ones. It is not about disruption, at least not in the way that word tends to be used.

It is, in fact, three overlapping stories that converge in Google Translate's successful metamorphosis to A.I. - a technical story, an institutional story and a story about the evolution of ideas.

The story spans three continents and seven decades.

One interesting tidbit that is tangential to the bigger story: the Google Translate team ran latency experiments on a small percentage of users, injecting faked delays to determine how much delay users would tolerate. That let Google estimate how much processing power it would need before introducing the new machine learning-based system to the public (and it turns out to require a lot of processing power).
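The article doesn't describe how those experiments were built, but the general technique - assigning a small, stable sample of users an artificial delay and observing their behavior - can be sketched roughly like this. Everything here (function names, the 1% sample, the delay buckets) is an illustrative assumption, not Google's actual implementation:

```python
import random
import time

# Hypothetical latency-tolerance experiment: a small fraction of users
# get an artificial delay added to their requests, and abandonment rates
# per delay bucket would then be compared against the control group.
EXPERIMENT_FRACTION = 0.01                    # assumed: 1% of users sampled
DELAY_BUCKETS_MS = [100, 250, 500, 1000]      # assumed candidate delays

def assign_delay_ms(user_id: int) -> int:
    """Deterministically assign a user to a delay bucket (0 = control)."""
    rng = random.Random(user_id)              # seed on user id for stability
    if rng.random() >= EXPERIMENT_FRACTION:
        return 0                              # not in the experiment
    return rng.choice(DELAY_BUCKETS_MS)

def serve_translation(user_id: int, translate, text: str):
    """Serve a request, injecting the user's assigned faked delay first."""
    delay_ms = assign_delay_ms(user_id)
    if delay_ms:
        time.sleep(delay_ms / 1000)           # the artificial delay
    return delay_ms, translate(text)
```

Seeding on the user id keeps each user's assignment stable across requests, so a sampled user consistently experiences the same added delay - without that, the per-bucket behavior data would be meaningless.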

Read the full fascinating article about the development of Google Brain at +The New York Times:
http://www.nytimes.com/2016/12/14/magazine/the-great-ai-awakening.html?smid=go-share