Curtis Dietz
136 followers
Curtis's posts

Post has shared content
We’re graduating from Google[x] labs

It’s hard to believe that Glass started as little more than a scuba mask attached to a laptop. We kept on it, and when it started to come together, we began the Glass Explorer Program as a kind of “open beta” to hear what people had to say.

Explorers, we asked you to be pioneers, and you took what we started and went further than we ever could have dreamed: from the Large Hadron Collider at CERN to the hospital operating table; from the grass of your backyard to the courts of Wimbledon; in fire stations, recording studios, kitchens, mountain tops and more.

Glass was in its infancy, and you took those very first steps and taught us how to walk. Well, we still have some work to do, but now we’re ready to put on our big kid shoes and learn how to run.

Since we first met, interest in wearables has exploded and today it’s one of the most exciting areas in technology. Glass at Work has been growing and we’re seeing incredible developments with Glass in the workplace. As we look to the road ahead, we realize that we’ve outgrown the lab and so we’re officially “graduating” from Google[x] to be our own team here at Google. We’re thrilled to be moving even more from concept to reality.

As part of this transition, we’re closing the Explorer Program so we can focus on what’s coming next. January 19 will be the last day to get the Glass Explorer Edition. In the meantime, we’re continuing to build for the future, and you’ll start to see future versions of Glass when they’re ready. (For now, no peeking.)

Thanks to all of you for believing in us and making all of this possible. Hang tight—it’s going to be an exciting ride.

Post has shared content
We work hard to create the healthiest, happiest and most productive work environments possible at all of our campuses, including our Bay Area headquarters. Want to see for yourself? Check out our Mountain View openings and apply here: http://goo.gl/ejjYOJ #tourtuesday  

Post has shared content
The chemistry behind that wonderful-smelling tree

Image: Compound Interest

Post has shared content
A Glimpse into Computer Vision

Neural networks have recently had great success in significantly advancing the state of the art on challenging image classification and object detection datasets. However, this accuracy comes at a high computational cost both at training and testing time.

But what if one takes inspiration from how people recognize objects, by selectively focusing on the important parts of an image instead of processing an entire image at once? By ignoring irrelevant noisy features in an image, fewer pixels need to be processed, substantially reducing classification and detection complexity.

Last week, during #NIPS2014 (goo.gl/uEHYAt), Google DeepMind presented Recurrent Models of Visual Attention, a paper describing an “attention-based task-driven visual processing” model capable of extracting information from an image or video by adaptively selecting a sequence of smaller regions (glimpses) and processing only the selected regions at high resolution.

Read the full paper at http://goo.gl/dEdWkk
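
To make the glimpse idea concrete, here is a minimal, illustrative sketch (in TypeScript, not taken from the paper) of extracting a small fixed-size patch around an attended location; a recurrent attention model processes a sequence of such patches instead of the full image. The function name, types, and clamping behavior are assumptions for illustration only.

// Illustrative sketch only: extract a square "glimpse" patch centered on a
// chosen location, clamping coordinates so patches near the border stay
// inside the image. Not the paper's implementation.
type Image = number[][]; // grayscale image as rows of pixel values

function extractGlimpse(image: Image, centerY: number, centerX: number, size: number): Image {
  const height = image.length;
  const width = image[0].length;
  const half = Math.floor(size / 2);
  const patch: Image = [];
  for (let dy = -half; dy < size - half; dy++) {
    const row: number[] = [];
    for (let dx = -half; dx < size - half; dx++) {
      const y = Math.min(Math.max(centerY + dy, 0), height - 1);
      const x = Math.min(Math.max(centerX + dx, 0), width - 1);
      row.push(image[y][x]);
    }
    patch.push(row);
  }
  return patch;
}

// A recurrent attention model would call this repeatedly, feeding each small
// patch to the network and letting the network pick the next (centerY, centerX),
// so only a fraction of the pixels is ever processed.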

Post has shared content
We know that it’s often helpful to visualize data trends in a spreadsheet, so you can now add miniature charts, or sparklines, into individual cells in Sheets.

Find out more about the types of charts available and get instructions for using the SPARKLINE function in the help center: http://goo.gl/2rWeIF. 

Post has shared content
Google mobile search is getting faster - to be exact, 100-150 milliseconds faster! When you click on one of the search results, the browser begins fetching the destination page… and here's the trick: we also provide a hint to the browser indicating which other critical resources it should fetch in parallel to speed up rendering of the destination page! 

This is a powerful pattern and one that you can use to accelerate your site as well. The key insight is that we are not speculatively prefetching resources and do not incur unnecessary downloads. Instead, we wait for the user to click the link and tell us exactly where they are headed, and once we know that, we tell the browser which other resources it should fetch in parallel - aka, reactive prefetch!

As you can infer, implementing the above strategy requires a lot of smarts both in the browser and within the search engine... First, we need to know the list of critical resources that may delay rendering of the destination page for every page on the web! No small feat, but the Search team has us covered - they're good like that. Next, we need a browser API that allows us to invoke the prefetch logic when the click occurs: the search page listens for the click event, and once invoked, dynamically inserts prefetch hints into the search results page. Finally, this is where Chrome comes in: as the search results page is unloaded, the browser begins fetching the hinted resources in parallel with the request for the destination page. The net result is that the critical resources are fetched much sooner, allowing the browser to render the destination page 100-150 milliseconds earlier.

P.S. Currently, reactive prefetch is only enabled for users of Google Chrome on Android, as it is the only browser that supports (a) dynamically inserted prefetch hints, and (b) reliably allows prefetch requests to persist across navigations. We hope to add support for other browsers once these features become available!
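
For readers who want to experiment with the pattern, here is a minimal, hedged sketch in TypeScript of what reactive prefetch can look like on the page side: listen for the click, look up the destination's critical resources, and inject prefetch hints before navigation. The resource map and URLs below are hypothetical placeholders, not Google's actual implementation.

// Hedged sketch of the reactive-prefetch pattern described above.
// Hypothetical mapping from destination URL to the sub-resources that block its render.
const criticalResources: Record<string, string[]> = {
  "https://example.com/article": [
    "https://example.com/css/article.css",
    "https://example.com/js/article.js",
  ],
};

document.addEventListener("click", (event) => {
  const target = event.target as HTMLElement;
  const link = target.closest("a");
  if (!link) return;

  // The user has told us exactly where they are headed, so this is not speculative.
  const hints = criticalResources[link.href] ?? [];
  for (const url of hints) {
    const hint = document.createElement("link");
    hint.rel = "prefetch";
    hint.href = url;
    document.head.appendChild(hint);
  }
  // Navigation proceeds normally; the browser fetches the hinted resources in parallel.
});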
