Troy Lee
104 followers
Someone who loves problem solving with computers and is trying to be someone.
About
Troy's posts

Post has attachment
Totally agree....

Post has attachment
Dry Run
To start participating in this year's OpenKWS, a dry run of the submission process is carried out first, before any system has been built. Detailed instructions can be found at http://www.nist.gov/itl/iad/mig/openkws14dryrun.cfm. The steps are as follows...

Post has attachment
Jellyfish @ Sentosa S.E.A. Aquarium, Singapore

Post has shared content
Taming Latency Variability and Scaling Deep Learning

Recently, Google Senior Fellow +Jeff Dean presented the talk Taming Latency Variability and Scaling Deep Learning to the San Francisco Bay Area Professional Chapter of the Association for Computing Machinery (http://goo.gl/S3fEcz). Given in two parts, Dean's talk first covers achieving low latency in shared environments, with the goal of improving the overall user experience, e.g. updating the Google search results page as the user is typing.

In the second part of the talk, starting at the ~20 minute mark, Dean speaks about the efforts in constructing computing systems that are able to automatically generate "understanding" of the raw audio, image, and textual data that is openly available on the Internet, by building high levels of abstraction in an unsupervised manner.    

While human beings are incredibly good at building abstractions, such as the ability to identify an object in an image regardless of its perspective, background, or the context in which the image was taken, teaching computers how to automatically identify objects in a similar manner is a challenging task.

Watch the video below to learn more about recent efforts in Machine Learning (http://goo.gl/fjw1) via Neural Networks (http://goo.gl/Y6yk), which would allow computers to automatically understand data in a fashion similar to humans.

Post has shared content
Communications of the ACM has a nice article about the history of deep learning, and the reasons behind the recent surge of interest.
There are quotes from me, +Geoffrey Hinton, John Platt, and Andrew Zisserman.
Sadly, +Yoshua Bengio, who made key contributions to deep learning from the early days, isn't mentioned. But it's still a considerably more accurate picture than whatever appeared in Wired.

Link for non-mobile devices: http://cacm.acm.org/magazines/2013/6/164601-deep-learning-comes-of-age/fulltext

Post has shared content
The British Library releases a million images from 17th-19th century books, for use as a classification dataset.

"We plan to launch a crowdsourcing application at the beginning of next year, to help describe what the images portray. Our intention is to use this data to train automated classifiers that will run against the whole of the content. The data from this will be as openly licensed as is sensible (given the nature of crowdsourcing) and the code, as always, will be under an open licence."
