Nickolay Shmyrev
418 followers
About
Nickolay's posts

Post has attachment
CMUSphinx has been selected for GSoC 2017, congratulations to +James Salsman

https://summerofcode.withgoogle.com/organizations/6234667528224768/

Post has shared content
A very interesting topic; the important talks:

Machine Learning from Verbal User Instruction, Tom Mitchell, Carnegie Mellon University
https://simons.berkeley.edu/talks/tom-mitchell-02-13-2017

Interactive Language Learning from the Extremes, Sida Wang, Stanford University
https://simons.berkeley.edu/talks/sida-wang-02-14-2017

I-SED: An Interactive Sound Event Detector, Bongjun Kim, Northwestern University
https://simons.berkeley.edu/talks/bongjun-kim-2017-02-16
http://music.cs.northwestern.edu/publications/Kim_Pardo_IUI2017.pdf

Post has shared content
The LSH part is very interesting.
Yesterday, we announced the launch of Android Wear 2.0, along with brand new wearable devices, that will run Google's first entirely “on-device” ML technology for powering smart messaging.

This on-device ML system enables technologies like Smart Reply to be used for any application, including third-party messaging apps, without ever having to connect with the cloud…so now you can respond to incoming chat messages directly from your watch, with a tap. Learn more, below.
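The LSH idea behind on-device systems like this is that similar message embeddings should land in the same hash bucket, so candidate replies can be found by a table lookup instead of a full model pass. A minimal sketch of random-projection LSH (hypothetical dimensions and data, not Google's actual implementation):

```python
import numpy as np

def lsh_signature(vec, planes):
    """Hash a vector to a bit signature: one bit per random hyperplane."""
    return tuple(bool(b) for b in (vec @ planes.T) > 0)

rng = np.random.default_rng(0)
planes = rng.standard_normal((16, 64))  # 16 random hyperplanes in 64-d space

# Nearby vectors fall on the same side of most hyperplanes, so their
# signatures agree in most bits; unrelated vectors agree in about half.
a = rng.standard_normal(64)
b = a + 0.01 * rng.standard_normal(64)  # small perturbation of a
c = rng.standard_normal(64)             # unrelated vector

sig_a, sig_b, sig_c = (lsh_signature(v, planes) for v in (a, b, c))
print(sum(x == y for x, y in zip(sig_a, sig_b)))  # high agreement
print(sum(x == y for x, y in zip(sig_a, sig_c)))  # roughly half agreement
```

Buckets keyed by such signatures let a device answer "what messages is this one similar to?" with a dictionary lookup, which is what makes the whole pipeline cheap enough to run on a watch.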

Post has attachment
CMUSphinx-powered app for League of Legends https://play.google.com/store/apps/details?id=nl.selwyn420.vast

Post has attachment
This is a big technical problem to solve, and a pretty interesting one:

http://gizmodo.com/tv-report-on-accidental-amazon-orders-triggers-attempte-1790958217

And i-vectors do not really work for short utterances.

Post has attachment

Learning with huge memory

Recently several papers were published about "memorization" in neural networks. For example:

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer https://openreview.net/forum?id=B1ckMDqlg

also

Understanding deep learning requires rethinking generalization https://openreview.net/forum?id=Sy8gdB9xx

It seems that a large-memory system has a point: you don't need millions of computing cores in a CPU (that is too power-expensive); you could instead use very large memory and a reasonable number of cores that access the memory with hashing (think of Shazam, or randlm, or G2P by analogy). You probably do not need heavy tying either.

The advantages are: you can quickly incorporate new knowledge, since you just put new values in memory; you can model corner cases, since they all remain accessible; and, again, you are much more energy-efficient.
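The core of the idea can be sketched in a few lines: quantize the input features, hash them into a bucket, and treat both learning and inference as table operations. A toy illustration only (the class and quantization scheme here are made up for the example, not any published architecture):

```python
import hashlib

class HashedMemory:
    """Toy large-memory store: hashed buckets instead of computation."""

    def __init__(self):
        self.table = {}

    def _key(self, features):
        # Quantize features to one decimal, then hash into a bucket id.
        raw = ",".join(f"{x:.1f}" for x in features).encode()
        return hashlib.md5(raw).hexdigest()

    def store(self, features, value):
        # Incorporating new knowledge is just a memory write.
        self.table.setdefault(self._key(features), []).append(value)

    def recall(self, features):
        # Lookup needs no heavy computation, only a hash and a table read.
        return self.table.get(self._key(features), [])

mem = HashedMemory()
mem.store((1.02, 3.48), "cat")  # quantizes to (1.0, 3.5)
mem.store((0.98, 3.52), "cat")  # lands in the same bucket as above
print(mem.recall((1.0, 3.5)))   # -> ['cat', 'cat']
```

Note how a corner case never gets averaged away: whatever was written stays in its bucket and remains recallable, which is exactly the property a purely parametric model struggles with.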

Maybe we will someday see mobile phones with 1 TB of memory.

Not quite a scientific paper, but "memorization" is a very promising concept.

Understanding Deep Learning Requires Rethinking Generalization
Chiyuan Zhang et al.

https://openreview.net/pdf?id=Sy8gdB9xx