Profile

Michael Tetelman
Works at Invensense, Inc
Attended Нижегородский Государственный Университет им. Н.И. Лобачевского (ННГУ) (Lobachevsky State University of Nizhny Novgorod)
Lives in San Francisco Bay Area
512 followers | 59,852 views

Stream

Michael Tetelman

Shared publicly  - 
 
Neural nets can now be used by millions of people daily: a huge success for this new technology.
 
Leaner. Faster. More robust.

Today, we’re happy to announce that we’ve launched improved neural network acoustic models for voice searches and commands in the Google app (on Android and iOS), and for dictation on Android devices. 

Using Connectionist Temporal Classification and sequence discriminative training techniques, these models are a special extension of recurrent neural networks that use far fewer computational resources, are more accurate and robust to noise, and respond faster to voice search queries.
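For reference (my gloss, not part of the announcement): CTC trains the network to label unsegmented audio by summing over every frame-level path that collapses to the target transcript. With \mathcal{B} the collapsing map (merge repeats, then drop blanks) and \pi a per-frame labeling, the objective is

    p(\mathbf{y} \mid \mathbf{x}) = \sum_{\pi \in \mathcal{B}^{-1}(\mathbf{y})} \prod_{t=1}^{T} p(\pi_t \mid \mathbf{x}), \qquad \mathcal{L}_{\mathrm{CTC}} = -\log p(\mathbf{y} \mid \mathbf{x})

The sum over paths is computed exactly with a forward-backward recursion, and sequence discriminative training then fine-tunes the same network against whole-utterance error rather than per-frame targets.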

Check out the Google Research blog below to learn more. Happy (voice) searching!

Michael Tetelman
owner

Discussion  - 
 
 
The video of my recent webinar on convolutional nets and deep learning organized by NVIDIA GPU Tech is available: 80 minutes of insufferable pontificating interspersed with slightly less insufferable videos and live demos.

http://bit.ly/1xrEExO

Michael Tetelman
owner

Discussion  - 
 
If you like Eigen...
 
I am very proud of the work our team has done on the Eigen open-source matrix library this past year. Try out the development (3.3) branch for yourself; you'll likely find that it's the fastest BLAS there is on modern Intel CPUs. Plus, it's under a very open license, portable, flexible, and has a very intuitive C++ API.
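A minimal sketch of that API (my own illustration, not from the post; it uses only the long-stable dense-matrix interface, so it should build against 3.2 as well as the 3.3 branch):

    // eigen_demo.cpp -- illustrative only; build with:
    //   g++ -O2 -I/path/to/eigen eigen_demo.cpp
    #include <iostream>
    #include <Eigen/Dense>

    int main() {
        // Dynamic-size, double-precision matrix and vector.
        Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);
        Eigen::VectorXd b = Eigen::VectorXd::Random(4);

        // Expression templates fuse this whole right-hand side into a single
        // loop, which is part of how Eigen competes with hand-tuned BLAS.
        Eigen::VectorXd y = 2.0 * A * b + b;
        std::cout << "y norm = " << y.norm() << "\n";

        // Solve A x = b with column-pivoting Householder QR (one of the
        // decompositions with numerical improvements in recent releases).
        Eigen::VectorXd x = A.colPivHouseholderQr().solve(b);
        std::cout << "solve residual = " << (A * x - b).norm() << "\n";
        return 0;
    }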

Michael Tetelman
owner

Discussion  - 
 
 
And here we go: GoogLeNet, the net that won this year's ImageNet challenge with roughly 12x fewer parameters than the original Alex Krizhevsky net, more than 20 layers deep, and it even has something to do with the Hebbian principle...
  
Slides: http://image-net.org/challenges/LSVRC/2014/slides/GoogLeNet.pptx
Paper: http://arxiv.org/abs/1409.4842 
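To make the parameter savings concrete (my back-of-the-envelope, with channel sizes in the spirit of the paper's inception modules, so treat the exact figures as illustrative): a direct 5x5 convolution from 192 input channels to 32 output channels costs

    5^2 \cdot 192 \cdot 32 = 153{,}600 \text{ weights,}

while inserting a 1x1 "reduction" down to 16 channels first costs

    1^2 \cdot 192 \cdot 16 + 5^2 \cdot 16 \cdot 32 = 3{,}072 + 12{,}800 = 15{,}872 \text{ weights,}

roughly a 10x saving on that branch alone. Stacking such modules instead of wide fully connected layers is how the net stays small despite its depth.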

Michael Tetelman

Shared publicly  - 
 
Great Learning Resource on Deep Learning

Michael Tetelman
owner

Discussion  - 
 
 
Videos of the KDD 2014 tutorials are up at http://videolectures.net/kdd2014_newyork/

Check out Yoshua Bengio's tutorial on scaling up deep learning, as well as my tutorial on recent advances in deep learning, at:
http://videolectures.net/kdd2014_salakhutdinov_deep_learning/

Code for training various models, including multimodal ones, is publicly available at:
http://deeplearning.cs.toronto.edu/

Slides are also available at:
http://www.cs.toronto.edu/~rsalakhu/kdd.html

Michael Tetelman
owner

Discussion  - 
 
 
-------------------------------------------------------------------
3rd International Conference on Learning Representations (ICLR2015)
-------------------------------------------------------------------

Website: http://www.iclr.cc/
Submission deadline:  December 19, 2014
Location:  Hilton San Diego Resort & Spa, May 7-9, 2015

Overview
--------
It is well understood that the performance of machine learning methods is heavily dependent on the choice of data representation (or features) on which they are applied. The rapidly developing field of representation learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data. We take a broad view of the field, and include in it topics such as deep learning and feature learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.

Despite the importance of representation learning to machine learning and to application areas such as vision, speech, audio and NLP, there was no venue for researchers who share a common interest in this topic. The goal of ICLR has been to help fill this void.

A non-exhaustive list of relevant topics:
- unsupervised, semisupervised, and supervised representation learning
- metric learning and kernel learning
- dimensionality expansion
- sparse modeling
- hierarchical models
- optimization for representation learning
- learning representations of outputs or states
- implementation issues, parallelization, software platforms, hardware
- applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field

The program will include keynote presentations from invited speakers, oral presentations, and posters.
This year, the program will also include a joint session with AISTATS.

ICLR's Two Tracks
-----------------
ICLR has two publication tracks.

Conference Track: These papers are reviewed as standard conference papers. Papers should be 6-9 pages in length. Accepted papers will be presented at the main conference as either an oral or a poster presentation and will be included in the official proceedings.
A subset of accepted conference track papers will be selected to participate in a JMLR special topics issue on the subject of Representation Learning. Authors of the selected papers will be given an opportunity to extend their original submissions with supplementary material.

Workshop Track: Papers submitted to this track are ideally 2-3 pages long and describe late-breaking developments. This track is meant to carry on the tradition of the former Snowbird Learning Workshop. These papers are non-archival workshop papers, and therefore may be published elsewhere.

Note that submitted conference track papers that are not accepted to the conference proceedings are automatically considered for the workshop track.

ICLR Submission Instructions
----------------------------
1. Authors should post their submissions (both conference and workshop tracks) on arXiv: http://arxiv.org
2. Once the arXiv paper is publicly visible (there can be an approx. 30-hour delay), authors should go to the openreview ICLR2015 website to submit to either the conference track or the workshop track.

To register on the openreview ICLR2015 website, the submitting author must have a Google account.

For more information on paper preparation, including style files and the URL for the openreview ICLR2015 website, please see http://www.iclr.cc/doku.php?id=iclr2015:main

Submission deadline:  December 19, 2014

Notes:
i. Regarding the conference submission's 6-9 page limits, these are really meant as guidelines and will not be strictly enforced. For example, figures should not be shrunk to illegible size to fit within the page limit. However, in order to ensure a reasonable workload for our reviewers, papers that go beyond the 9 pages should be formatted to include a 9 page submission and a separate supplementary material submission that will be optionally reviewed. If the paper is selected for the JMLR special topic issue, this supplementary material can be incorporated into the final journal version.
ii. Workshop track submissions should be formatted as a short paper, with introduction, problem statement, brief explanation of solution, figure(s) and references. They should not merely be abstracts.
iii. Paper revisions will be permitted, and in fact are encouraged, in response to comments from and discussions with the reviewers (see "An Open Reviewing Paradigm" below).
iv. Authors are encouraged to post their papers to arXiv early enough that the paper has an arXiv number and URL by the submission deadline of 19 Dec. 2014.  However, if these are not yet available, authors have up to one week after the submission deadline to provide the arXiv number and URL. At submission time, simply provide the title, authors, abstract, and temporary arXiv number indicating that the paper has been submitted to arXiv.

An Open Reviewing Paradigm
--------------------------
1. Submissions to ICLR are posted on arXiv prior to being submitted to the conference.
2. Authors submit their paper to either the ICLR conference track or workshop track via the openreview ICLR2015 website.
3. After the authors have submitted their papers via openreview.net, the ICLR program committee designates anonymous reviewers as usual.
4. The submitted reviews are published without the name of the reviewer, but with an indication that they are the designated reviews.
5. Anyone can openly (non-anonymously) write and publish comments on the paper. Anyone can ask the program chairs for permission to become an anonymous designated reviewer (open bidding). The program chairs have ultimate control over the publication of each anonymous review. Open commenters will have to use their real names, linked with their Google Scholar profiles.
6. Authors can post comments in response to reviews and comments. They can revise the paper as many times as they want, possibly citing some of the reviews.  Reviewers are expected to revise their reviews in light of paper revisions.
7. The review calendar includes a generous amount of time for discussion between the authors, anonymous reviewers, and open commenters. The goal is to improve the quality of the final submissions.
8. The ICLR program committee will consider all submitted papers, comments, and reviews and will decide which papers are to be presented in the conference track, which are to be presented in the workshop track, and which will not appear at ICLR.
9. Papers that are presented in the workshop track or are not accepted will be considered non-archival, and may be submitted elsewhere (modified or not), although the ICLR site will maintain the reviews, the comments, and the links to the arXiv versions.

General Chairs
--------------
Yoshua Bengio, Université de Montreal
Yann LeCun, New York University and Facebook

Program Chairs
--------------
Brian Kingsbury, IBM Research
Samy Bengio, Google
Nando de Freitas, University of Oxford
Hugo Larochelle, Université de Sherbrooke

Contact
-------
The organizers can be contacted at iclr2015.programchairs@gmail.com

Michael Tetelman
owner

Discussion  - 
 
 
Our NIPS 2014 paper on body tracking with ConvNets+MRF is on arXiv in its final form. The method beats the state of the art on the FLIC and LSP datasets by pretty large margins.

"Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation", by  Jonathan Tompson, Arjun Jain, Yann LeCun, Christoph Bregler

- Link to arXiv paper: http://arxiv.org/abs/1406.2984
- Jonathan's page on the project: http://cims.nyu.edu/~tompson/cs_portfolio.html#bodytracking
- The FLIC-Plus dataset: http://cims.nyu.edu/~tompson/flic_plus.htm

Abstract: This paper proposes a new hybrid architecture that consists of a deep Convolutional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques.
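As a rough gloss (mine, not the paper's exact formulation): the ConvNet produces a per-joint unary heat map, and the MRF-style spatial model rescores joint locations with pairwise terms over connected joints,

    p(y_1, \ldots, y_K \mid \mathbf{x}) \;\propto\; \prod_{i=1}^{K} \phi_i(y_i \mid \mathbf{x}) \prod_{(i,j) \in E} \psi_{ij}(y_i, y_j)

where y_i is the image location of joint i, the unaries \phi_i come from the ConvNet, and the pairwise potentials \psi_{ij} encode geometric constraints such as plausible elbow-to-wrist displacements. Joint training means gradients flow through both factors, so the unary detector and the spatial prior adapt to each other.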
People
In his circles
1,646 people
Have him in circles
512 people
Gianfranco Omar Leiva Santillán's profile photo
Yelena Sakharova's profile photo
ВСУ "Черноризец Храбър"'s profile photo
Uwe Schmitt's profile photo
Chen Change Loy's profile photo
Я Лидер's profile photo
Andrei Cimpean's profile photo
Yun Chi's profile photo
Sara A. Solla's profile photo
Work
Occupation
I am a scientist. I am working on the Technology of Prediction, which basically means understanding what you already know in a way that lets you make a good guess about what you do not know yet.
Skills
Developing and implementing algorithms based on Deep Learning with Neural Networks and Variational Bayes
Employment
  • Invensense, Inc
    Principal Speech Processing ASR Engineer, 2015 - present
    Developing Artificial Intelligence in Any Sense by Farming and Schooling Neural Nets: creating an NN framework that learns from sensory data and produces optimized runnable code for any chip, without a human touch.
Places
Currently
San Francisco Bay Area
Previously
I grew up far, far away (but not in Kansas)
Story
Tagline
Life is a chain of accidents, but it is a most probable chain of accidents
Introduction
"Thoughts are invariants of spoken representations" - you realize that when you try to apply Group Theory to Artificial Intelligence.
I am working on advanced machine learning methods called Deep Learning with Infinite Feature Algebras and Continuous Learning, that is, continuous improvement of predictive models, which has an interesting link to SVMs as a special limit of "perfect" models. This seems to be a universal approach for learning structures and modeling the sources of any kind of data, continuously and without limits.
Bragging rights
Developing Artificial Intelligence in Any Sense by Farming and Schooling Neural Nets: creating an NN framework that learns from sensory data and produces optimized runnable code for any chip, without a human touch.
Education
  • Нижегородский Государственный Университет им. Н.И. Лобачевского (ННГУ)
    PhD, Theoretical and Mathematical Physics, Nizhniy Novgorod State University, Russia, 1992
Basic Information
Gender
Male