Profile

Mat Kelcey
Works at Google
Lives in San Francisco Bay Area
307 followers | 261,427 views

Stream

Mat Kelcey

Shared publicly

you know what a killer app for Google Glass would be? find the dropped piece of Lego in this picture... +Charles Mendis

3 comments; latest: "it was black and I did find it eventually :)"


Mat Kelcey

Shared publicly

Originally shared by Yann LeCun:
My dear NYU colleague +Gary Marcus wrote a critical response to +John Markoff's front-page article on deep learning in the New York Times.

Gary is a professor in the psychology department at NYU and the author of a number of books, including a very nice little book entitled "Kluge: The Haphazard Construction of the Human Mind", in which he argues (very convincingly) that the brain is a collection of hacks (which were called kluges back when cool things were mechanical and not software), the result of haphazard refinement through evolution.

Gary has been a long-time critic of non-symbolic (or sub-symbolic) approaches to AI, such as neural nets and connectionist models. He comes from the Chomsky/Fodor/Minsky/Pinker school of thought on the nature of intelligence, whose main tenet is that the mind is a collection of pre-wired modules that are largely determined by genetics. This contrasts with the working hypothesis on which we, the deep learning people, base our research: the cortex runs a somewhat "generic" and task-independent learning "algorithm" that will capture the structure of whatever signal it is fed with.

To be sure, none of us are extreme in our positions. I have been a long-time advocate for the necessity of some structure in learning architectures (such as convolutional nets). All of learning theory points to the fact that learning needs structure. Similarly, Gary doesn't claim that learning has no role to play.

In the end, it all comes down to two questions:
- how important a role does learning play in building a human mind?
- how much prior structure is needed?

+Geoffrey Hinton and I have devoted most of our careers to devising learning algorithms that can do interesting feats with as little structure as possible (but still some). It's a matter of degree.

One important point in Gary's piece is the claim that neural nets are merely "a ladder on the way to the moon" because they are incapable of symbolic reasoning. I think there are two issues with that argument:
1. As I said on previous occasions, I'd be happy if, within my lifetime, we have machines as intelligent as a rat. I don't think Gary would argue that rats do symbolic reasoning, but they are pretty smart. I don't think human intelligence is considerably (qualitatively) different from that of a rat, and definitely not that different from that of an ape. We could do a lot without human-style symbolic reasoning.
2. There is not that much of a conceptual difference between some of the learning systems that we are building and the symbolic reasoning systems that Gary likes. Many modern ML systems produce their output by minimizing some sort of energy function, a process qualitatively equivalent to inference (Geoff and I call these "energy-based models", but Bayesian nets also fit in that framework). Training consists of shaping the energy function so that the inference process produces an acceptable answer (or a distribution over answers).
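
To make the energy-based picture concrete, here is a tiny illustrative sketch (mine, not code from any system mentioned above): inference picks the answer y that minimizes an energy E(x, y), and training reshapes the energy surface, perceptron-style, so the correct answer ends up lowest.

    import numpy as np

    # Toy energy-based model: E(x, y) = -W[y] . x, one row of W per answer.
    # Inference = energy minimization; training = shaping the energy surface.
    rng = np.random.RandomState(0)
    n_features, n_classes = 4, 3
    W = np.zeros((n_classes, n_features))

    def energy(W, x):
        return -(W.dot(x))               # energy of each candidate answer (lower = better)

    def infer(W, x):
        return np.argmin(energy(W, x))   # inference: minimize E over the answer space

    def train_step(W, x, y, lr=0.1):
        y_hat = infer(W, x)
        if y_hat != y:                   # on a mistake...
            W[y] += lr * x               # ...lower the energy of the correct answer
            W[y_hat] -= lr * x           # ...raise the energy of the offending one
        return W

    # Tiny synthetic problem: each class clusters around its own prototype.
    prototypes = rng.randn(n_classes, n_features)
    for _ in range(200):
        y = rng.randint(n_classes)
        x = prototypes[y] + 0.1 * rng.randn(n_features)
        W = train_step(W, x, y)

    print([infer(W, prototypes[c]) for c in range(n_classes)])  # should print [0, 1, 2]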

Gary points out that the second wave of neural nets in the late 80's and early 90's was pushed out by other methods. Interestingly, they were pushed out by methods such as Support Vector Machines, which are closer to the earliest Perceptrons and even further away from symbolic reasoning than deep learning systems are. To some extent, it could be argued that the kernel trick allowed us to temporarily abandon the search for methods that could go significantly beyond linear classifiers and template matching.

There is one slightly confusing thing in Gary's piece (as well as in John Markoff's piece): the suggestion that all the recent successes of deep learning are due to unsupervised learning. That is not the case. Many of the stunning results use purely supervised learning, sometimes applied to convolutional network architectures, as in Geoff's ImageNet object recognizer, our scene parsing system, our house number recognizer (now used by Google) and IDSIA's traffic sign recognizer. The key idea of deep learning is to train deep multilayer architectures to learn pre-processing, low-level feature extraction, mid-level feature extraction, classification, and sometimes contextual post-processing in an integrated fashion. Back in the mid-90's, I used to call this "end to end learning" or "global training".
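
A minimal illustration of that integrated, "end to end" training (my sketch, not any of the systems named above): one task loss, backpropagated end to end, trains the feature extractor and the classifier together, rather than hand-designing the features and training only the top layer.

    import numpy as np

    # A "feature extractor" (W1) and a "classifier" (W2), trained jointly
    # by backpropagating a single task loss through both stages.
    rng = np.random.RandomState(0)
    n_in, n_hid, n_out = 8, 16, 2
    W1 = 0.5 * rng.randn(n_hid, n_in)    # low-level feature extractor
    W2 = 0.5 * rng.randn(n_out, n_hid)   # classifier on top

    def forward(x):
        h = np.tanh(W1.dot(x))                   # learned features
        s = W2.dot(h)                            # class scores
        p = np.exp(s - s.max()); p /= p.sum()    # softmax
        return h, p

    # Toy task: the label is the sign of a fixed random projection of the input.
    w_true = rng.randn(n_in)
    for _ in range(2000):
        x = rng.randn(n_in)
        y = int(w_true.dot(x) > 0)
        h, p = forward(x)
        ds = p.copy(); ds[y] -= 1.0              # cross-entropy gradient at the scores
        dW2 = np.outer(ds, h)
        dh = W2.T.dot(ds)                        # gradient flows down into the features
        dW1 = np.outer(dh * (1 - h**2), x)       # through the tanh nonlinearity
        W2 -= 0.05 * dW2
        W1 -= 0.05 * dW1

    correct = 0
    for _ in range(500):
        x = rng.randn(n_in)
        correct += np.argmax(forward(x)[1]) == int(w_true.dot(x) > 0)
    print(correct / 500.0)   # well above chance once both stages have been trained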

Gary makes the point that even deep learning modules are but one component of complex systems with lots of other components and moving parts. It's true of many systems. But the philosophy of deep learning is to progressively integrate all the modules in the learning process.
An example of that is the check reading system I built at Bell Labs in the early 1990's with +Leon Bottou, +Yoshua Bengio and +Patrick Haffner. It integrated a low-level feature extractor, a mid-level feature extractor, a classifier (all parts of a convolutional net), and a graphical model (word and language model), all trained in an integrated fashion.

So, just wait a few years, Gary. Soon, deep learning systems will incorporate reasoning again.

The debate is open. Knock yourself out, dear readers.
#deeplearning
14 comments on original post

Mat Kelcey

Shared publicly

(truncated link preview of a Python script)

    print __doc__
    from pprint import pprint
    import numpy as np
    from sklearn import datasets
    from sklearn.cross_validation import StratifiedKFold
    from sklearn.grid_search import GridSearchCV
    from sklearn.m...
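
Judging from those imports, the shared script is a scikit-learn cross-validated grid search. Here is a self-contained sketch along the same lines (the SVC estimator, iris dataset, and parameter values are my guesses, not necessarily the original's; sklearn.cross_validation and sklearn.grid_search are the module paths of that era and were removed in later scikit-learn releases):

    from pprint import pprint
    from sklearn import datasets
    from sklearn.svm import SVC                           # estimator: an assumption
    from sklearn.cross_validation import StratifiedKFold
    from sklearn.grid_search import GridSearchCV

    # A small built-in dataset to search over.
    iris = datasets.load_iris()
    X, y = iris.data, iris.target

    # Stratified folds keep the per-class proportions in each fold.
    cv = StratifiedKFold(y, n_folds=5)

    # Illustrative parameter grid for an RBF SVM.
    param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1]}

    search = GridSearchCV(SVC(), param_grid, cv=cv)
    search.fit(X, y)

    pprint(search.grid_scores_)     # per-setting cross-validation scores (old API)
    print(search.best_params_)
    print(search.best_score_)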

Mat Kelcey

Shared publicly

[360° photo]

Tin Lam: nice 360 degrees pic!

Mat Kelcey

Shared publicly

Jupiter and the Sun are the two largest objects in our Solar System, and as they orbit around one another, they create regions where their combined gravity and the orbital motion balance out. These are the Lagrangian points, created whenever two objects orbit one another: places where gravity is such that another small object can follow along in the orbit without being pulled in or out. And since things aren't getting pulled out of there, they get stuck in there as well: and so we have two large clumps of asteroids (and miscellaneous smaller space debris) in Jupiter's orbit. These are called the Trojan Asteroids; the group ahead of Jupiter is known as the Greek Camp, and the group behind it the Trojan Camp, with the asteroids in each camp being named after famous figures of the Trojan War. Together, these two camps hold about as many asteroids as the Asteroid Belt.

Other stable patterns are possible, too: another one is what's called a 3:2 resonance pattern, asteroids whose motion gets confined to a basically triangular shape by the combined pull of Jupiter and the Sun. This group (for Jupiter) is called the Hilda Family, and their route forms a triangle with its three points at the two Lagrange points and at the point on Jupiter's orbit directly opposite it from the Sun. 
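
To put a number on that resonance (my arithmetic, not from the post): a 3:2 resonance means the asteroid makes three trips around the Sun for every two of Jupiter's, so

    T_Hilda = (2/3) × T_Jupiter ≈ (2/3) × 11.86 yr ≈ 7.9 yr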

None of these orbits are perfectly stable, because each of these asteroids is subject to pulling from everything in the Solar System; as a result, an asteroid can shift from the Lagrange points to the Hilda family, and from the Hilda family to the Asteroid Belt (not shown), especially if it runs into something and changes its course. 

The reason that Pluto was demoted from planet to dwarf planet is that we realized that these things are not only numerous, but some of them are quite big. Some things we formerly called asteroids are actually bigger than Pluto, so the naming started to seem a little silly. So our Solar System has, in decreasing order of size, four gas giant planets (Jupiter, Saturn, Neptune and Uranus); four rocky planets (Earth, Venus, Mars, and Mercury); five officially recognized dwarf planets (Eris, Pluto, Haumea, Makemake, and Ceres); and a tremendous number of asteroids. (We suspect that there are actually about 100 dwarf planets, but the job of classifying what's an asteroid and what's actually a planet is still in progress -- see the "dwarf planet" link below if you want to know the details)

Ceres orbits in the Asteroid Belt, about halfway between Mars and Jupiter, just inside the triangle of the Hilda Family; Pluto and Haumea are both in the distant Kuiper Belt, outside the orbit of Neptune but shepherded by its orbit in much the same way that the Hildas are shepherded by Jupiter; Makemake is what's called a "cubewano," living in the Kuiper Belt but unshepherded, orbiting independently; and Eris is part of the Scattered Disc, the even more distant objects whose orbits don't sit nicely in the plane of the Solar System at all, having been kicked out of that plane by (we believe) scattering off large bodies like Jupiter.

But mostly, I wanted to share this to show you how things orbit. This picture comes from the amazing archive at http://sajri.astronomy.cz/asteroidgroups/groups.htm, which has many other such pictures, and comes to me via +Max Rubenacker

More information about all of these things:
http://en.wikipedia.org/wiki/Lagrangian_point
http://en.wikipedia.org/wiki/Trojan_(astronomy)
http://en.wikipedia.org/wiki/Hilda_family
http://en.wikipedia.org/wiki/Dwarf_planet
http://en.wikipedia.org/wiki/Kuiper_belt
http://en.wikipedia.org/wiki/Scattered_disc

#ScienceEveryDay
127 comments on original post
Comment: "I was recently talking with a friend about n-body problems, mainly in the context of 1) the human body and 2) financial markets. Fascinating stuff. I didn't realize that the asteroid belts are where they are because of the Sun/Jupiter gravity interaction. That's really cool."

Mat Kelcey

Shared publicly

a classic piece of reference.. my physical copy is starting to fall apart :/

Geoffrey Phipps: Something I have always wanted to read more about

People
In their circles: 286 people
Have them in circles: 307 people
Matthew Sinclair, Mark Sullivan, Ed Cortis, Nigel Dalton, TechNationNews.com, OJ Reeves, Siamak Faridani, Siva Palakurthi, Louise Lonergan
Work
Occupation
Software Engineer
Skills
Machine learning, natural language processing, information retrieval, distributed systems.
Employment
  • Google
    Software Engineer, present
  • Wavii
    Software Engineer
  • Amazon Web Services
    Software Engineer
  • Lonely Planet
    Software Engineer
  • Sensis
    Software Engineer
  • Distra
    Software Engineer
  • Nokia
    Software Engineer
  • Australian Stock Exchange
    Software Engineer
Basic Information
Gender
Decline to State
Story
Tagline
data nerd wannabe
Introduction
I work in the Machine Intelligence group at Google building as-large-as-I-can-get neural networks for knowledge extraction.
Places
[Map of the places this user has lived]
Currently
San Francisco Bay Area
Previously
seattle - melbourne - calgary - london - sydney - hobart
Mat Kelcey's +1's are the things they like, agree with, or want to recommend.
Aphex Twin - Music on Google Play
market.android.com

Richard David James, best known by his stage name Aphex Twin, is a British electronic musician and composer. He has been described by The Gu

Chess Tactics Pro (Puzzles)
market.android.com

Get better at chess with this large collection of chess puzzles for all levels! This tactic trainer lets you practice in 3 different modes:

Google Search
market.android.com

Google Search app for Android: The fastest, easiest way to find what you need on the web and on your device.* Quickly search the web and you

NetHack
market.android.com

This is an Android port of NetHack: a classic roguelike game originally released in 1987. Main features: * User-friendly interfa

Improving Photo Search: A Step Across the Semantic Gap
googleresearch.blogspot.com

Posted by Chuck Rosenberg, Image Search Team Last month at Google I/O, we showed a major upgrade to the photos experience: you can now easil

Machine Learning - Stanford University
ml-class.org

A bold experiment in distributed education, "Machine Learning" will be offered free and online to students worldwide during the fa

Game Theory
www.game-theory-class.org

Game Theory is a free online class taught by Matthew Jackson and Yoav Shoham.

Probabilistic Graphical Models
www.pgm-class.org

Probabilistic Graphical Models is a free online class taught by Daphne Koller.

RStudio
rstudio.org

News. RStudio v0.94 Available (6/15/2011). RStudio v0.94 is now available. In this release we've made lots of enhancements based on the

Hadoop 0.20.205.0 API
hadoop.apache.org

Frame Alert. This document is designed to be viewed using the frames feature. If you see this message, you are using a non-frame-capable web

Shapecatcher.com: Unicode Character Recognition
shapecatcher.com

You need to find a specific Unicode Character? With Shapecatcher.com you can search through a database of characters by simply drawing your

Duncan & Sons Automotive Service Center
plus.google.com

Duncan & Sons Automotive Service Center hasn't shared anything on this page with you.

Natural Language Processing
www.nlp-class.org

Natural Language Processing is a free online class taught by Chris Manning and Dan Jurafsky.

name value description hadoop.tmp.dir /tmp/hadoop-${user.name} A ...
hadoop.apache.org

name, value, description. hadoop.tmp.dir, /tmp/hadoop-${user.name}, A base for other temporary directories. hadoop.native.lib, true, Should

Apache OpenNLP Developer Documentation
incubator.apache.org

Written and maintained by the Apache OpenNLP Development Community. Version 1.5.2-incubating. Copyright © , The Apache Software Foundation.

ggplot.
had.co.nz

ggplot. An implementation of the grammar of graphics in R. Check out the documentation for ggplot2 - the next generation. ggplot is an imple

ChainMapper (Hadoop 0.20.1 API)
hadoop.apache.org

public class ChainMapper extends Object implements Mapper. The ChainMapper class allows to use multiple Mapper classes within a single Map

Neural net language models - Scholarpedia
www.scholarpedia.org

A language model is a function, or an algorithm for learning such a function, that captures the salient statistical characteristics of the d

tech stuff by mat kelcey
www.matpalm.com

my nerd blog. latent semantic analysis via the singular value decomposition (for dummies). semi supervised naive bayes. statistical synonyms

brain of mat kelcey
matpalm.com

collocations in wikipedia, part 1. October 19, 2011 at 08:00 PM | categories: nlp, phrase-extraction, collocations | View Comments. collocat

Reviews
1 review (a year ago)